May 27 03:19:29.894956 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:19:29.894981 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:19:29.894992 kernel: BIOS-provided physical RAM map:
May 27 03:19:29.894999 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 03:19:29.895006 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 27 03:19:29.895013 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 27 03:19:29.895021 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 27 03:19:29.895027 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 27 03:19:29.895041 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 27 03:19:29.895048 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 27 03:19:29.895054 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 27 03:19:29.895061 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 27 03:19:29.895068 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 27 03:19:29.895077 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 27 03:19:29.895091 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 27 03:19:29.895101 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 27 03:19:29.895116 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 27 03:19:29.895125 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 27 03:19:29.895135 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 27 03:19:29.895142 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 27 03:19:29.895149 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 27 03:19:29.895159 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 27 03:19:29.895168 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 03:19:29.895178 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:19:29.895187 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 27 03:19:29.895201 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:19:29.895211 kernel: NX (Execute Disable) protection: active
May 27 03:19:29.895220 kernel: APIC: Static calls initialized
May 27 03:19:29.895230 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 27 03:19:29.895240 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 27 03:19:29.895250 kernel: extended physical RAM map:
May 27 03:19:29.895269 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 03:19:29.895279 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 27 03:19:29.895289 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 27 03:19:29.895299 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 27 03:19:29.895308 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 27 03:19:29.895322 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 27 03:19:29.895332 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 27 03:19:29.895342 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 27 03:19:29.895352 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 27 03:19:29.895368 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 27 03:19:29.895378 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 27 03:19:29.895391 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 27 03:19:29.895402 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 27 03:19:29.895412 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 27 03:19:29.895423 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 27 03:19:29.895433 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 27 03:19:29.895444 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 27 03:19:29.895454 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 27 03:19:29.895464 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 27 03:19:29.895474 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 27 03:19:29.895489 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 27 03:19:29.895499 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 27 03:19:29.895509 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 27 03:19:29.895520 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 03:19:29.895530 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:19:29.895540 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 27 03:19:29.895550 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:19:29.895563 kernel: efi: EFI v2.7 by EDK II
May 27 03:19:29.895572 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 27 03:19:29.895579 kernel: random: crng init done
May 27 03:19:29.895589 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 27 03:19:29.895597 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 27 03:19:29.895610 kernel: secureboot: Secure boot disabled
May 27 03:19:29.895618 kernel: SMBIOS 2.8 present.
May 27 03:19:29.895625 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 27 03:19:29.895633 kernel: DMI: Memory slots populated: 1/1
May 27 03:19:29.895640 kernel: Hypervisor detected: KVM
May 27 03:19:29.895648 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:19:29.895658 kernel: kvm-clock: using sched offset of 4957002886 cycles
May 27 03:19:29.895668 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:19:29.895679 kernel: tsc: Detected 2794.748 MHz processor
May 27 03:19:29.895690 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:19:29.895700 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:19:29.895714 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 27 03:19:29.895725 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 03:19:29.895736 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:19:29.895746 kernel: Using GB pages for direct mapping
May 27 03:19:29.895757 kernel: ACPI: Early table checksum verification disabled
May 27 03:19:29.895768 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 27 03:19:29.895779 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 27 03:19:29.895789 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:19:29.895800 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:19:29.895814 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 27 03:19:29.895824 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:19:29.895833 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:19:29.895840 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:19:29.895848 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:19:29.895858 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 27 03:19:29.895868 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 27 03:19:29.895878 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
May 27 03:19:29.895892 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 27 03:19:29.895903 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 27 03:19:29.895931 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 27 03:19:29.895941 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 27 03:19:29.895951 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 27 03:19:29.895960 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 27 03:19:29.895968 kernel: No NUMA configuration found
May 27 03:19:29.895975 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 27 03:19:29.895983 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 27 03:19:29.895991 kernel: Zone ranges:
May 27 03:19:29.896002 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:19:29.896010 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 27 03:19:29.896020 kernel: Normal empty
May 27 03:19:29.896030 kernel: Device empty
May 27 03:19:29.896040 kernel: Movable zone start for each node
May 27 03:19:29.896051 kernel: Early memory node ranges
May 27 03:19:29.896061 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 27 03:19:29.896071 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 27 03:19:29.896086 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 27 03:19:29.896100 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 27 03:19:29.896110 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 27 03:19:29.896120 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 27 03:19:29.896130 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 27 03:19:29.896140 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 27 03:19:29.896150 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 27 03:19:29.896164 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:19:29.896174 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 27 03:19:29.896195 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 27 03:19:29.896203 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:19:29.896211 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 27 03:19:29.896219 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 27 03:19:29.896229 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 27 03:19:29.896237 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 27 03:19:29.896245 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 27 03:19:29.896261 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 03:19:29.896269 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:19:29.896280 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:19:29.896289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 03:19:29.896297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:19:29.896305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:19:29.896313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:19:29.896321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:19:29.896329 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:19:29.896337 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:19:29.896345 kernel: TSC deadline timer available
May 27 03:19:29.896355 kernel: CPU topo: Max. logical packages: 1
May 27 03:19:29.896363 kernel: CPU topo: Max. logical dies: 1
May 27 03:19:29.896371 kernel: CPU topo: Max. dies per package: 1
May 27 03:19:29.896379 kernel: CPU topo: Max. threads per core: 1
May 27 03:19:29.896387 kernel: CPU topo: Num. cores per package: 4
May 27 03:19:29.896395 kernel: CPU topo: Num. threads per package: 4
May 27 03:19:29.896403 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 27 03:19:29.896411 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:19:29.896419 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 03:19:29.896429 kernel: kvm-guest: setup PV sched yield
May 27 03:19:29.896437 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 27 03:19:29.896445 kernel: Booting paravirtualized kernel on KVM
May 27 03:19:29.896453 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:19:29.896461 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 27 03:19:29.896469 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 27 03:19:29.896478 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 27 03:19:29.896486 kernel: pcpu-alloc: [0] 0 1 2 3
May 27 03:19:29.896493 kernel: kvm-guest: PV spinlocks enabled
May 27 03:19:29.896504 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:19:29.896514 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:19:29.896525 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:19:29.896533 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:19:29.896541 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:19:29.896549 kernel: Fallback order for Node 0: 0
May 27 03:19:29.896557 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 27 03:19:29.896565 kernel: Policy zone: DMA32
May 27 03:19:29.896576 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:19:29.896584 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 03:19:29.896592 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:19:29.896603 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:19:29.896613 kernel: Dynamic Preempt: voluntary
May 27 03:19:29.896624 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:19:29.896636 kernel: rcu: RCU event tracing is enabled.
May 27 03:19:29.896647 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 03:19:29.896659 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:19:29.896674 kernel: Rude variant of Tasks RCU enabled.
May 27 03:19:29.896685 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:19:29.896696 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:19:29.896711 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 03:19:29.896723 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:19:29.896733 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:19:29.896744 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:19:29.896756 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 27 03:19:29.896767 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:19:29.896783 kernel: Console: colour dummy device 80x25
May 27 03:19:29.896794 kernel: printk: legacy console [ttyS0] enabled
May 27 03:19:29.896804 kernel: ACPI: Core revision 20240827
May 27 03:19:29.896814 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 03:19:29.896822 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:19:29.896830 kernel: x2apic enabled
May 27 03:19:29.896838 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:19:29.896846 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 03:19:29.896854 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 03:19:29.896865 kernel: kvm-guest: setup PV IPIs
May 27 03:19:29.896873 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 03:19:29.896881 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:19:29.896890 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 27 03:19:29.896898 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:19:29.896921 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 03:19:29.896930 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 03:19:29.896938 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:19:29.896946 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:19:29.896957 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:19:29.896965 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 03:19:29.896973 kernel: RETBleed: Mitigation: untrained return thunk
May 27 03:19:29.896981 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:19:29.896992 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:19:29.897000 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 03:19:29.897009 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 03:19:29.897017 kernel: x86/bugs: return thunk changed
May 27 03:19:29.897025 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 03:19:29.897036 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:19:29.897046 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:19:29.897056 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:19:29.897067 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:19:29.897079 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 03:19:29.897089 kernel: Freeing SMP alternatives memory: 32K
May 27 03:19:29.897101 kernel: pid_max: default: 32768 minimum: 301
May 27 03:19:29.897112 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:19:29.897126 kernel: landlock: Up and running.
May 27 03:19:29.897137 kernel: SELinux: Initializing.
May 27 03:19:29.897148 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:19:29.897159 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:19:29.897170 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 03:19:29.897181 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 03:19:29.897190 kernel: ... version: 0
May 27 03:19:29.897198 kernel: ... bit width: 48
May 27 03:19:29.897206 kernel: ... generic registers: 6
May 27 03:19:29.897214 kernel: ... value mask: 0000ffffffffffff
May 27 03:19:29.897225 kernel: ... max period: 00007fffffffffff
May 27 03:19:29.897233 kernel: ... fixed-purpose events: 0
May 27 03:19:29.897241 kernel: ... event mask: 000000000000003f
May 27 03:19:29.897249 kernel: signal: max sigframe size: 1776
May 27 03:19:29.897267 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:19:29.897276 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:19:29.897296 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:19:29.897309 kernel: smp: Bringing up secondary CPUs ...
May 27 03:19:29.897320 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:19:29.897336 kernel: .... node #0, CPUs: #1 #2 #3
May 27 03:19:29.897347 kernel: smp: Brought up 1 node, 4 CPUs
May 27 03:19:29.897357 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 27 03:19:29.897368 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 137196K reserved, 0K cma-reserved)
May 27 03:19:29.897379 kernel: devtmpfs: initialized
May 27 03:19:29.897388 kernel: x86/mm: Memory block size: 128MB
May 27 03:19:29.897397 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 27 03:19:29.897408 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 27 03:19:29.897419 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 27 03:19:29.897435 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 27 03:19:29.897446 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 27 03:19:29.897457 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 27 03:19:29.897468 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:19:29.897478 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 03:19:29.897486 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:19:29.897494 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:19:29.897502 kernel: audit: initializing netlink subsys (disabled)
May 27 03:19:29.897514 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:19:29.897522 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:19:29.897531 kernel: audit: type=2000 audit(1748315967.309:1): state=initialized audit_enabled=0 res=1
May 27 03:19:29.897548 kernel: cpuidle: using governor menu
May 27 03:19:29.897560 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:19:29.897572 kernel: dca service started, version 1.12.1
May 27 03:19:29.897583 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 27 03:19:29.897595 kernel: PCI: Using configuration type 1 for base access
May 27 03:19:29.897606 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:19:29.897623 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:19:29.897634 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:19:29.897652 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:19:29.897664 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:19:29.897675 kernel: ACPI: Added _OSI(Module Device)
May 27 03:19:29.897686 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:19:29.897697 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:19:29.897707 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:19:29.897717 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:19:29.897733 kernel: ACPI: Interpreter enabled
May 27 03:19:29.897743 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 03:19:29.897754 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:19:29.897765 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:19:29.897776 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:19:29.897787 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 03:19:29.897798 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:19:29.898108 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:19:29.898288 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 03:19:29.898437 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 03:19:29.898449 kernel: PCI host bridge to bus 0000:00
May 27 03:19:29.898577 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:19:29.898690 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:19:29.898841 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:19:29.899016 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 27 03:19:29.899140 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 27 03:19:29.899251 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 27 03:19:29.899375 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:19:29.899550 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 03:19:29.899709 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 03:19:29.899858 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 27 03:19:29.900052 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 27 03:19:29.900228 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 27 03:19:29.900418 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:19:29.900673 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 03:19:29.900846 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 27 03:19:29.901044 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 27 03:19:29.901204 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 27 03:19:29.901392 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 03:19:29.901564 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 27 03:19:29.901725 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 27 03:19:29.901860 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 27 03:19:29.902014 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 03:19:29.902139 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 27 03:19:29.902296 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 27 03:19:29.902484 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 27 03:19:29.902687 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 27 03:19:29.902926 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 03:19:29.903147 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 03:19:29.903339 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 03:19:29.903509 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 27 03:19:29.903673 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 27 03:19:29.903851 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 03:19:29.904068 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 27 03:19:29.904086 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:19:29.904098 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:19:29.904109 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:19:29.904119 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:19:29.904130 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 03:19:29.904141 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 03:19:29.904159 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 03:19:29.904171 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 03:19:29.904182 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 03:19:29.904192 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 03:19:29.904204 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 03:19:29.904215 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 03:19:29.904227 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 03:19:29.904238 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 03:19:29.904249 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 03:19:29.904277 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 03:19:29.904289 kernel: iommu: Default domain type: Translated
May 27 03:19:29.904301 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:19:29.904312 kernel: efivars: Registered efivars operations
May 27 03:19:29.904324 kernel: PCI: Using ACPI for IRQ routing
May 27 03:19:29.904335 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:19:29.904347 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 27 03:19:29.904358 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 27 03:19:29.904369 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 27 03:19:29.904385 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 27 03:19:29.904396 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 27 03:19:29.904407 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 27 03:19:29.904419 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 27 03:19:29.904431 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 27 03:19:29.904606 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 03:19:29.904772 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 03:19:29.904963 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:19:29.904987 kernel: vgaarb: loaded
May 27 03:19:29.904999 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 03:19:29.905011 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 03:19:29.905023 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:19:29.905034 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:19:29.905046 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:19:29.905058 kernel: pnp: PnP ACPI init
May 27 03:19:29.905304 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 27 03:19:29.905332 kernel: pnp: PnP ACPI: found 6 devices
May 27 03:19:29.905344 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:19:29.905357 kernel: NET: Registered PF_INET protocol family
May 27 03:19:29.905368 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:19:29.905381 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 03:19:29.905393 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:19:29.905423 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:19:29.905454 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 03:19:29.905473 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 03:19:29.905485 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:19:29.905496 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:19:29.905508 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:19:29.905520 kernel: NET: Registered PF_XDP protocol family
May 27 03:19:29.905707 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 27 03:19:29.905881 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 27 03:19:29.906067 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:19:29.906219 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:19:29.906388 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:19:29.906626 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 27 03:19:29.906782 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 27 03:19:29.906945 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 27 03:19:29.906961 kernel: PCI: CLS 0 bytes, default 64
May 27 03:19:29.906971 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:19:29.906980 kernel: Initialise system trusted keyrings
May 27 03:19:29.906988 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 03:19:29.907003 kernel: Key type asymmetric registered
May 27 03:19:29.907014 kernel: Asymmetric key parser 'x509' registered
May 27 03:19:29.907023 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:19:29.907032 kernel: io scheduler mq-deadline registered
May 27 03:19:29.907040 kernel: io scheduler kyber registered
May 27 03:19:29.907049 kernel: io scheduler bfq registered
May 27 03:19:29.907060 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:19:29.907069 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 03:19:29.907078 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 03:19:29.907087 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 03:19:29.907095 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:19:29.907106 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:19:29.907123 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:19:29.907134 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:19:29.907145 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:19:29.907340 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 03:19:29.907515 kernel: rtc_cmos
00:04: registered as rtc0 May 27 03:19:29.907669 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T03:19:29 UTC (1748315969) May 27 03:19:29.907809 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 27 03:19:29.907825 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 27 03:19:29.907837 kernel: efifb: probing for efifb May 27 03:19:29.907849 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 27 03:19:29.907861 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 27 03:19:29.907877 kernel: efifb: scrolling: redraw May 27 03:19:29.907888 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 27 03:19:29.907899 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 May 27 03:19:29.907931 kernel: Console: switching to colour frame buffer device 160x50 May 27 03:19:29.907943 kernel: fb0: EFI VGA frame buffer device May 27 03:19:29.907955 kernel: pstore: Using crash dump compression: deflate May 27 03:19:29.907966 kernel: pstore: Registered efi_pstore as persistent store backend May 27 03:19:29.907977 kernel: NET: Registered PF_INET6 protocol family May 27 03:19:29.907988 kernel: Segment Routing with IPv6 May 27 03:19:29.908007 kernel: In-situ OAM (IOAM) with IPv6 May 27 03:19:29.908024 kernel: NET: Registered PF_PACKET protocol family May 27 03:19:29.908036 kernel: Key type dns_resolver registered May 27 03:19:29.908047 kernel: IPI shorthand broadcast: enabled May 27 03:19:29.908059 kernel: sched_clock: Marking stable (3642005800, 172847077)->(3851104405, -36251528) May 27 03:19:29.908070 kernel: registered taskstats version 1 May 27 03:19:29.908081 kernel: Loading compiled-in X.509 certificates May 27 03:19:29.908094 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d' May 27 03:19:29.908104 kernel: Demotion targets for Node 0: null May 27 03:19:29.908120 kernel: Key 
type .fscrypt registered May 27 03:19:29.908131 kernel: Key type fscrypt-provisioning registered May 27 03:19:29.908141 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 03:19:29.908153 kernel: ima: Allocated hash algorithm: sha1 May 27 03:19:29.908165 kernel: ima: No architecture policies found May 27 03:19:29.908177 kernel: clk: Disabling unused clocks May 27 03:19:29.908193 kernel: Warning: unable to open an initial console. May 27 03:19:29.908206 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 03:19:29.908223 kernel: Write protecting the kernel read-only data: 24576k May 27 03:19:29.908234 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 03:19:29.908246 kernel: Run /init as init process May 27 03:19:29.908269 kernel: with arguments: May 27 03:19:29.908281 kernel: /init May 27 03:19:29.908293 kernel: with environment: May 27 03:19:29.908310 kernel: HOME=/ May 27 03:19:29.908322 kernel: TERM=linux May 27 03:19:29.908334 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 03:19:29.908347 systemd[1]: Successfully made /usr/ read-only. May 27 03:19:29.908370 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:19:29.908383 systemd[1]: Detected virtualization kvm. May 27 03:19:29.908396 systemd[1]: Detected architecture x86-64. May 27 03:19:29.908408 systemd[1]: Running in initrd. May 27 03:19:29.908419 systemd[1]: No hostname configured, using default hostname. May 27 03:19:29.908429 systemd[1]: Hostname set to . May 27 03:19:29.908438 systemd[1]: Initializing machine ID from VM UUID. May 27 03:19:29.908451 systemd[1]: Queued start job for default target initrd.target. 
May 27 03:19:29.908460 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:19:29.908470 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:19:29.908480 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 03:19:29.908489 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:19:29.908499 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 03:19:29.908509 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 03:19:29.908523 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 03:19:29.908532 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 03:19:29.908541 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:19:29.908550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:19:29.908559 systemd[1]: Reached target paths.target - Path Units. May 27 03:19:29.908568 systemd[1]: Reached target slices.target - Slice Units. May 27 03:19:29.908577 systemd[1]: Reached target swap.target - Swaps. May 27 03:19:29.908586 systemd[1]: Reached target timers.target - Timer Units. May 27 03:19:29.908597 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:19:29.908607 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:19:29.908616 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 03:19:29.908628 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 27 03:19:29.908637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:19:29.908647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:19:29.908656 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:19:29.908665 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:19:29.908676 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 03:19:29.908685 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:19:29.908697 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 03:19:29.908707 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 03:19:29.908716 systemd[1]: Starting systemd-fsck-usr.service... May 27 03:19:29.908725 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:19:29.908734 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:19:29.908743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:19:29.908753 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 03:19:29.908765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:19:29.908810 systemd-journald[220]: Collecting audit messages is disabled. May 27 03:19:29.908836 systemd[1]: Finished systemd-fsck-usr.service. May 27 03:19:29.908847 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:19:29.908860 systemd-journald[220]: Journal started May 27 03:19:29.908884 systemd-journald[220]: Runtime Journal (/run/log/journal/e22a8dfb880242e796058ec76fd205e4) is 6M, max 48.5M, 42.4M free. 
May 27 03:19:29.904083 systemd-modules-load[222]: Inserted module 'overlay' May 27 03:19:29.926194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:19:29.928931 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:19:29.933523 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 03:19:29.936889 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 03:19:29.937040 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:19:29.940039 kernel: Bridge firewalling registered May 27 03:19:29.937570 systemd-modules-load[222]: Inserted module 'br_netfilter' May 27 03:19:29.942380 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:19:29.943663 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:19:29.945547 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:19:29.947369 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:19:29.958984 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 03:19:29.962652 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:19:29.967580 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:19:29.968534 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:19:29.971623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:19:29.990292 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 27 03:19:29.993200 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 03:19:30.025534 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:19:30.034216 systemd-resolved[259]: Positive Trust Anchors: May 27 03:19:30.034234 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:19:30.034302 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:19:30.037814 systemd-resolved[259]: Defaulting to hostname 'linux'. May 27 03:19:30.039464 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:19:30.044775 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:19:30.144980 kernel: SCSI subsystem initialized May 27 03:19:30.154964 kernel: Loading iSCSI transport class v2.0-870. 
May 27 03:19:30.165964 kernel: iscsi: registered transport (tcp) May 27 03:19:30.193395 kernel: iscsi: registered transport (qla4xxx) May 27 03:19:30.193434 kernel: QLogic iSCSI HBA Driver May 27 03:19:30.217843 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:19:30.242367 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:19:30.251810 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:19:30.313743 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 03:19:30.340000 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 03:19:30.416018 kernel: raid6: avx2x4 gen() 21793 MB/s May 27 03:19:30.433004 kernel: raid6: avx2x2 gen() 21032 MB/s May 27 03:19:30.450336 kernel: raid6: avx2x1 gen() 17701 MB/s May 27 03:19:30.450435 kernel: raid6: using algorithm avx2x4 gen() 21793 MB/s May 27 03:19:30.468362 kernel: raid6: .... xor() 5950 MB/s, rmw enabled May 27 03:19:30.468430 kernel: raid6: using avx2x2 recovery algorithm May 27 03:19:30.494961 kernel: xor: automatically using best checksumming function avx May 27 03:19:30.696961 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 03:19:30.708002 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 03:19:30.710368 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:19:30.747973 systemd-udevd[471]: Using default interface naming scheme 'v255'. May 27 03:19:30.753613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:19:30.756965 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 03:19:30.796209 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation May 27 03:19:30.831953 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 27 03:19:30.836213 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:19:30.928297 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:19:30.938365 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 03:19:30.984305 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 27 03:19:30.984749 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 03:19:30.986947 kernel: cryptd: max_cpu_qlen set to 1000 May 27 03:19:30.989146 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 03:19:30.989190 kernel: GPT:9289727 != 19775487 May 27 03:19:30.989205 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 03:19:30.992368 kernel: GPT:9289727 != 19775487 May 27 03:19:30.992395 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 03:19:30.992420 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:19:31.004942 kernel: libata version 3.00 loaded. May 27 03:19:31.008945 kernel: AES CTR mode by8 optimization enabled May 27 03:19:31.012957 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 27 03:19:31.016808 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:19:31.018209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:19:31.020974 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:19:31.026053 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:19:31.029392 kernel: ahci 0000:00:1f.2: version 3.0 May 27 03:19:31.029616 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 27 03:19:31.027711 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
May 27 03:19:31.038862 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 27 03:19:31.039118 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 27 03:19:31.039284 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 27 03:19:31.045628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:19:31.048014 kernel: scsi host0: ahci May 27 03:19:31.046439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:19:31.052663 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 03:19:31.055071 kernel: scsi host1: ahci May 27 03:19:31.056947 kernel: scsi host2: ahci May 27 03:19:31.059128 kernel: scsi host3: ahci May 27 03:19:31.059381 kernel: scsi host4: ahci May 27 03:19:31.062993 kernel: scsi host5: ahci May 27 03:19:31.063239 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 27 03:19:31.063267 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 27 03:19:31.065932 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 27 03:19:31.068932 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 27 03:19:31.068959 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 27 03:19:31.068975 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 27 03:19:31.086291 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 03:19:31.097876 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 03:19:31.109710 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 27 03:19:31.119035 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 03:19:31.122339 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 03:19:31.123508 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 03:19:31.126958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:19:31.167401 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:19:31.171646 disk-uuid[631]: Primary Header is updated. May 27 03:19:31.171646 disk-uuid[631]: Secondary Entries is updated. May 27 03:19:31.171646 disk-uuid[631]: Secondary Header is updated. May 27 03:19:31.177045 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:19:31.180955 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:19:31.382573 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 27 03:19:31.382668 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 27 03:19:31.382682 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 27 03:19:31.383935 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 27 03:19:31.384945 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 27 03:19:31.385951 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 27 03:19:31.385976 kernel: ata3.00: applying bridge limits May 27 03:19:31.386973 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 27 03:19:31.387075 kernel: ata3.00: configured for UDMA/100 May 27 03:19:31.388945 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 27 03:19:31.439967 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 27 03:19:31.440331 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 27 03:19:31.456968 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 27 03:19:31.899171 
systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 03:19:31.899789 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:19:31.946850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:19:31.947144 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:19:31.948584 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 03:19:31.982384 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 03:19:32.183859 disk-uuid[636]: The operation has completed successfully. May 27 03:19:32.185376 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:19:32.220734 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 03:19:32.220862 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 03:19:32.253000 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 03:19:32.280638 sh[666]: Success May 27 03:19:32.300895 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 03:19:32.300969 kernel: device-mapper: uevent: version 1.0.3 May 27 03:19:32.300987 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 03:19:32.310952 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 27 03:19:32.342003 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 03:19:32.344277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 03:19:32.358246 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 27 03:19:32.364548 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 03:19:32.364577 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (678) May 27 03:19:32.365847 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522 May 27 03:19:32.366747 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 03:19:32.366769 kernel: BTRFS info (device dm-0): using free-space-tree May 27 03:19:32.371584 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 03:19:32.372974 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 03:19:32.374519 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 03:19:32.375312 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 03:19:32.377108 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 03:19:32.399971 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (711) May 27 03:19:32.400032 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:19:32.401947 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:19:32.401970 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:19:32.408957 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:19:32.410238 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 03:19:32.421608 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 27 03:19:32.507502 ignition[768]: Ignition 2.21.0 May 27 03:19:32.507885 ignition[768]: Stage: fetch-offline May 27 03:19:32.507932 ignition[768]: no configs at "/usr/lib/ignition/base.d" May 27 03:19:32.507942 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:19:32.508022 ignition[768]: parsed url from cmdline: "" May 27 03:19:32.508026 ignition[768]: no config URL provided May 27 03:19:32.508031 ignition[768]: reading system config file "/usr/lib/ignition/user.ign" May 27 03:19:32.508040 ignition[768]: no config at "/usr/lib/ignition/user.ign" May 27 03:19:32.508062 ignition[768]: op(1): [started] loading QEMU firmware config module May 27 03:19:32.515810 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:19:32.508067 ignition[768]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 03:19:32.515359 ignition[768]: op(1): [finished] loading QEMU firmware config module May 27 03:19:32.521707 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:19:32.562264 ignition[768]: parsing config with SHA512: 9e39d754c15611502265b4466486eb7496e10c3681a61e47c0bd7b4ce2d7c520a5bd706776ae39110f0789754b1072c4424b30d11ab981bbc18b9f3ead4f956a May 27 03:19:32.566333 unknown[768]: fetched base config from "system" May 27 03:19:32.566348 unknown[768]: fetched user config from "qemu" May 27 03:19:32.570401 ignition[768]: fetch-offline: fetch-offline passed May 27 03:19:32.570489 ignition[768]: Ignition finished successfully May 27 03:19:32.571321 systemd-networkd[856]: lo: Link UP May 27 03:19:32.571327 systemd-networkd[856]: lo: Gained carrier May 27 03:19:32.573297 systemd-networkd[856]: Enumeration completed May 27 03:19:32.573810 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 27 03:19:32.573816 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:19:32.574367 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:19:32.575164 systemd-networkd[856]: eth0: Link UP May 27 03:19:32.575169 systemd-networkd[856]: eth0: Gained carrier May 27 03:19:32.575180 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:19:32.576344 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:19:32.578393 systemd[1]: Reached target network.target - Network. May 27 03:19:32.579475 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 03:19:32.580947 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 03:19:32.599967 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:19:32.637133 ignition[860]: Ignition 2.21.0 May 27 03:19:32.637148 ignition[860]: Stage: kargs May 27 03:19:32.637288 ignition[860]: no configs at "/usr/lib/ignition/base.d" May 27 03:19:32.637299 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:19:32.637980 ignition[860]: kargs: kargs passed May 27 03:19:32.638030 ignition[860]: Ignition finished successfully May 27 03:19:32.643829 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 03:19:32.646369 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 27 03:19:32.672313 ignition[869]: Ignition 2.21.0 May 27 03:19:32.672326 ignition[869]: Stage: disks May 27 03:19:32.672464 ignition[869]: no configs at "/usr/lib/ignition/base.d" May 27 03:19:32.672475 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:19:32.673127 ignition[869]: disks: disks passed May 27 03:19:32.684093 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 03:19:32.673171 ignition[869]: Ignition finished successfully May 27 03:19:32.685660 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 03:19:32.686264 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 03:19:32.686651 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:19:32.691956 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:19:32.694366 systemd[1]: Reached target basic.target - Basic System. May 27 03:19:32.699252 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 03:19:32.740170 systemd-resolved[259]: Detected conflict on linux IN A 10.0.0.89 May 27 03:19:32.740197 systemd-resolved[259]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. May 27 03:19:32.741620 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 03:19:33.062937 systemd-resolved[259]: Detected conflict on linux7 IN A 10.0.0.89 May 27 03:19:33.062958 systemd-resolved[259]: Hostname conflict, changing published hostname from 'linux7' to 'linux11'. May 27 03:19:33.081170 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 03:19:33.084030 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 03:19:33.198965 kernel: EXT4-fs (vda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none. May 27 03:19:33.199750 systemd[1]: Mounted sysroot.mount - /sysroot. 
May 27 03:19:33.200497 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 03:19:33.201901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:19:33.204297 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 03:19:33.205740 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 03:19:33.205785 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 03:19:33.205809 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:19:33.219640 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 03:19:33.223463 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 03:19:33.228601 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (887) May 27 03:19:33.228631 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:19:33.228646 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:19:33.228659 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:19:33.232717 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:19:33.276940 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory May 27 03:19:33.282870 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory May 27 03:19:33.287383 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory May 27 03:19:33.291247 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory May 27 03:19:33.384703 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 03:19:33.387313 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
May 27 03:19:33.389007 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 03:19:33.418088 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 03:19:33.420945 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:19:33.433054 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 03:19:33.448787 ignition[1002]: INFO : Ignition 2.21.0
May 27 03:19:33.448787 ignition[1002]: INFO : Stage: mount
May 27 03:19:33.450696 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:19:33.450696 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:19:33.453110 ignition[1002]: INFO : mount: mount passed
May 27 03:19:33.453906 ignition[1002]: INFO : Ignition finished successfully
May 27 03:19:33.457319 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 03:19:33.460342 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 03:19:33.491374 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:19:33.514269 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1014)
May 27 03:19:33.514301 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:19:33.514313 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:19:33.515160 kernel: BTRFS info (device vda6): using free-space-tree
May 27 03:19:33.519320 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:19:33.552007 ignition[1031]: INFO : Ignition 2.21.0
May 27 03:19:33.552007 ignition[1031]: INFO : Stage: files
May 27 03:19:33.553975 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:19:33.553975 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:19:33.556447 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
May 27 03:19:33.557724 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 03:19:33.557724 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 03:19:33.560746 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 03:19:33.560746 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 03:19:33.560746 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 03:19:33.559338 unknown[1031]: wrote ssh authorized keys file for user: core
May 27 03:19:33.566178 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 27 03:19:33.566178 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 27 03:19:33.604929 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 03:19:33.768513 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 27 03:19:33.768513 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 03:19:33.772949 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 03:19:33.772949 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:19:33.772949 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:19:33.772949 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:19:33.772949 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:19:33.772949 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:19:33.772949 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:19:33.787280 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:19:33.787280 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:19:33.787280 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:19:33.787280 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:19:33.787280 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:19:33.787280 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 27 03:19:34.239171 systemd-networkd[856]: eth0: Gained IPv6LL
May 27 03:19:34.478758 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 03:19:34.874268 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 27 03:19:34.874268 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 03:19:34.879006 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:19:34.911932 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:19:34.911932 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 03:19:34.931118 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 27 03:19:34.931118 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 03:19:34.931118 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 03:19:34.931118 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 27 03:19:34.931118 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 27 03:19:34.947978 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 27 03:19:34.954123 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 27 03:19:34.955742 ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 27 03:19:34.955742 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 27 03:19:34.955742 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 27 03:19:34.955742 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:19:34.955742 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:19:34.955742 ignition[1031]: INFO : files: files passed
May 27 03:19:34.955742 ignition[1031]: INFO : Ignition finished successfully
May 27 03:19:34.959660 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 03:19:34.962696 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 03:19:34.967155 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 03:19:34.991539 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 03:19:34.991704 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 03:19:34.993252 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
May 27 03:19:34.996274 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:19:34.996274 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:19:34.999678 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:19:35.000044 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:19:35.018983 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 03:19:35.022274 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 03:19:35.103987 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 03:19:35.104118 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 03:19:35.105466 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 03:19:35.107844 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 03:19:35.110820 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 03:19:35.111710 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 03:19:35.148093 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:19:35.149847 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 03:19:35.179565 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 03:19:35.179843 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:19:35.183862 systemd[1]: Stopped target timers.target - Timer Units.
May 27 03:19:35.185270 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 03:19:35.185465 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:19:35.191039 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 03:19:35.193465 systemd[1]: Stopped target basic.target - Basic System.
May 27 03:19:35.194687 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 03:19:35.195805 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:19:35.198292 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 03:19:35.200873 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:19:35.203368 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 03:19:35.205873 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:19:35.208218 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 03:19:35.211047 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 03:19:35.214389 systemd[1]: Stopped target swap.target - Swaps.
May 27 03:19:35.215585 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 03:19:35.215747 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:19:35.217683 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 03:19:35.218245 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:19:35.218532 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 03:19:35.224477 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:19:35.225531 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 03:19:35.225676 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 03:19:35.231180 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 03:19:35.231340 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:19:35.232518 systemd[1]: Stopped target paths.target - Path Units.
May 27 03:19:35.234606 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 03:19:35.239995 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:19:35.240214 systemd[1]: Stopped target slices.target - Slice Units.
May 27 03:19:35.243201 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 03:19:35.245040 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 03:19:35.245179 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:19:35.247188 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 03:19:35.247304 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:19:35.249260 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 03:19:35.249419 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:19:35.251397 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 03:19:35.251544 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 03:19:35.259090 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 03:19:35.260834 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 03:19:35.262658 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 03:19:35.262885 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:19:35.265253 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 03:19:35.265469 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:19:35.272999 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 03:19:35.274069 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 03:19:35.290684 ignition[1087]: INFO : Ignition 2.21.0
May 27 03:19:35.290684 ignition[1087]: INFO : Stage: umount
May 27 03:19:35.293429 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:19:35.293429 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:19:35.295936 ignition[1087]: INFO : umount: umount passed
May 27 03:19:35.295936 ignition[1087]: INFO : Ignition finished successfully
May 27 03:19:35.297515 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 03:19:35.298407 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 03:19:35.298561 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 03:19:35.301493 systemd[1]: Stopped target network.target - Network.
May 27 03:19:35.302201 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 03:19:35.302304 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 03:19:35.304099 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 03:19:35.304186 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 03:19:35.307180 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 03:19:35.307254 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 03:19:35.310449 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 03:19:35.310507 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 03:19:35.312645 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 03:19:35.314657 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 03:19:35.319046 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 03:19:35.319244 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 03:19:35.323173 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 03:19:35.323492 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 03:19:35.323549 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:19:35.327208 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 03:19:35.336805 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 03:19:35.336984 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 03:19:35.340963 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 03:19:35.341199 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 03:19:35.342360 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 03:19:35.342406 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:19:35.346443 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 03:19:35.348381 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 03:19:35.348450 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:19:35.350775 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:19:35.350836 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:19:35.354205 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 03:19:35.354266 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 03:19:35.355455 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:19:35.359771 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 03:19:35.370639 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 03:19:35.377100 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:19:35.377545 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 03:19:35.377597 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 03:19:35.379890 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 03:19:35.379950 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:19:35.380370 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 03:19:35.380427 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:19:35.381200 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 03:19:35.381255 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 03:19:35.381961 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 03:19:35.382016 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:19:35.409045 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 03:19:35.411165 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 03:19:35.411239 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:19:35.414821 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 03:19:35.414881 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:19:35.419471 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 03:19:35.419532 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:19:35.423036 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 03:19:35.423093 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:19:35.425692 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:19:35.425754 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:19:35.429706 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 03:19:35.429854 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 03:19:35.433075 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 03:19:35.433226 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 03:19:35.498825 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 03:19:35.498985 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 03:19:35.501112 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 03:19:35.502829 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 03:19:35.502887 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 03:19:35.506181 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 03:19:35.532262 systemd[1]: Switching root.
May 27 03:19:35.570673 systemd-journald[220]: Journal stopped
May 27 03:19:36.882812 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
May 27 03:19:36.882898 kernel: SELinux: policy capability network_peer_controls=1
May 27 03:19:36.882944 kernel: SELinux: policy capability open_perms=1
May 27 03:19:36.882957 kernel: SELinux: policy capability extended_socket_class=1
May 27 03:19:36.882969 kernel: SELinux: policy capability always_check_network=0
May 27 03:19:36.882980 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 03:19:36.882991 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 03:19:36.883003 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 03:19:36.883018 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 03:19:36.883035 kernel: SELinux: policy capability userspace_initial_context=0
May 27 03:19:36.883047 kernel: audit: type=1403 audit(1748315976.019:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 03:19:36.883059 systemd[1]: Successfully loaded SELinux policy in 49.033ms.
May 27 03:19:36.883093 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.492ms.
May 27 03:19:36.883118 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:19:36.883130 systemd[1]: Detected virtualization kvm.
May 27 03:19:36.883143 systemd[1]: Detected architecture x86-64.
May 27 03:19:36.883157 systemd[1]: Detected first boot.
May 27 03:19:36.883169 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:19:36.883182 zram_generator::config[1133]: No configuration found.
May 27 03:19:36.883195 kernel: Guest personality initialized and is inactive
May 27 03:19:36.883207 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 03:19:36.883218 kernel: Initialized host personality
May 27 03:19:36.883229 kernel: NET: Registered PF_VSOCK protocol family
May 27 03:19:36.883241 systemd[1]: Populated /etc with preset unit settings.
May 27 03:19:36.883254 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 03:19:36.883271 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 03:19:36.883288 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 03:19:36.883301 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 03:19:36.883314 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 03:19:36.883326 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 03:19:36.883338 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 03:19:36.883351 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 03:19:36.883364 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 03:19:36.883378 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 03:19:36.883390 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 03:19:36.883402 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 03:19:36.883415 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:19:36.883427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:19:36.883439 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 03:19:36.883451 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 03:19:36.883463 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 03:19:36.883478 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:19:36.883490 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 03:19:36.883502 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:19:36.883514 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:19:36.883526 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 03:19:36.883539 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 03:19:36.883552 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 03:19:36.883565 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 03:19:36.883577 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:19:36.883591 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:19:36.883604 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:19:36.883616 systemd[1]: Reached target swap.target - Swaps.
May 27 03:19:36.883628 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 03:19:36.883640 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 03:19:36.883652 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 03:19:36.883664 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:19:36.883676 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:19:36.883689 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:19:36.883700 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 03:19:36.883714 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 03:19:36.883726 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 03:19:36.883738 systemd[1]: Mounting media.mount - External Media Directory...
May 27 03:19:36.883751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:36.883763 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 03:19:36.883775 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 03:19:36.883787 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 03:19:36.883800 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 03:19:36.883815 systemd[1]: Reached target machines.target - Containers.
May 27 03:19:36.883827 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 03:19:36.883839 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:19:36.883854 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:19:36.883866 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 03:19:36.883878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:19:36.883890 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:19:36.883901 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:19:36.883937 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 03:19:36.883954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:19:36.883971 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 03:19:36.883986 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 03:19:36.884002 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 03:19:36.884019 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 03:19:36.884036 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 03:19:36.884054 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:19:36.884075 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:19:36.884104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:19:36.884121 kernel: fuse: init (API version 7.41)
May 27 03:19:36.884137 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:19:36.884156 kernel: loop: module loaded
May 27 03:19:36.884177 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 03:19:36.884194 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 03:19:36.884212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:19:36.884233 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 03:19:36.884249 systemd[1]: Stopped verity-setup.service.
May 27 03:19:36.884267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:36.884289 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 03:19:36.884305 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 03:19:36.884321 systemd[1]: Mounted media.mount - External Media Directory.
May 27 03:19:36.884336 kernel: ACPI: bus type drm_connector registered
May 27 03:19:36.884351 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 03:19:36.884366 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 03:19:36.884412 systemd-journald[1211]: Collecting audit messages is disabled.
May 27 03:19:36.884446 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 03:19:36.884468 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 03:19:36.884485 systemd-journald[1211]: Journal started
May 27 03:19:36.884515 systemd-journald[1211]: Runtime Journal (/run/log/journal/e22a8dfb880242e796058ec76fd205e4) is 6M, max 48.5M, 42.4M free.
May 27 03:19:36.577956 systemd[1]: Queued start job for default target multi-user.target.
May 27 03:19:36.603323 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 03:19:36.603835 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 03:19:36.889259 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:19:36.890310 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:19:36.892104 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 03:19:36.892337 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 03:19:36.894074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:19:36.894307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:19:36.896103 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:19:36.896332 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:19:36.897901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:19:36.898350 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:19:36.900153 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 03:19:36.900366 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 03:19:36.901980 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:19:36.902239 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:19:36.903879 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:19:36.905537 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:19:36.907323 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 03:19:36.909196 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 03:19:36.926901 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:19:36.930248 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 03:19:36.932639 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 03:19:36.934296 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 03:19:36.934338 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:19:36.937226 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 03:19:36.941324 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 03:19:36.943805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:19:36.945561 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 03:19:36.948868 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 03:19:36.950354 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:19:36.952137 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 03:19:36.953538 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:19:36.955064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:19:36.959136 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 03:19:36.965528 systemd-journald[1211]: Time spent on flushing to /var/log/journal/e22a8dfb880242e796058ec76fd205e4 is 26.319ms for 1069 entries.
May 27 03:19:36.965528 systemd-journald[1211]: System Journal (/var/log/journal/e22a8dfb880242e796058ec76fd205e4) is 8M, max 195.6M, 187.6M free.
May 27 03:19:37.022793 systemd-journald[1211]: Received client request to flush runtime journal.
May 27 03:19:37.022865 kernel: loop0: detected capacity change from 0 to 113872
May 27 03:19:36.970631 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:19:36.974028 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:19:36.975616 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:19:36.993294 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:19:37.001446 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:19:37.021671 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:19:37.024452 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 03:19:37.025074 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
May 27 03:19:37.025101 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
May 27 03:19:37.028465 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:19:37.030992 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 03:19:37.032718 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:19:37.034600 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:19:37.048059 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 03:19:37.055954 kernel: loop1: detected capacity change from 0 to 146240
May 27 03:19:37.070816 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:19:37.084953 kernel: loop2: detected capacity change from 0 to 221472
May 27 03:19:37.154970 kernel: loop3: detected capacity change from 0 to 113872
May 27 03:19:37.175979 kernel: loop4: detected capacity change from 0 to 146240
May 27 03:19:37.185952 kernel: loop5: detected capacity change from 0 to 221472
May 27 03:19:37.186674 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 03:19:37.189447 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:19:37.198849 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 03:19:37.199636 (sd-merge)[1275]: Merged extensions into '/usr'.
May 27 03:19:37.205338 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 03:19:37.205466 systemd[1]: Reloading...
May 27 03:19:37.214654 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
May 27 03:19:37.214674 systemd-tmpfiles[1277]: ACLs are not supported, ignoring.
May 27 03:19:37.275950 zram_generator::config[1308]: No configuration found.
May 27 03:19:37.388172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:19:37.393605 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:19:37.491157 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:19:37.491514 systemd[1]: Reloading finished in 285 ms.
May 27 03:19:37.526606 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:19:37.528209 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:19:37.529814 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:19:37.547530 systemd[1]: Starting ensure-sysext.service...
May 27 03:19:37.549688 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:19:37.561642 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)...
May 27 03:19:37.561660 systemd[1]: Reloading...
May 27 03:19:37.575617 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:19:37.575668 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:19:37.576013 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:19:37.576285 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:19:37.577153 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:19:37.577441 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
May 27 03:19:37.577542 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
May 27 03:19:37.597570 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:19:37.597586 systemd-tmpfiles[1344]: Skipping /boot
May 27 03:19:37.613997 zram_generator::config[1371]: No configuration found.
May 27 03:19:37.620313 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:19:37.620481 systemd-tmpfiles[1344]: Skipping /boot
May 27 03:19:37.721599 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:19:37.803605 systemd[1]: Reloading finished in 241 ms.
May 27 03:19:37.828629 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 03:19:37.844843 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:19:37.854495 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:19:37.856995 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 03:19:37.880196 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 03:19:37.885110 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:19:37.889055 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:19:37.893160 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 03:19:37.897732 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:37.897928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:19:37.903800 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:19:37.907630 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:19:37.911151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:19:37.912815 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:19:37.913027 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:19:37.916309 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 03:19:37.917526 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:37.920188 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 03:19:37.922631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:19:37.923289 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:19:37.925638 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:19:37.926021 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:19:37.928277 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:19:37.928557 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:19:37.939085 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:37.939740 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:19:37.942702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:19:37.953423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:19:37.955698 systemd-udevd[1417]: Using default interface naming scheme 'v255'.
May 27 03:19:37.956991 augenrules[1445]: No rules
May 27 03:19:37.956575 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:19:37.958118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:19:37.958305 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:19:37.960990 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 03:19:37.962346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:37.964137 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:19:37.972382 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:19:37.975258 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 03:19:37.977840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:19:37.978270 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:19:37.980606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:19:37.981058 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:19:37.983543 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 03:19:37.987106 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:19:37.992172 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:19:37.994474 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 03:19:37.996303 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:19:38.008926 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 03:19:38.028750 systemd[1]: Finished ensure-sysext.service.
May 27 03:19:38.034523 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:38.038035 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:19:38.039185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:19:38.042035 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:19:38.047753 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:19:38.050139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:19:38.055880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:19:38.057171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:19:38.057216 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:19:38.060036 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:19:38.067097 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 03:19:38.069129 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 03:19:38.069159 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:19:38.069950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:19:38.070309 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:19:38.072750 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:19:38.073008 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:19:38.074718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:19:38.074959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:19:38.086040 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:19:38.095688 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:19:38.097982 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:19:38.107084 augenrules[1492]: /sbin/augenrules: No change
May 27 03:19:38.115248 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:19:38.120281 augenrules[1526]: No rules
May 27 03:19:38.123582 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:19:38.124387 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:19:38.126889 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 03:19:38.140315 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 03:19:38.143370 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 03:19:38.165954 kernel: mousedev: PS/2 mouse device common for all mice
May 27 03:19:38.168444 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 03:19:38.185478 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 27 03:19:38.188943 kernel: ACPI: button: Power Button [PWRF]
May 27 03:19:38.215094 systemd-resolved[1414]: Positive Trust Anchors:
May 27 03:19:38.215117 systemd-resolved[1414]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:19:38.215148 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:19:38.220929 systemd-resolved[1414]: Defaulting to hostname 'linux'.
May 27 03:19:38.223891 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:19:38.225562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:19:38.230620 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 27 03:19:38.231077 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 27 03:19:38.231282 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 27 03:19:38.258492 systemd-networkd[1501]: lo: Link UP
May 27 03:19:38.258509 systemd-networkd[1501]: lo: Gained carrier
May 27 03:19:38.261363 systemd-networkd[1501]: Enumeration completed
May 27 03:19:38.261486 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:19:38.263083 systemd[1]: Reached target network.target - Network.
May 27 03:19:38.266257 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 03:19:38.270148 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:19:38.270163 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:19:38.270952 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 03:19:38.272573 systemd-networkd[1501]: eth0: Link UP
May 27 03:19:38.272796 systemd-networkd[1501]: eth0: Gained carrier
May 27 03:19:38.272814 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:19:38.286973 systemd-networkd[1501]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 03:19:38.301900 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 03:19:38.304140 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 03:19:39.702485 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 27 03:19:39.702537 systemd-timesyncd[1504]: Initial clock synchronization to Tue 2025-05-27 03:19:39.702389 UTC.
May 27 03:19:39.703785 systemd-resolved[1414]: Clock change detected. Flushing caches.
May 27 03:19:39.704694 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:19:39.706308 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 03:19:39.708071 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 03:19:39.710243 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 03:19:39.712349 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 03:19:39.714207 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 03:19:39.714249 systemd[1]: Reached target paths.target - Path Units.
May 27 03:19:39.715470 systemd[1]: Reached target time-set.target - System Time Set.
May 27 03:19:39.718563 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 03:19:39.721492 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 03:19:39.723266 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:19:39.726181 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 03:19:39.730424 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 03:19:39.739700 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 03:19:39.743223 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 03:19:39.744759 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 03:19:39.791486 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 03:19:39.794296 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 03:19:39.798718 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 03:19:39.810399 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:19:39.813198 systemd[1]: Reached target basic.target - Basic System.
May 27 03:19:39.814455 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 03:19:39.814587 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 03:19:39.822223 kernel: kvm_amd: TSC scaling supported
May 27 03:19:39.822310 kernel: kvm_amd: Nested Virtualization enabled
May 27 03:19:39.822353 kernel: kvm_amd: Nested Paging enabled
May 27 03:19:39.822367 kernel: kvm_amd: LBR virtualization supported
May 27 03:19:39.823027 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 03:19:39.826012 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 27 03:19:39.826057 kernel: kvm_amd: Virtual GIF supported
May 27 03:19:39.826215 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 03:19:39.832343 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 03:19:39.840295 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 03:19:39.844261 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 03:19:39.844367 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 03:19:39.845959 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 03:19:39.850476 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 03:19:39.852377 jq[1564]: false
May 27 03:19:39.854032 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 03:19:39.858439 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 03:19:39.862515 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 03:19:39.871701 extend-filesystems[1565]: Found loop3
May 27 03:19:39.871701 extend-filesystems[1565]: Found loop4
May 27 03:19:39.874151 extend-filesystems[1565]: Found loop5
May 27 03:19:39.874151 extend-filesystems[1565]: Found sr0
May 27 03:19:39.874151 extend-filesystems[1565]: Found vda
May 27 03:19:39.874151 extend-filesystems[1565]: Found vda1
May 27 03:19:39.874151 extend-filesystems[1565]: Found vda2
May 27 03:19:39.874151 extend-filesystems[1565]: Found vda3
May 27 03:19:39.874151 extend-filesystems[1565]: Found usr
May 27 03:19:39.874151 extend-filesystems[1565]: Found vda4
May 27 03:19:39.874151 extend-filesystems[1565]: Found vda6
May 27 03:19:39.874151 extend-filesystems[1565]: Found vda7
May 27 03:19:39.874151 extend-filesystems[1565]: Found vda9
May 27 03:19:39.874151 extend-filesystems[1565]: Checking size of /dev/vda9
May 27 03:19:39.886844 oslogin_cache_refresh[1566]: Refreshing passwd entry cache
May 27 03:19:39.878017 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 03:19:39.889083 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache
May 27 03:19:39.880222 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 03:19:39.880971 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 03:19:39.884272 systemd[1]: Starting update-engine.service - Update Engine...
May 27 03:19:39.889293 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 03:19:39.895525 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting
May 27 03:19:39.895517 oslogin_cache_refresh[1566]: Failure getting users, quitting
May 27 03:19:39.895778 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:19:39.895778 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache
May 27 03:19:39.895537 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:19:39.895591 oslogin_cache_refresh[1566]: Refreshing group entry cache
May 27 03:19:39.897814 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 03:19:39.899845 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 03:19:39.901541 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 03:19:39.903260 extend-filesystems[1565]: Resized partition /dev/vda9
May 27 03:19:39.905862 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 03:19:39.907291 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 03:19:39.908760 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting
May 27 03:19:39.908760 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:19:39.907462 oslogin_cache_refresh[1566]: Failure getting groups, quitting
May 27 03:19:39.907477 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:19:39.914528 jq[1576]: true
May 27 03:19:39.914909 extend-filesystems[1588]: resize2fs 1.47.2 (1-Jan-2025)
May 27 03:19:39.912487 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 03:19:39.912751 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 03:19:39.925333 update_engine[1574]: I20250527 03:19:39.925257 1574 main.cc:92] Flatcar Update Engine starting
May 27 03:19:39.928029 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 27 03:19:39.937538 systemd[1]: motdgen.service: Deactivated successfully.
May 27 03:19:39.941614 (ntainerd)[1589]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 03:19:39.941761 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 03:19:39.946965 tar[1584]: linux-amd64/helm
May 27 03:19:39.951339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:19:39.958465 jq[1591]: true
May 27 03:19:39.970883 dbus-daemon[1562]: [system] SELinux support is enabled
May 27 03:19:40.378455 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 27 03:19:40.378547 kernel: EDAC MC: Ver: 3.0.0
May 27 03:19:39.972579 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 03:19:40.379044 update_engine[1574]: I20250527 03:19:39.991209 1574 update_check_scheduler.cc:74] Next update check in 4m38s
May 27 03:19:39.976448 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 03:19:39.976576 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 03:19:39.978320 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 03:19:39.978431 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 03:19:39.992501 systemd[1]: Started update-engine.service - Update Engine. May 27 03:19:40.019120 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 03:19:40.145551 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 03:19:40.173820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:19:40.379145 systemd-logind[1573]: Watching system buttons on /dev/input/event2 (Power Button) May 27 03:19:40.379183 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 03:19:40.381151 systemd-logind[1573]: New seat seat0. May 27 03:19:40.381670 extend-filesystems[1588]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 03:19:40.381670 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 03:19:40.381670 extend-filesystems[1588]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 27 03:19:40.393059 extend-filesystems[1565]: Resized filesystem in /dev/vda9 May 27 03:19:40.395737 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 03:19:40.386718 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 03:19:40.387143 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 03:19:40.389199 systemd[1]: Started systemd-logind.service - User Login Management. May 27 03:19:40.424964 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 03:19:40.430138 bash[1623]: Updated "/home/core/.ssh/authorized_keys" May 27 03:19:40.428484 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 03:19:40.432563 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:19:40.435395 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 27 03:19:40.446192 systemd[1]: issuegen.service: Deactivated successfully. May 27 03:19:40.446539 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 03:19:40.451293 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 03:19:40.472309 containerd[1589]: time="2025-05-27T03:19:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 03:19:40.473359 containerd[1589]: time="2025-05-27T03:19:40.473316767Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 03:19:40.483097 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 03:19:40.483350 containerd[1589]: time="2025-05-27T03:19:40.483308345Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.536µs" May 27 03:19:40.483350 containerd[1589]: time="2025-05-27T03:19:40.483343210Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 03:19:40.483408 containerd[1589]: time="2025-05-27T03:19:40.483360543Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 03:19:40.483565 containerd[1589]: time="2025-05-27T03:19:40.483543296Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 03:19:40.483565 containerd[1589]: time="2025-05-27T03:19:40.483561570Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 03:19:40.483611 containerd[1589]: time="2025-05-27T03:19:40.483584443Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:19:40.483661 containerd[1589]: time="2025-05-27T03:19:40.483645047Z" 
level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:19:40.483661 containerd[1589]: time="2025-05-27T03:19:40.483658361Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:19:40.483952 containerd[1589]: time="2025-05-27T03:19:40.483926534Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:19:40.483952 containerd[1589]: time="2025-05-27T03:19:40.483944007Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:19:40.484018 containerd[1589]: time="2025-05-27T03:19:40.483953625Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:19:40.484018 containerd[1589]: time="2025-05-27T03:19:40.483961590Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 03:19:40.484116 containerd[1589]: time="2025-05-27T03:19:40.484083208Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 03:19:40.484341 containerd[1589]: time="2025-05-27T03:19:40.484317728Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:19:40.484367 containerd[1589]: time="2025-05-27T03:19:40.484350980Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:19:40.484367 containerd[1589]: 
time="2025-05-27T03:19:40.484360128Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 03:19:40.484410 containerd[1589]: time="2025-05-27T03:19:40.484378452Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 03:19:40.484605 containerd[1589]: time="2025-05-27T03:19:40.484587354Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 03:19:40.484667 containerd[1589]: time="2025-05-27T03:19:40.484653488Z" level=info msg="metadata content store policy set" policy=shared May 27 03:19:40.487498 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 03:19:40.491392 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 03:19:40.493063 systemd[1]: Reached target getty.target - Login Prompts. May 27 03:19:40.508004 containerd[1589]: time="2025-05-27T03:19:40.507934368Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 03:19:40.508193 containerd[1589]: time="2025-05-27T03:19:40.508176392Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 03:19:40.508251 containerd[1589]: time="2025-05-27T03:19:40.508237937Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 03:19:40.508301 containerd[1589]: time="2025-05-27T03:19:40.508289363Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 03:19:40.508351 containerd[1589]: time="2025-05-27T03:19:40.508339838Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 03:19:40.508396 containerd[1589]: time="2025-05-27T03:19:40.508385343Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 03:19:40.508466 
containerd[1589]: time="2025-05-27T03:19:40.508452610Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 03:19:40.508516 containerd[1589]: time="2025-05-27T03:19:40.508504657Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 03:19:40.508563 containerd[1589]: time="2025-05-27T03:19:40.508551966Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 03:19:40.508608 containerd[1589]: time="2025-05-27T03:19:40.508597742Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 03:19:40.508653 containerd[1589]: time="2025-05-27T03:19:40.508642506Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 03:19:40.508712 containerd[1589]: time="2025-05-27T03:19:40.508689925Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 03:19:40.508943 containerd[1589]: time="2025-05-27T03:19:40.508923773Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 03:19:40.509043 containerd[1589]: time="2025-05-27T03:19:40.509028980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 03:19:40.509099 containerd[1589]: time="2025-05-27T03:19:40.509087210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 03:19:40.509160 containerd[1589]: time="2025-05-27T03:19:40.509148094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 03:19:40.509207 containerd[1589]: time="2025-05-27T03:19:40.509195763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 03:19:40.509253 containerd[1589]: 
time="2025-05-27T03:19:40.509242451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 03:19:40.509325 containerd[1589]: time="2025-05-27T03:19:40.509311300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 03:19:40.509384 containerd[1589]: time="2025-05-27T03:19:40.509371543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 03:19:40.509441 containerd[1589]: time="2025-05-27T03:19:40.509427348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 03:19:40.509499 containerd[1589]: time="2025-05-27T03:19:40.509485426Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 03:19:40.509551 containerd[1589]: time="2025-05-27T03:19:40.509539167Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 03:19:40.509694 containerd[1589]: time="2025-05-27T03:19:40.509677497Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 03:19:40.509772 containerd[1589]: time="2025-05-27T03:19:40.509754100Z" level=info msg="Start snapshots syncer" May 27 03:19:40.509849 containerd[1589]: time="2025-05-27T03:19:40.509835012Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 03:19:40.510181 containerd[1589]: time="2025-05-27T03:19:40.510147238Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 03:19:40.510362 containerd[1589]: time="2025-05-27T03:19:40.510344918Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 03:19:40.511217 containerd[1589]: time="2025-05-27T03:19:40.511196776Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 03:19:40.511388 containerd[1589]: time="2025-05-27T03:19:40.511363749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 03:19:40.511452 containerd[1589]: time="2025-05-27T03:19:40.511439892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 03:19:40.511502 containerd[1589]: time="2025-05-27T03:19:40.511490917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 03:19:40.511548 containerd[1589]: time="2025-05-27T03:19:40.511536974Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 03:19:40.511618 containerd[1589]: time="2025-05-27T03:19:40.511598670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 03:19:40.511670 containerd[1589]: time="2025-05-27T03:19:40.511659013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 03:19:40.511717 containerd[1589]: time="2025-05-27T03:19:40.511706241Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 03:19:40.511791 containerd[1589]: time="2025-05-27T03:19:40.511772586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 03:19:40.511860 containerd[1589]: time="2025-05-27T03:19:40.511844410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 03:19:40.511933 containerd[1589]: time="2025-05-27T03:19:40.511905856Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 03:19:40.512137 containerd[1589]: time="2025-05-27T03:19:40.512117883Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:19:40.512198 containerd[1589]: time="2025-05-27T03:19:40.512183677Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:19:40.512252 containerd[1589]: time="2025-05-27T03:19:40.512239852Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:19:40.512300 containerd[1589]: time="2025-05-27T03:19:40.512287882Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:19:40.512344 containerd[1589]: time="2025-05-27T03:19:40.512333808Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:19:40.512390 containerd[1589]: time="2025-05-27T03:19:40.512378833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:19:40.512445 containerd[1589]: time="2025-05-27T03:19:40.512432433Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:19:40.512500 containerd[1589]: time="2025-05-27T03:19:40.512489370Z" level=info msg="runtime interface created" May 27 03:19:40.512540 containerd[1589]: time="2025-05-27T03:19:40.512530397Z" level=info msg="created NRI interface" May 27 03:19:40.512583 containerd[1589]: time="2025-05-27T03:19:40.512572516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:19:40.512628 containerd[1589]: time="2025-05-27T03:19:40.512618151Z" level=info msg="Connect containerd service" May 27 03:19:40.512688 containerd[1589]: time="2025-05-27T03:19:40.512677132Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 03:19:40.513625 
containerd[1589]: time="2025-05-27T03:19:40.513601916Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:19:40.618638 containerd[1589]: time="2025-05-27T03:19:40.618466876Z" level=info msg="Start subscribing containerd event" May 27 03:19:40.618638 containerd[1589]: time="2025-05-27T03:19:40.618559620Z" level=info msg="Start recovering state" May 27 03:19:40.619042 containerd[1589]: time="2025-05-27T03:19:40.618713298Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:19:40.619042 containerd[1589]: time="2025-05-27T03:19:40.618717005Z" level=info msg="Start event monitor" May 27 03:19:40.619042 containerd[1589]: time="2025-05-27T03:19:40.618766618Z" level=info msg="Start cni network conf syncer for default" May 27 03:19:40.619042 containerd[1589]: time="2025-05-27T03:19:40.618774933Z" level=info msg="Start streaming server" May 27 03:19:40.619042 containerd[1589]: time="2025-05-27T03:19:40.618786545Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 03:19:40.619042 containerd[1589]: time="2025-05-27T03:19:40.618801693Z" level=info msg="runtime interface starting up..." May 27 03:19:40.619042 containerd[1589]: time="2025-05-27T03:19:40.618809288Z" level=info msg="starting plugins..." May 27 03:19:40.619042 containerd[1589]: time="2025-05-27T03:19:40.618826781Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 03:19:40.619042 containerd[1589]: time="2025-05-27T03:19:40.618832141Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:19:40.619467 systemd[1]: Started containerd.service - containerd container runtime. 
May 27 03:19:40.620857 containerd[1589]: time="2025-05-27T03:19:40.619621952Z" level=info msg="containerd successfully booted in 0.148321s" May 27 03:19:40.757353 tar[1584]: linux-amd64/LICENSE May 27 03:19:40.757495 tar[1584]: linux-amd64/README.md May 27 03:19:40.783887 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 03:19:41.717190 systemd-networkd[1501]: eth0: Gained IPv6LL May 27 03:19:41.721091 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 03:19:41.723083 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:19:41.726127 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 03:19:41.728927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:19:41.731400 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:19:41.760624 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 03:19:41.778825 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 03:19:41.779144 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 03:19:41.789749 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 03:19:42.540598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:19:42.542726 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 03:19:42.544139 systemd[1]: Startup finished in 3.745s (kernel) + 6.346s (initrd) + 5.173s (userspace) = 15.265s. 
May 27 03:19:42.552384 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:19:42.976729 kubelet[1700]: E0527 03:19:42.976569 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:19:42.981432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:19:42.981656 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:19:42.982111 systemd[1]: kubelet.service: Consumed 1.031s CPU time, 265.8M memory peak. May 27 03:19:44.357084 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:19:44.358653 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:40708.service - OpenSSH per-connection server daemon (10.0.0.1:40708). May 27 03:19:44.443036 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 40708 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:44.445387 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:44.453247 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 03:19:44.454483 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 03:19:44.461191 systemd-logind[1573]: New session 1 of user core. May 27 03:19:44.476826 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 03:19:44.480115 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 27 03:19:44.509596 (systemd)[1717]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 03:19:44.512744 systemd-logind[1573]: New session c1 of user core. May 27 03:19:44.678438 systemd[1717]: Queued start job for default target default.target. May 27 03:19:44.687471 systemd[1717]: Created slice app.slice - User Application Slice. May 27 03:19:44.687501 systemd[1717]: Reached target paths.target - Paths. May 27 03:19:44.687546 systemd[1717]: Reached target timers.target - Timers. May 27 03:19:44.689336 systemd[1717]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 03:19:44.701375 systemd[1717]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 03:19:44.701506 systemd[1717]: Reached target sockets.target - Sockets. May 27 03:19:44.701553 systemd[1717]: Reached target basic.target - Basic System. May 27 03:19:44.701594 systemd[1717]: Reached target default.target - Main User Target. May 27 03:19:44.701632 systemd[1717]: Startup finished in 181ms. May 27 03:19:44.702380 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 03:19:44.704558 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 03:19:44.769610 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:40722.service - OpenSSH per-connection server daemon (10.0.0.1:40722). May 27 03:19:44.829690 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 40722 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:44.831193 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:44.836112 systemd-logind[1573]: New session 2 of user core. May 27 03:19:44.851216 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 27 03:19:44.905582 sshd[1730]: Connection closed by 10.0.0.1 port 40722 May 27 03:19:44.905937 sshd-session[1728]: pam_unix(sshd:session): session closed for user core May 27 03:19:44.922908 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:40722.service: Deactivated successfully. May 27 03:19:44.925264 systemd[1]: session-2.scope: Deactivated successfully. May 27 03:19:44.926105 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. May 27 03:19:44.929471 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:40734.service - OpenSSH per-connection server daemon (10.0.0.1:40734). May 27 03:19:44.930210 systemd-logind[1573]: Removed session 2. May 27 03:19:44.984695 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 40734 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:44.986456 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:44.991564 systemd-logind[1573]: New session 3 of user core. May 27 03:19:45.006133 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 03:19:45.055640 sshd[1738]: Connection closed by 10.0.0.1 port 40734 May 27 03:19:45.056025 sshd-session[1736]: pam_unix(sshd:session): session closed for user core May 27 03:19:45.072331 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:40734.service: Deactivated successfully. May 27 03:19:45.074168 systemd[1]: session-3.scope: Deactivated successfully. May 27 03:19:45.074888 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. May 27 03:19:45.077779 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:40750.service - OpenSSH per-connection server daemon (10.0.0.1:40750). May 27 03:19:45.078585 systemd-logind[1573]: Removed session 3. 
May 27 03:19:45.145608 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 40750 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:45.147607 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:45.152524 systemd-logind[1573]: New session 4 of user core. May 27 03:19:45.162154 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 03:19:45.216971 sshd[1746]: Connection closed by 10.0.0.1 port 40750 May 27 03:19:45.217485 sshd-session[1744]: pam_unix(sshd:session): session closed for user core May 27 03:19:45.233713 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:40750.service: Deactivated successfully. May 27 03:19:45.235473 systemd[1]: session-4.scope: Deactivated successfully. May 27 03:19:45.236206 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. May 27 03:19:45.238840 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:40756.service - OpenSSH per-connection server daemon (10.0.0.1:40756). May 27 03:19:45.239438 systemd-logind[1573]: Removed session 4. May 27 03:19:45.288792 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 40756 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:45.290533 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:45.295110 systemd-logind[1573]: New session 5 of user core. May 27 03:19:45.304147 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 03:19:45.363903 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 03:19:45.364346 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:19:45.387220 sudo[1755]: pam_unix(sudo:session): session closed for user root May 27 03:19:45.389015 sshd[1754]: Connection closed by 10.0.0.1 port 40756 May 27 03:19:45.389419 sshd-session[1752]: pam_unix(sshd:session): session closed for user core May 27 03:19:45.402648 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:40756.service: Deactivated successfully. May 27 03:19:45.404321 systemd[1]: session-5.scope: Deactivated successfully. May 27 03:19:45.405166 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. May 27 03:19:45.412858 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:40762.service - OpenSSH per-connection server daemon (10.0.0.1:40762). May 27 03:19:45.413688 systemd-logind[1573]: Removed session 5. May 27 03:19:45.462769 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 40762 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:45.464202 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:45.468843 systemd-logind[1573]: New session 6 of user core. May 27 03:19:45.478126 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 27 03:19:45.533442 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 03:19:45.533765 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:19:45.686336 sudo[1766]: pam_unix(sudo:session): session closed for user root May 27 03:19:45.694340 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 03:19:45.694704 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:19:45.705879 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:19:45.757320 augenrules[1788]: No rules May 27 03:19:45.759411 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:19:45.759728 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:19:45.760946 sudo[1765]: pam_unix(sudo:session): session closed for user root May 27 03:19:45.762584 sshd[1764]: Connection closed by 10.0.0.1 port 40762 May 27 03:19:45.762955 sshd-session[1761]: pam_unix(sshd:session): session closed for user core May 27 03:19:45.775881 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:40762.service: Deactivated successfully. May 27 03:19:45.777960 systemd[1]: session-6.scope: Deactivated successfully. May 27 03:19:45.778795 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. May 27 03:19:45.781796 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:40772.service - OpenSSH per-connection server daemon (10.0.0.1:40772). May 27 03:19:45.782650 systemd-logind[1573]: Removed session 6. May 27 03:19:45.847301 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 40772 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:19:45.848792 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:19:45.853674 systemd-logind[1573]: New session 7 of user core. 
May 27 03:19:45.863188 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 03:19:45.917472 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 03:19:45.917831 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:19:46.245727 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 03:19:46.267324 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 03:19:46.481491 dockerd[1821]: time="2025-05-27T03:19:46.481410077Z" level=info msg="Starting up" May 27 03:19:46.482852 dockerd[1821]: time="2025-05-27T03:19:46.482820923Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 03:19:46.885181 dockerd[1821]: time="2025-05-27T03:19:46.885106542Z" level=info msg="Loading containers: start." May 27 03:19:46.896018 kernel: Initializing XFRM netlink socket May 27 03:19:47.177795 systemd-networkd[1501]: docker0: Link UP May 27 03:19:47.185173 dockerd[1821]: time="2025-05-27T03:19:47.185091000Z" level=info msg="Loading containers: done." 
May 27 03:19:47.203003 dockerd[1821]: time="2025-05-27T03:19:47.202912612Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 03:19:47.203182 dockerd[1821]: time="2025-05-27T03:19:47.203075046Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 03:19:47.203261 dockerd[1821]: time="2025-05-27T03:19:47.203231650Z" level=info msg="Initializing buildkit" May 27 03:19:47.262249 dockerd[1821]: time="2025-05-27T03:19:47.262180670Z" level=info msg="Completed buildkit initialization" May 27 03:19:47.266766 dockerd[1821]: time="2025-05-27T03:19:47.266699615Z" level=info msg="Daemon has completed initialization" May 27 03:19:47.266970 dockerd[1821]: time="2025-05-27T03:19:47.266804782Z" level=info msg="API listen on /run/docker.sock" May 27 03:19:47.267182 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 03:19:47.994007 containerd[1589]: time="2025-05-27T03:19:47.993938087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 27 03:19:48.662619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363605157.mount: Deactivated successfully. 
May 27 03:19:49.544181 containerd[1589]: time="2025-05-27T03:19:49.544097569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:49.545089 containerd[1589]: time="2025-05-27T03:19:49.545050576Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 27 03:19:49.546553 containerd[1589]: time="2025-05-27T03:19:49.546506968Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:49.549688 containerd[1589]: time="2025-05-27T03:19:49.549618663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:49.551010 containerd[1589]: time="2025-05-27T03:19:49.550947445Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.556965886s" May 27 03:19:49.551073 containerd[1589]: time="2025-05-27T03:19:49.551008299Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 27 03:19:49.551566 containerd[1589]: time="2025-05-27T03:19:49.551539045Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 27 03:19:50.714374 containerd[1589]: time="2025-05-27T03:19:50.714294025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:50.714955 containerd[1589]: time="2025-05-27T03:19:50.714895834Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 27 03:19:50.717943 containerd[1589]: time="2025-05-27T03:19:50.717887033Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:50.720351 containerd[1589]: time="2025-05-27T03:19:50.720307543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:50.721349 containerd[1589]: time="2025-05-27T03:19:50.721315864Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.169751982s" May 27 03:19:50.721349 containerd[1589]: time="2025-05-27T03:19:50.721348254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 27 03:19:50.721797 containerd[1589]: time="2025-05-27T03:19:50.721766118Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 27 03:19:52.151082 containerd[1589]: time="2025-05-27T03:19:52.151020089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:52.152231 containerd[1589]: time="2025-05-27T03:19:52.152198580Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 27 03:19:52.153603 containerd[1589]: time="2025-05-27T03:19:52.153576894Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:52.157052 containerd[1589]: time="2025-05-27T03:19:52.156970539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:52.158036 containerd[1589]: time="2025-05-27T03:19:52.157993106Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.436180932s" May 27 03:19:52.158036 containerd[1589]: time="2025-05-27T03:19:52.158033522Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 27 03:19:52.158627 containerd[1589]: time="2025-05-27T03:19:52.158541214Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 27 03:19:53.112049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount291274912.mount: Deactivated successfully. May 27 03:19:53.113201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 03:19:53.114605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:19:53.445072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
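containerd reports pull times above as Go duration strings (`1.556965886s`, `1.169751982s`, `1.436180932s`, and later millisecond forms). A sketch converting those to seconds, assuming only the single-unit `s`/`ms`/`µs` suffixes that appear in logs like this (full Go durations can also combine units, e.g. `1m30s`, which this deliberately does not handle):

```python
def go_duration_to_seconds(d):
    """Convert single-unit Go durations like '1.436180932s' or '849.770032ms'."""
    # Check longer suffixes first so 'ms' is not mistaken for 's'.
    for suffix, scale in (("ms", 1e-3), ("µs", 1e-6), ("us", 1e-6), ("s", 1.0)):
        if d.endswith(suffix):
            return float(d[: -len(suffix)]) * scale
    raise ValueError(f"unrecognized duration: {d}")

pulls = {"kube-apiserver": "1.556965886s",
         "kube-controller-manager": "1.169751982s",
         "kube-scheduler": "1.436180932s"}
seconds = {name: go_duration_to_seconds(d) for name, d in pulls.items()}
```

Together with the `bytes read` figures in the same entries, this gives a rough per-image pull throughput.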
May 27 03:19:53.449862 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:19:53.560864 kubelet[2108]: E0527 03:19:53.560778 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:19:53.567926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:19:53.568158 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:19:53.568545 systemd[1]: kubelet.service: Consumed 235ms CPU time, 109.9M memory peak. May 27 03:19:54.448253 containerd[1589]: time="2025-05-27T03:19:54.448172490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:54.449173 containerd[1589]: time="2025-05-27T03:19:54.449119897Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 27 03:19:54.450481 containerd[1589]: time="2025-05-27T03:19:54.450437328Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:54.452599 containerd[1589]: time="2025-05-27T03:19:54.452554428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:54.453126 containerd[1589]: time="2025-05-27T03:19:54.453079312Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id 
\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 2.294509234s" May 27 03:19:54.453126 containerd[1589]: time="2025-05-27T03:19:54.453124297Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 27 03:19:54.453645 containerd[1589]: time="2025-05-27T03:19:54.453612161Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 03:19:55.011781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215625769.mount: Deactivated successfully. May 27 03:19:55.845201 containerd[1589]: time="2025-05-27T03:19:55.845132589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:55.846569 containerd[1589]: time="2025-05-27T03:19:55.846529188Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 27 03:19:55.847997 containerd[1589]: time="2025-05-27T03:19:55.847944362Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:55.851370 containerd[1589]: time="2025-05-27T03:19:55.851334510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:55.852330 containerd[1589]: time="2025-05-27T03:19:55.852275745Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.398629359s" May 27 03:19:55.852330 containerd[1589]: time="2025-05-27T03:19:55.852323004Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 03:19:55.853003 containerd[1589]: time="2025-05-27T03:19:55.852946613Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:19:56.618353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487729839.mount: Deactivated successfully. May 27 03:19:56.693012 containerd[1589]: time="2025-05-27T03:19:56.692911798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:19:56.693905 containerd[1589]: time="2025-05-27T03:19:56.693828256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 03:19:56.697848 containerd[1589]: time="2025-05-27T03:19:56.697795456Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:19:56.701959 containerd[1589]: time="2025-05-27T03:19:56.701876580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:19:56.702953 containerd[1589]: time="2025-05-27T03:19:56.702748124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 849.770032ms" May 27 03:19:56.702953 containerd[1589]: time="2025-05-27T03:19:56.702790504Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 03:19:56.703488 containerd[1589]: time="2025-05-27T03:19:56.703428119Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 27 03:19:57.260488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626143486.mount: Deactivated successfully. May 27 03:19:59.702652 containerd[1589]: time="2025-05-27T03:19:59.702543992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:59.732808 containerd[1589]: time="2025-05-27T03:19:59.732758605Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 27 03:19:59.748032 containerd[1589]: time="2025-05-27T03:19:59.747967087Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:59.767192 containerd[1589]: time="2025-05-27T03:19:59.767089668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:19:59.767955 containerd[1589]: time="2025-05-27T03:19:59.767906120Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest 
\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.064421364s" May 27 03:19:59.767955 containerd[1589]: time="2025-05-27T03:19:59.767950002Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 27 03:20:01.701488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:20:01.701728 systemd[1]: kubelet.service: Consumed 235ms CPU time, 109.9M memory peak. May 27 03:20:01.704661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:20:01.733092 systemd[1]: Reload requested from client PID 2260 ('systemctl') (unit session-7.scope)... May 27 03:20:01.733108 systemd[1]: Reloading... May 27 03:20:01.847030 zram_generator::config[2305]: No configuration found. May 27 03:20:01.950086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:20:02.077364 systemd[1]: Reloading finished in 343 ms. May 27 03:20:02.160145 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:20:02.160247 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:20:02.160572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:20:02.160638 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.4M memory peak. May 27 03:20:02.162549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:20:02.380044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
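systemd's accounting lines above ("Consumed 235ms CPU time, 109.9M memory peak" vs. "Consumed 163ms CPU time, 98.4M memory peak") are handy when comparing kubelet restarts. A hedged sketch for exactly the `...ms` / `...M` message shape seen here (systemd also emits other units, e.g. `min` and `G`, which this does not cover):

```python
import re

CONSUMED_RE = re.compile(
    r"Consumed (?P<cpu>\d+)ms CPU time, (?P<mem>[\d.]+)M memory peak"
)

def parse_consumed(line):
    """Extract (cpu_ms, mem_peak_mib) from a systemd 'Consumed ...' line, or None."""
    m = CONSUMED_RE.search(line)
    if not m:
        return None
    return int(m["cpu"]), float(m["mem"])

first = parse_consumed("kubelet.service: Consumed 235ms CPU time, 109.9M memory peak.")
second = parse_consumed("kubelet.service: Consumed 163ms CPU time, 98.4M memory peak.")
```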
May 27 03:20:02.398413 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:20:02.435782 kubelet[2350]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:20:02.435782 kubelet[2350]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 27 03:20:02.435782 kubelet[2350]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:20:02.436232 kubelet[2350]: I0527 03:20:02.435815 2350 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:20:02.748333 kubelet[2350]: I0527 03:20:02.748148 2350 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 27 03:20:02.748333 kubelet[2350]: I0527 03:20:02.748181 2350 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:20:02.748555 kubelet[2350]: I0527 03:20:02.748415 2350 server.go:934] "Client rotation is on, will bootstrap in background" May 27 03:20:02.775223 kubelet[2350]: E0527 03:20:02.775167 2350 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" May 27 03:20:02.775850 kubelet[2350]: I0527 
03:20:02.775816 2350 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:20:02.784474 kubelet[2350]: I0527 03:20:02.784439 2350 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:20:02.791180 kubelet[2350]: I0527 03:20:02.791114 2350 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 03:20:02.791418 kubelet[2350]: I0527 03:20:02.791285 2350 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 27 03:20:02.791469 kubelet[2350]: I0527 03:20:02.791442 2350 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:20:02.791666 kubelet[2350]: I0527 03:20:02.791470 2350 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":
{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:20:02.791666 kubelet[2350]: I0527 03:20:02.791668 2350 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:20:02.792064 kubelet[2350]: I0527 03:20:02.791677 2350 container_manager_linux.go:300] "Creating device plugin manager" May 27 03:20:02.792064 kubelet[2350]: I0527 03:20:02.791820 2350 state_mem.go:36] "Initialized new in-memory state store" May 27 03:20:02.793837 kubelet[2350]: I0527 03:20:02.793750 2350 kubelet.go:408] "Attempting to sync node with API server" May 27 03:20:02.793837 kubelet[2350]: I0527 03:20:02.793774 2350 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:20:02.793837 kubelet[2350]: I0527 03:20:02.793814 2350 kubelet.go:314] "Adding apiserver pod source" May 27 03:20:02.793837 kubelet[2350]: I0527 03:20:02.793841 2350 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:20:02.797233 kubelet[2350]: I0527 03:20:02.797196 2350 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:20:02.797671 kubelet[2350]: I0527 03:20:02.797583 2350 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:20:02.797722 kubelet[2350]: W0527 03:20:02.797669 2350 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
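The `nodeConfig={...}` payload logged by container_manager_linux.go above is plain JSON, so the hard-eviction thresholds can be pulled out directly. A sketch against a trimmed copy of that payload (only the fields used here are retained; the rest is elided, not invented):

```python
import json

# Trimmed excerpt of the nodeConfig JSON from the container manager log entry.
node_config = json.loads("""
{"CgroupDriver": "systemd",
 "CgroupRoot": "/",
 "HardEvictionThresholds": [
   {"Signal": "memory.available", "Operator": "LessThan",
    "Value": {"Quantity": "100Mi", "Percentage": 0}},
   {"Signal": "nodefs.available", "Operator": "LessThan",
    "Value": {"Quantity": null, "Percentage": 0.1}}]}
""")

# Index thresholds by eviction signal for easy lookup.
thresholds = {t["Signal"]: t["Value"] for t in node_config["HardEvictionThresholds"]}
```

Reading the values this way confirms the defaults in effect: evict below 100Mi free memory, or below 10% free node filesystem.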
May 27 03:20:02.798006 kubelet[2350]: W0527 03:20:02.797914 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused May 27 03:20:02.798065 kubelet[2350]: E0527 03:20:02.798028 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" May 27 03:20:02.798338 kubelet[2350]: W0527 03:20:02.798247 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused May 27 03:20:02.798338 kubelet[2350]: E0527 03:20:02.798341 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" May 27 03:20:02.800193 kubelet[2350]: I0527 03:20:02.800013 2350 server.go:1274] "Started kubelet" May 27 03:20:02.800845 kubelet[2350]: I0527 03:20:02.800472 2350 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:20:02.802039 kubelet[2350]: I0527 03:20:02.801382 2350 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:20:02.802039 kubelet[2350]: I0527 03:20:02.801460 2350 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 
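The kubelet entries use klog's header format: a severity letter (I/W/E/F), MMDD date, wall time, pid, then `file:line` before the message — e.g. `W0527 03:20:02.797914 2350 reflector.go:561]`. A sketch (assuming that standard layout) that breaks the header apart, which makes it easy to count errors by source file:

```python
import re

# klog header: Lmmdd hh:mm:ss.uuuuuu pid file:line] msg
KLOG_RE = re.compile(
    r"(?P<sev>[IWEF])(?P<date>\d{4}) (?P<time>[\d:.]+)\s+(?P<pid>\d+) "
    r"(?P<src>[\w.]+:\d+)\] (?P<msg>.*)"
)

def parse_klog(line):
    """Return the klog header fields as a dict, or None if the line doesn't match."""
    m = KLOG_RE.search(line)
    return m.groupdict() if m else None

rec = parse_klog('W0527 03:20:02.798247 2350 reflector.go:561] failed to list *v1.Node')
```

Grouping parsed records by `src` would show, for this boot, that nearly all the noise comes from `reflector.go` and `kubelet_node_status.go` — the apiserver simply isn't up yet.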
May 27 03:20:02.802039 kubelet[2350]: I0527 03:20:02.801692 2350 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:20:02.802591 kubelet[2350]: I0527 03:20:02.802551 2350 server.go:449] "Adding debug handlers to kubelet server" May 27 03:20:02.804438 kubelet[2350]: I0527 03:20:02.804401 2350 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 03:20:02.804711 kubelet[2350]: E0527 03:20:02.802965 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184344243e0599a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:20:02.799950245 +0000 UTC m=+0.397123770,LastTimestamp:2025-05-27 03:20:02.799950245 +0000 UTC m=+0.397123770,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:20:02.804835 kubelet[2350]: I0527 03:20:02.804803 2350 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:20:02.805789 kubelet[2350]: E0527 03:20:02.805769 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:20:02.806389 kubelet[2350]: E0527 03:20:02.806363 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms" May 27 03:20:02.806719 kubelet[2350]: 
I0527 03:20:02.806700 2350 factory.go:221] Registration of the systemd container factory successfully May 27 03:20:02.806864 kubelet[2350]: I0527 03:20:02.806846 2350 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:20:02.808784 kubelet[2350]: W0527 03:20:02.808750 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused May 27 03:20:02.809140 kubelet[2350]: E0527 03:20:02.809119 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" May 27 03:20:02.809875 kubelet[2350]: I0527 03:20:02.809844 2350 factory.go:221] Registration of the containerd container factory successfully May 27 03:20:02.810045 kubelet[2350]: I0527 03:20:02.809939 2350 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 03:20:02.810086 kubelet[2350]: I0527 03:20:02.810054 2350 reconciler.go:26] "Reconciler: start to sync state" May 27 03:20:02.810232 kubelet[2350]: E0527 03:20:02.810178 2350 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 03:20:02.824961 kubelet[2350]: I0527 03:20:02.824849 2350 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 27 03:20:02.824961 kubelet[2350]: I0527 03:20:02.824898 2350 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 27 03:20:02.824961 kubelet[2350]: I0527 03:20:02.824918 2350 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:20:02.827515 kubelet[2350]: I0527 03:20:02.827477 2350 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 03:20:02.829164 kubelet[2350]: I0527 03:20:02.829109 2350 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 03:20:02.829164 kubelet[2350]: I0527 03:20:02.829163 2350 status_manager.go:217] "Starting to sync pod status with apiserver"
May 27 03:20:02.829238 kubelet[2350]: I0527 03:20:02.829193 2350 kubelet.go:2321] "Starting kubelet main sync loop"
May 27 03:20:02.829300 kubelet[2350]: E0527 03:20:02.829250 2350 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 03:20:02.907012 kubelet[2350]: E0527 03:20:02.906939 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:02.929738 kubelet[2350]: E0527 03:20:02.929693 2350 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 03:20:03.007286 kubelet[2350]: E0527 03:20:03.007153 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:03.007504 kubelet[2350]: E0527 03:20:03.007460 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms"
May 27 03:20:03.108215 kubelet[2350]: E0527 03:20:03.108141 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:03.130711 kubelet[2350]: E0527 03:20:03.130644 2350 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 03:20:03.209068 kubelet[2350]: E0527 03:20:03.208973 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:03.309954 kubelet[2350]: E0527 03:20:03.309780 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:03.408692 kubelet[2350]: E0527 03:20:03.408624 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms"
May 27 03:20:03.410839 kubelet[2350]: E0527 03:20:03.410781 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:03.511608 kubelet[2350]: E0527 03:20:03.511521 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:03.530932 kubelet[2350]: E0527 03:20:03.530853 2350 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 27 03:20:03.612508 kubelet[2350]: E0527 03:20:03.612347 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:03.713188 kubelet[2350]: E0527 03:20:03.713108 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:03.814100 kubelet[2350]: E0527 03:20:03.814030 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:03.915099 kubelet[2350]: E0527 03:20:03.914893 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:03.917477 kubelet[2350]: W0527 03:20:03.917425 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused
May 27 03:20:03.917530 kubelet[2350]: E0527 03:20:03.917489 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
May 27 03:20:03.922645 kubelet[2350]: W0527 03:20:03.922556 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused
May 27 03:20:03.922732 kubelet[2350]: E0527 03:20:03.922664 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
May 27 03:20:03.936411 kubelet[2350]: W0527 03:20:03.936367 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused
May 27 03:20:03.936411 kubelet[2350]: E0527 03:20:03.936405 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
May 27 03:20:03.963854 kubelet[2350]: I0527 03:20:03.963808 2350 policy_none.go:49] "None policy: Start"
May 27 03:20:03.964346 kubelet[2350]: W0527 03:20:03.964243 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused
May 27 03:20:03.964346 kubelet[2350]: E0527 03:20:03.964305 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
May 27 03:20:03.964673 kubelet[2350]: I0527 03:20:03.964574 2350 memory_manager.go:170] "Starting memorymanager" policy="None"
May 27 03:20:03.964673 kubelet[2350]: I0527 03:20:03.964614 2350 state_mem.go:35] "Initializing new in-memory state store"
May 27 03:20:03.997120 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 03:20:04.012573 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 03:20:04.015313 kubelet[2350]: E0527 03:20:04.015273 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:04.016083 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 03:20:04.031080 kubelet[2350]: I0527 03:20:04.031049 2350 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 03:20:04.031336 kubelet[2350]: I0527 03:20:04.031303 2350 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 03:20:04.031409 kubelet[2350]: I0527 03:20:04.031319 2350 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 03:20:04.031662 kubelet[2350]: I0527 03:20:04.031574 2350 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 03:20:04.033389 kubelet[2350]: E0527 03:20:04.033368 2350 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 27 03:20:04.133728 kubelet[2350]: I0527 03:20:04.133696 2350 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 27 03:20:04.134246 kubelet[2350]: E0527 03:20:04.134186 2350 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
May 27 03:20:04.210226 kubelet[2350]: E0527 03:20:04.210058 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="1.6s"
May 27 03:20:04.335659 kubelet[2350]: I0527 03:20:04.335239 2350 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 27 03:20:04.335659 kubelet[2350]: E0527 03:20:04.335615 2350 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
May 27 03:20:04.341961 systemd[1]: Created slice kubepods-burstable-pod6dd1830d4a3db8e25bfd5a98c09ab949.slice - libcontainer container kubepods-burstable-pod6dd1830d4a3db8e25bfd5a98c09ab949.slice.
May 27 03:20:04.383160 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice.
May 27 03:20:04.398857 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice.
May 27 03:20:04.420440 kubelet[2350]: I0527 03:20:04.420377 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6dd1830d4a3db8e25bfd5a98c09ab949-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6dd1830d4a3db8e25bfd5a98c09ab949\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:20:04.420440 kubelet[2350]: I0527 03:20:04.420436 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:20:04.420600 kubelet[2350]: I0527 03:20:04.420459 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:20:04.420600 kubelet[2350]: I0527 03:20:04.420545 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost"
May 27 03:20:04.420600 kubelet[2350]: I0527 03:20:04.420582 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6dd1830d4a3db8e25bfd5a98c09ab949-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6dd1830d4a3db8e25bfd5a98c09ab949\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:20:04.420677 kubelet[2350]: I0527 03:20:04.420607 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6dd1830d4a3db8e25bfd5a98c09ab949-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6dd1830d4a3db8e25bfd5a98c09ab949\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:20:04.420677 kubelet[2350]: I0527 03:20:04.420634 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:20:04.420677 kubelet[2350]: I0527 03:20:04.420662 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:20:04.420796 kubelet[2350]: I0527 03:20:04.420713 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:20:04.681028 containerd[1589]: time="2025-05-27T03:20:04.680842860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6dd1830d4a3db8e25bfd5a98c09ab949,Namespace:kube-system,Attempt:0,}"
May 27 03:20:04.696765 containerd[1589]: time="2025-05-27T03:20:04.696722700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}"
May 27 03:20:04.702334 containerd[1589]: time="2025-05-27T03:20:04.702267739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}"
May 27 03:20:04.737634 kubelet[2350]: I0527 03:20:04.737599 2350 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 27 03:20:04.738101 kubelet[2350]: E0527 03:20:04.737944 2350 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
May 27 03:20:04.841464 kubelet[2350]: E0527 03:20:04.841135 2350 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
May 27 03:20:04.854698 containerd[1589]: time="2025-05-27T03:20:04.854559450Z" level=info msg="connecting to shim 25603329f6829e0f7da6d27a4602ebea43f0c216410e44384eface350c615e68" address="unix:///run/containerd/s/6cc3ddfe6652cd23b30b0a2debe3ead37d005d1da10bc101573802d9b91d0c37" namespace=k8s.io protocol=ttrpc version=3
May 27 03:20:04.864317 containerd[1589]: time="2025-05-27T03:20:04.864251586Z" level=info msg="connecting to shim 5fe7a3b44e09db99c6e1d0fb5b76d01c912ca215eff92c708f16f53df16d7650" address="unix:///run/containerd/s/1a98d8754762babff03cb4bfafde11ef4f6a0700988537b3ac71ca583a0c1745" namespace=k8s.io protocol=ttrpc version=3
May 27 03:20:04.881900 containerd[1589]: time="2025-05-27T03:20:04.881835142Z" level=info msg="connecting to shim 7d908cb20440af0d7dce81c9e139f8bb03cb3bbf6626f2b6500eec091fc6c4d9" address="unix:///run/containerd/s/a0eb8329029c2fbc6132b76a7233e5230fa4b8bfe63b3784b5b3c4b1bf093d12" namespace=k8s.io protocol=ttrpc version=3
May 27 03:20:04.891242 systemd[1]: Started cri-containerd-25603329f6829e0f7da6d27a4602ebea43f0c216410e44384eface350c615e68.scope - libcontainer container 25603329f6829e0f7da6d27a4602ebea43f0c216410e44384eface350c615e68.
May 27 03:20:04.895962 systemd[1]: Started cri-containerd-5fe7a3b44e09db99c6e1d0fb5b76d01c912ca215eff92c708f16f53df16d7650.scope - libcontainer container 5fe7a3b44e09db99c6e1d0fb5b76d01c912ca215eff92c708f16f53df16d7650.
May 27 03:20:04.911171 systemd[1]: Started cri-containerd-7d908cb20440af0d7dce81c9e139f8bb03cb3bbf6626f2b6500eec091fc6c4d9.scope - libcontainer container 7d908cb20440af0d7dce81c9e139f8bb03cb3bbf6626f2b6500eec091fc6c4d9.
May 27 03:20:04.955696 containerd[1589]: time="2025-05-27T03:20:04.955433024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6dd1830d4a3db8e25bfd5a98c09ab949,Namespace:kube-system,Attempt:0,} returns sandbox id \"25603329f6829e0f7da6d27a4602ebea43f0c216410e44384eface350c615e68\""
May 27 03:20:04.960142 containerd[1589]: time="2025-05-27T03:20:04.960089246Z" level=info msg="CreateContainer within sandbox \"25603329f6829e0f7da6d27a4602ebea43f0c216410e44384eface350c615e68\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 27 03:20:04.964578 containerd[1589]: time="2025-05-27T03:20:04.964533691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fe7a3b44e09db99c6e1d0fb5b76d01c912ca215eff92c708f16f53df16d7650\""
May 27 03:20:04.967949 containerd[1589]: time="2025-05-27T03:20:04.967911145Z" level=info msg="CreateContainer within sandbox \"5fe7a3b44e09db99c6e1d0fb5b76d01c912ca215eff92c708f16f53df16d7650\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 27 03:20:04.975400 containerd[1589]: time="2025-05-27T03:20:04.975348733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d908cb20440af0d7dce81c9e139f8bb03cb3bbf6626f2b6500eec091fc6c4d9\""
May 27 03:20:04.977424 containerd[1589]: time="2025-05-27T03:20:04.977389961Z" level=info msg="CreateContainer within sandbox \"7d908cb20440af0d7dce81c9e139f8bb03cb3bbf6626f2b6500eec091fc6c4d9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 27 03:20:04.996927 containerd[1589]: time="2025-05-27T03:20:04.996848042Z" level=info msg="Container 59cb2ba9614dccf789f40c3951336ee4e8c31ef0ff01ad44726bf462e8f44643: CDI devices from CRI Config.CDIDevices: []"
May 27 03:20:05.001303 containerd[1589]: time="2025-05-27T03:20:05.001263102Z" level=info msg="Container 52b5bd97fa97e066445bfac43f51f70e5177e1ee7933af46b789de390bb2909d: CDI devices from CRI Config.CDIDevices: []"
May 27 03:20:05.032362 containerd[1589]: time="2025-05-27T03:20:05.032278587Z" level=info msg="Container 0d1319a6b1d04ae7e75681539163804c9b020d768d91f58063e8d928891987d3: CDI devices from CRI Config.CDIDevices: []"
May 27 03:20:05.035367 containerd[1589]: time="2025-05-27T03:20:05.035320973Z" level=info msg="CreateContainer within sandbox \"5fe7a3b44e09db99c6e1d0fb5b76d01c912ca215eff92c708f16f53df16d7650\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"52b5bd97fa97e066445bfac43f51f70e5177e1ee7933af46b789de390bb2909d\""
May 27 03:20:05.036129 containerd[1589]: time="2025-05-27T03:20:05.036072402Z" level=info msg="StartContainer for \"52b5bd97fa97e066445bfac43f51f70e5177e1ee7933af46b789de390bb2909d\""
May 27 03:20:05.037463 containerd[1589]: time="2025-05-27T03:20:05.037436981Z" level=info msg="connecting to shim 52b5bd97fa97e066445bfac43f51f70e5177e1ee7933af46b789de390bb2909d" address="unix:///run/containerd/s/1a98d8754762babff03cb4bfafde11ef4f6a0700988537b3ac71ca583a0c1745" protocol=ttrpc version=3
May 27 03:20:05.045356 containerd[1589]: time="2025-05-27T03:20:05.045321036Z" level=info msg="CreateContainer within sandbox \"25603329f6829e0f7da6d27a4602ebea43f0c216410e44384eface350c615e68\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"59cb2ba9614dccf789f40c3951336ee4e8c31ef0ff01ad44726bf462e8f44643\""
May 27 03:20:05.046038 containerd[1589]: time="2025-05-27T03:20:05.045974111Z" level=info msg="StartContainer for \"59cb2ba9614dccf789f40c3951336ee4e8c31ef0ff01ad44726bf462e8f44643\""
May 27 03:20:05.048799 containerd[1589]: time="2025-05-27T03:20:05.048768431Z" level=info msg="connecting to shim 59cb2ba9614dccf789f40c3951336ee4e8c31ef0ff01ad44726bf462e8f44643" address="unix:///run/containerd/s/6cc3ddfe6652cd23b30b0a2debe3ead37d005d1da10bc101573802d9b91d0c37" protocol=ttrpc version=3
May 27 03:20:05.057758 containerd[1589]: time="2025-05-27T03:20:05.057709720Z" level=info msg="CreateContainer within sandbox \"7d908cb20440af0d7dce81c9e139f8bb03cb3bbf6626f2b6500eec091fc6c4d9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0d1319a6b1d04ae7e75681539163804c9b020d768d91f58063e8d928891987d3\""
May 27 03:20:05.058871 containerd[1589]: time="2025-05-27T03:20:05.058539516Z" level=info msg="StartContainer for \"0d1319a6b1d04ae7e75681539163804c9b020d768d91f58063e8d928891987d3\""
May 27 03:20:05.060079 containerd[1589]: time="2025-05-27T03:20:05.060052293Z" level=info msg="connecting to shim 0d1319a6b1d04ae7e75681539163804c9b020d768d91f58063e8d928891987d3" address="unix:///run/containerd/s/a0eb8329029c2fbc6132b76a7233e5230fa4b8bfe63b3784b5b3c4b1bf093d12" protocol=ttrpc version=3
May 27 03:20:05.061209 systemd[1]: Started cri-containerd-52b5bd97fa97e066445bfac43f51f70e5177e1ee7933af46b789de390bb2909d.scope - libcontainer container 52b5bd97fa97e066445bfac43f51f70e5177e1ee7933af46b789de390bb2909d.
May 27 03:20:05.075231 systemd[1]: Started cri-containerd-59cb2ba9614dccf789f40c3951336ee4e8c31ef0ff01ad44726bf462e8f44643.scope - libcontainer container 59cb2ba9614dccf789f40c3951336ee4e8c31ef0ff01ad44726bf462e8f44643.
May 27 03:20:05.094450 systemd[1]: Started cri-containerd-0d1319a6b1d04ae7e75681539163804c9b020d768d91f58063e8d928891987d3.scope - libcontainer container 0d1319a6b1d04ae7e75681539163804c9b020d768d91f58063e8d928891987d3.
May 27 03:20:05.162404 containerd[1589]: time="2025-05-27T03:20:05.162363232Z" level=info msg="StartContainer for \"52b5bd97fa97e066445bfac43f51f70e5177e1ee7933af46b789de390bb2909d\" returns successfully"
May 27 03:20:05.164043 containerd[1589]: time="2025-05-27T03:20:05.164020731Z" level=info msg="StartContainer for \"0d1319a6b1d04ae7e75681539163804c9b020d768d91f58063e8d928891987d3\" returns successfully"
May 27 03:20:05.164393 containerd[1589]: time="2025-05-27T03:20:05.164186251Z" level=info msg="StartContainer for \"59cb2ba9614dccf789f40c3951336ee4e8c31ef0ff01ad44726bf462e8f44643\" returns successfully"
May 27 03:20:05.540001 kubelet[2350]: I0527 03:20:05.539873 2350 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 27 03:20:06.580450 kubelet[2350]: E0527 03:20:06.580389 2350 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 27 03:20:06.690043 kubelet[2350]: I0527 03:20:06.689961 2350 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 27 03:20:06.690043 kubelet[2350]: E0527 03:20:06.690026 2350 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 27 03:20:06.855677 kubelet[2350]: E0527 03:20:06.855516 2350 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 27 03:20:07.799812 kubelet[2350]: I0527 03:20:07.799762 2350 apiserver.go:52] "Watching apiserver"
May 27 03:20:07.811748 kubelet[2350]: I0527 03:20:07.811667 2350 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 27 03:20:11.358414 systemd[1]: Reload requested from client PID 2627 ('systemctl') (unit session-7.scope)...
May 27 03:20:11.358434 systemd[1]: Reloading...
May 27 03:20:11.442202 zram_generator::config[2669]: No configuration found.
May 27 03:20:11.562677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:20:11.699437 systemd[1]: Reloading finished in 340 ms.
May 27 03:20:11.739872 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:20:11.763526 systemd[1]: kubelet.service: Deactivated successfully.
May 27 03:20:11.763839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:20:11.763895 systemd[1]: kubelet.service: Consumed 996ms CPU time, 131.5M memory peak.
May 27 03:20:11.765890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:20:12.001280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:20:12.023382 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 03:20:12.066929 kubelet[2715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:20:12.066929 kubelet[2715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 27 03:20:12.066929 kubelet[2715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:20:12.066929 kubelet[2715]: I0527 03:20:12.066365 2715 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:20:12.072806 kubelet[2715]: I0527 03:20:12.072770 2715 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 27 03:20:12.072806 kubelet[2715]: I0527 03:20:12.072791 2715 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:20:12.073047 kubelet[2715]: I0527 03:20:12.073024 2715 server.go:934] "Client rotation is on, will bootstrap in background"
May 27 03:20:12.074140 kubelet[2715]: I0527 03:20:12.074117 2715 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 27 03:20:12.075848 kubelet[2715]: I0527 03:20:12.075821 2715 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:20:12.083406 kubelet[2715]: I0527 03:20:12.083312 2715 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:20:12.087748 kubelet[2715]: I0527 03:20:12.087719 2715 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:20:12.087859 kubelet[2715]: I0527 03:20:12.087833 2715 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 27 03:20:12.087971 kubelet[2715]: I0527 03:20:12.087947 2715 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:20:12.088142 kubelet[2715]: I0527 03:20:12.087966 2715 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:20:12.088236 kubelet[2715]: I0527 03:20:12.088148 2715 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:20:12.088236 kubelet[2715]: I0527 03:20:12.088157 2715 container_manager_linux.go:300] "Creating device plugin manager"
May 27 03:20:12.088236 kubelet[2715]: I0527 03:20:12.088181 2715 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:20:12.088333 kubelet[2715]: I0527 03:20:12.088274 2715 kubelet.go:408] "Attempting to sync node with API server"
May 27 03:20:12.088333 kubelet[2715]: I0527 03:20:12.088284 2715 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:20:12.088333 kubelet[2715]: I0527 03:20:12.088312 2715 kubelet.go:314] "Adding apiserver pod source"
May 27 03:20:12.088333 kubelet[2715]: I0527 03:20:12.088321 2715 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:20:12.090147 kubelet[2715]: I0527 03:20:12.089119 2715 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:20:12.090147 kubelet[2715]: I0527 03:20:12.089475 2715 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 03:20:12.090147 kubelet[2715]: I0527 03:20:12.089844 2715 server.go:1274] "Started kubelet"
May 27 03:20:12.091741 kubelet[2715]: I0527 03:20:12.091687 2715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 03:20:12.093827 kubelet[2715]: I0527 03:20:12.093752 2715 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 27 03:20:12.095181 kubelet[2715]: I0527 03:20:12.095110 2715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 03:20:12.095498 kubelet[2715]: I0527 03:20:12.095464 2715 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 03:20:12.095560 kubelet[2715]: I0527 03:20:12.095130 2715 server.go:449] "Adding debug handlers to kubelet server"
May 27 03:20:12.096092 kubelet[2715]: I0527 03:20:12.096074 2715 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 27 03:20:12.096352 kubelet[2715]: E0527 03:20:12.096292 2715 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 27 03:20:12.096661 kubelet[2715]: I0527 03:20:12.096615 2715 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 27 03:20:12.096816 kubelet[2715]: I0527 03:20:12.096769 2715 reconciler.go:26] "Reconciler: start to sync state"
May 27 03:20:12.097815 kubelet[2715]: I0527 03:20:12.097794 2715 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 03:20:12.107517 kubelet[2715]: I0527 03:20:12.107473 2715 factory.go:221] Registration of the systemd container factory successfully
May 27 03:20:12.107669 kubelet[2715]: I0527 03:20:12.107622 2715 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 03:20:12.113604 kubelet[2715]: I0527 03:20:12.113572 2715 factory.go:221] Registration of the containerd container factory successfully
May 27 03:20:12.116276 kubelet[2715]: E0527 03:20:12.116236 2715 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 03:20:12.119745 kubelet[2715]: I0527 03:20:12.119699 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 03:20:12.121076 kubelet[2715]: I0527 03:20:12.121057 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 03:20:12.121168 kubelet[2715]: I0527 03:20:12.121154 2715 status_manager.go:217] "Starting to sync pod status with apiserver"
May 27 03:20:12.121261 kubelet[2715]: I0527 03:20:12.121246 2715 kubelet.go:2321] "Starting kubelet main sync loop"
May 27 03:20:12.121408 kubelet[2715]: E0527 03:20:12.121369 2715 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 03:20:12.154538 kubelet[2715]: I0527 03:20:12.154505 2715 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 27 03:20:12.155042 kubelet[2715]: I0527 03:20:12.154716 2715 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 27 03:20:12.155042 kubelet[2715]: I0527 03:20:12.154742 2715 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:20:12.155042 kubelet[2715]: I0527 03:20:12.154906 2715 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 27 03:20:12.155042 kubelet[2715]: I0527 03:20:12.154918 2715 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 27 03:20:12.155042 kubelet[2715]: I0527 03:20:12.154959 2715 policy_none.go:49] "None policy: Start"
May 27 03:20:12.155511 kubelet[2715]: I0527 03:20:12.155460 2715 memory_manager.go:170] "Starting memorymanager" policy="None"
May 27 03:20:12.155511 kubelet[2715]: I0527 03:20:12.155492 2715 state_mem.go:35] "Initializing new in-memory state store"
May 27 03:20:12.155668 kubelet[2715]: I0527 03:20:12.155611 2715 state_mem.go:75] "Updated machine memory state"
May 27 03:20:12.160555 kubelet[2715]: I0527 03:20:12.160523 2715 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 03:20:12.160820 kubelet[2715]: I0527 03:20:12.160781 2715 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 03:20:12.160820 kubelet[2715]: I0527 03:20:12.160799 2715 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 03:20:12.161045 kubelet[2715]: I0527 03:20:12.161024 2715 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 03:20:12.263099 kubelet[2715]: I0527 03:20:12.262931 2715 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 27 03:20:12.297607 kubelet[2715]: I0527 03:20:12.297536 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:20:12.297607 kubelet[2715]: I0527 03:20:12.297582 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost"
May 27 03:20:12.297607 kubelet[2715]: I0527 03:20:12.297608 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6dd1830d4a3db8e25bfd5a98c09ab949-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6dd1830d4a3db8e25bfd5a98c09ab949\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:20:12.297842 kubelet[2715]: I0527 03:20:12.297628 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6dd1830d4a3db8e25bfd5a98c09ab949-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6dd1830d4a3db8e25bfd5a98c09ab949\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:20:12.297842 kubelet[2715]: I0527 03:20:12.297691 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:20:12.297842 kubelet[2715]: I0527 03:20:12.297713 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:20:12.297842 kubelet[2715]: I0527 03:20:12.297782 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6dd1830d4a3db8e25bfd5a98c09ab949-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6dd1830d4a3db8e25bfd5a98c09ab949\") " pod="kube-system/kube-apiserver-localhost"
May 27 03:20:12.297842 kubelet[2715]: I0527 03:20:12.297803 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:20:12.298039 kubelet[2715]: I0527 03:20:12.297856 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 27 03:20:12.309619 kubelet[2715]: E0527 03:20:12.309329 2715 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 27 03:20:12.312263 kubelet[2715]: I0527 03:20:12.312232 2715 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 27 03:20:12.312345 kubelet[2715]: I0527 03:20:12.312320 2715 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 27 03:20:13.089469 kubelet[2715]: I0527 03:20:13.089420 2715 apiserver.go:52] "Watching apiserver"
May 27 03:20:13.097152 kubelet[2715]: I0527 03:20:13.097084 2715 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 27 03:20:13.225533 kubelet[2715]: I0527 03:20:13.225401 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.225386024 podStartE2EDuration="3.225386024s" podCreationTimestamp="2025-05-27 03:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:20:13.225140053 +0000 UTC m=+1.189215754" watchObservedRunningTime="2025-05-27 03:20:13.225386024 +0000 UTC m=+1.189461725"
May 27 03:20:13.225871 kubelet[2715]: E0527 03:20:13.225604 2715 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 27 03:20:13.226449 kubelet[2715]: E0527 03:20:13.226397 2715 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 27 03:20:13.281528 kubelet[2715]: I0527 03:20:13.280891 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.280870456 podStartE2EDuration="1.280870456s" podCreationTimestamp="2025-05-27 03:20:12 +0000
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:20:13.243222549 +0000 UTC m=+1.207298281" watchObservedRunningTime="2025-05-27 03:20:13.280870456 +0000 UTC m=+1.244946147" May 27 03:20:13.347356 kubelet[2715]: I0527 03:20:13.346820 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.346799105 podStartE2EDuration="1.346799105s" podCreationTimestamp="2025-05-27 03:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:20:13.281446861 +0000 UTC m=+1.245522562" watchObservedRunningTime="2025-05-27 03:20:13.346799105 +0000 UTC m=+1.310874806" May 27 03:20:17.067819 kubelet[2715]: I0527 03:20:17.067769 2715 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:20:17.068528 containerd[1589]: time="2025-05-27T03:20:17.068218652Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 03:20:17.068861 kubelet[2715]: I0527 03:20:17.068638 2715 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:20:17.680481 systemd[1]: Created slice kubepods-besteffort-pod1c787149_ca38_4611_8462_86f80a255aef.slice - libcontainer container kubepods-besteffort-pod1c787149_ca38_4611_8462_86f80a255aef.slice. 
May 27 03:20:17.726713 kubelet[2715]: I0527 03:20:17.726651 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c787149-ca38-4611-8462-86f80a255aef-kube-proxy\") pod \"kube-proxy-ftfhz\" (UID: \"1c787149-ca38-4611-8462-86f80a255aef\") " pod="kube-system/kube-proxy-ftfhz" May 27 03:20:17.726713 kubelet[2715]: I0527 03:20:17.726711 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c787149-ca38-4611-8462-86f80a255aef-xtables-lock\") pod \"kube-proxy-ftfhz\" (UID: \"1c787149-ca38-4611-8462-86f80a255aef\") " pod="kube-system/kube-proxy-ftfhz" May 27 03:20:17.726713 kubelet[2715]: I0527 03:20:17.726728 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c787149-ca38-4611-8462-86f80a255aef-lib-modules\") pod \"kube-proxy-ftfhz\" (UID: \"1c787149-ca38-4611-8462-86f80a255aef\") " pod="kube-system/kube-proxy-ftfhz" May 27 03:20:17.726713 kubelet[2715]: I0527 03:20:17.726744 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfz84\" (UniqueName: \"kubernetes.io/projected/1c787149-ca38-4611-8462-86f80a255aef-kube-api-access-cfz84\") pod \"kube-proxy-ftfhz\" (UID: \"1c787149-ca38-4611-8462-86f80a255aef\") " pod="kube-system/kube-proxy-ftfhz" May 27 03:20:17.832396 kubelet[2715]: E0527 03:20:17.832326 2715 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 03:20:17.832396 kubelet[2715]: E0527 03:20:17.832366 2715 projected.go:194] Error preparing data for projected volume kube-api-access-cfz84 for pod kube-system/kube-proxy-ftfhz: configmap "kube-root-ca.crt" not found May 27 03:20:17.832590 kubelet[2715]: E0527 03:20:17.832444 2715 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c787149-ca38-4611-8462-86f80a255aef-kube-api-access-cfz84 podName:1c787149-ca38-4611-8462-86f80a255aef nodeName:}" failed. No retries permitted until 2025-05-27 03:20:18.332410009 +0000 UTC m=+6.296485710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cfz84" (UniqueName: "kubernetes.io/projected/1c787149-ca38-4611-8462-86f80a255aef-kube-api-access-cfz84") pod "kube-proxy-ftfhz" (UID: "1c787149-ca38-4611-8462-86f80a255aef") : configmap "kube-root-ca.crt" not found May 27 03:20:18.152548 systemd[1]: Created slice kubepods-besteffort-pod1f371cea_b2d4_4664_8508_241e5887e213.slice - libcontainer container kubepods-besteffort-pod1f371cea_b2d4_4664_8508_241e5887e213.slice. May 27 03:20:18.230305 kubelet[2715]: I0527 03:20:18.230253 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f371cea-b2d4-4664-8508-241e5887e213-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-jtk89\" (UID: \"1f371cea-b2d4-4664-8508-241e5887e213\") " pod="tigera-operator/tigera-operator-7c5755cdcb-jtk89" May 27 03:20:18.230305 kubelet[2715]: I0527 03:20:18.230300 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h75gk\" (UniqueName: \"kubernetes.io/projected/1f371cea-b2d4-4664-8508-241e5887e213-kube-api-access-h75gk\") pod \"tigera-operator-7c5755cdcb-jtk89\" (UID: \"1f371cea-b2d4-4664-8508-241e5887e213\") " pod="tigera-operator/tigera-operator-7c5755cdcb-jtk89" May 27 03:20:18.457298 containerd[1589]: time="2025-05-27T03:20:18.457163508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-jtk89,Uid:1f371cea-b2d4-4664-8508-241e5887e213,Namespace:tigera-operator,Attempt:0,}" May 27 03:20:18.573631 containerd[1589]: time="2025-05-27T03:20:18.573578897Z" level=info 
msg="connecting to shim 7dafcf25fa74fa832e3213e696f28b4da5e5a26a437faf7773845a1b1b05eb74" address="unix:///run/containerd/s/c8ab300780b975e86c802a41135023d70a9ad202cfe3062c0c04ac6e9e034bb0" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:18.593129 containerd[1589]: time="2025-05-27T03:20:18.593076664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ftfhz,Uid:1c787149-ca38-4611-8462-86f80a255aef,Namespace:kube-system,Attempt:0,}" May 27 03:20:18.615101 systemd[1]: Started cri-containerd-7dafcf25fa74fa832e3213e696f28b4da5e5a26a437faf7773845a1b1b05eb74.scope - libcontainer container 7dafcf25fa74fa832e3213e696f28b4da5e5a26a437faf7773845a1b1b05eb74. May 27 03:20:18.623939 containerd[1589]: time="2025-05-27T03:20:18.623883188Z" level=info msg="connecting to shim 53f0fb77f83ba8407f265b297caafc6087d51807570b6aa105e20357e9e28926" address="unix:///run/containerd/s/09b39269bdaf9e7bb42a25e29cbd43b254ba85273127653943a85a82a8890e5a" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:18.658238 systemd[1]: Started cri-containerd-53f0fb77f83ba8407f265b297caafc6087d51807570b6aa105e20357e9e28926.scope - libcontainer container 53f0fb77f83ba8407f265b297caafc6087d51807570b6aa105e20357e9e28926. 
May 27 03:20:18.680345 containerd[1589]: time="2025-05-27T03:20:18.680204543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-jtk89,Uid:1f371cea-b2d4-4664-8508-241e5887e213,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7dafcf25fa74fa832e3213e696f28b4da5e5a26a437faf7773845a1b1b05eb74\"" May 27 03:20:18.683908 containerd[1589]: time="2025-05-27T03:20:18.683872401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 03:20:18.693830 containerd[1589]: time="2025-05-27T03:20:18.693787061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ftfhz,Uid:1c787149-ca38-4611-8462-86f80a255aef,Namespace:kube-system,Attempt:0,} returns sandbox id \"53f0fb77f83ba8407f265b297caafc6087d51807570b6aa105e20357e9e28926\"" May 27 03:20:18.696456 containerd[1589]: time="2025-05-27T03:20:18.696399749Z" level=info msg="CreateContainer within sandbox \"53f0fb77f83ba8407f265b297caafc6087d51807570b6aa105e20357e9e28926\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:20:18.710381 containerd[1589]: time="2025-05-27T03:20:18.710266057Z" level=info msg="Container fd550a194e8e0fe9db2245d9c31e4458c0efb145ab6ce30febbe830089f721bd: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:18.721641 containerd[1589]: time="2025-05-27T03:20:18.721593990Z" level=info msg="CreateContainer within sandbox \"53f0fb77f83ba8407f265b297caafc6087d51807570b6aa105e20357e9e28926\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd550a194e8e0fe9db2245d9c31e4458c0efb145ab6ce30febbe830089f721bd\"" May 27 03:20:18.723150 containerd[1589]: time="2025-05-27T03:20:18.723046728Z" level=info msg="StartContainer for \"fd550a194e8e0fe9db2245d9c31e4458c0efb145ab6ce30febbe830089f721bd\"" May 27 03:20:18.725736 containerd[1589]: time="2025-05-27T03:20:18.725701165Z" level=info msg="connecting to shim fd550a194e8e0fe9db2245d9c31e4458c0efb145ab6ce30febbe830089f721bd" 
address="unix:///run/containerd/s/09b39269bdaf9e7bb42a25e29cbd43b254ba85273127653943a85a82a8890e5a" protocol=ttrpc version=3 May 27 03:20:18.754123 systemd[1]: Started cri-containerd-fd550a194e8e0fe9db2245d9c31e4458c0efb145ab6ce30febbe830089f721bd.scope - libcontainer container fd550a194e8e0fe9db2245d9c31e4458c0efb145ab6ce30febbe830089f721bd. May 27 03:20:18.801218 containerd[1589]: time="2025-05-27T03:20:18.801160673Z" level=info msg="StartContainer for \"fd550a194e8e0fe9db2245d9c31e4458c0efb145ab6ce30febbe830089f721bd\" returns successfully" May 27 03:20:22.077851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489257974.mount: Deactivated successfully. May 27 03:20:22.621027 containerd[1589]: time="2025-05-27T03:20:22.620943477Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:22.648496 containerd[1589]: time="2025-05-27T03:20:22.648447051Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 03:20:22.662509 containerd[1589]: time="2025-05-27T03:20:22.662447539Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:22.669738 containerd[1589]: time="2025-05-27T03:20:22.669689268Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:22.670562 containerd[1589]: time="2025-05-27T03:20:22.670508483Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest 
\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 3.986596517s" May 27 03:20:22.670562 containerd[1589]: time="2025-05-27T03:20:22.670547367Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 03:20:22.674032 containerd[1589]: time="2025-05-27T03:20:22.672743346Z" level=info msg="CreateContainer within sandbox \"7dafcf25fa74fa832e3213e696f28b4da5e5a26a437faf7773845a1b1b05eb74\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 03:20:22.712550 containerd[1589]: time="2025-05-27T03:20:22.712495493Z" level=info msg="Container 6b770eb32ee9ac3c0e7ac5b8d24a391c5ec811c2cbfb16e21a118f5d5d254fa8: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:22.749516 containerd[1589]: time="2025-05-27T03:20:22.749453315Z" level=info msg="CreateContainer within sandbox \"7dafcf25fa74fa832e3213e696f28b4da5e5a26a437faf7773845a1b1b05eb74\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6b770eb32ee9ac3c0e7ac5b8d24a391c5ec811c2cbfb16e21a118f5d5d254fa8\"" May 27 03:20:22.750237 containerd[1589]: time="2025-05-27T03:20:22.750061961Z" level=info msg="StartContainer for \"6b770eb32ee9ac3c0e7ac5b8d24a391c5ec811c2cbfb16e21a118f5d5d254fa8\"" May 27 03:20:22.750897 containerd[1589]: time="2025-05-27T03:20:22.750866728Z" level=info msg="connecting to shim 6b770eb32ee9ac3c0e7ac5b8d24a391c5ec811c2cbfb16e21a118f5d5d254fa8" address="unix:///run/containerd/s/c8ab300780b975e86c802a41135023d70a9ad202cfe3062c0c04ac6e9e034bb0" protocol=ttrpc version=3 May 27 03:20:22.822193 systemd[1]: Started cri-containerd-6b770eb32ee9ac3c0e7ac5b8d24a391c5ec811c2cbfb16e21a118f5d5d254fa8.scope - libcontainer container 6b770eb32ee9ac3c0e7ac5b8d24a391c5ec811c2cbfb16e21a118f5d5d254fa8. 
May 27 03:20:22.873406 containerd[1589]: time="2025-05-27T03:20:22.873070324Z" level=info msg="StartContainer for \"6b770eb32ee9ac3c0e7ac5b8d24a391c5ec811c2cbfb16e21a118f5d5d254fa8\" returns successfully" May 27 03:20:23.192155 kubelet[2715]: I0527 03:20:23.191637 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ftfhz" podStartSLOduration=6.191608727 podStartE2EDuration="6.191608727s" podCreationTimestamp="2025-05-27 03:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:20:19.178483953 +0000 UTC m=+7.142559654" watchObservedRunningTime="2025-05-27 03:20:23.191608727 +0000 UTC m=+11.155684438" May 27 03:20:25.280786 update_engine[1574]: I20250527 03:20:25.280692 1574 update_attempter.cc:509] Updating boot flags... May 27 03:20:29.523523 sudo[1800]: pam_unix(sudo:session): session closed for user root May 27 03:20:29.529957 sshd[1799]: Connection closed by 10.0.0.1 port 40772 May 27 03:20:29.531892 sshd-session[1797]: pam_unix(sshd:session): session closed for user core May 27 03:20:29.539450 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:40772.service: Deactivated successfully. May 27 03:20:29.546263 systemd[1]: session-7.scope: Deactivated successfully. May 27 03:20:29.547043 systemd[1]: session-7.scope: Consumed 4.490s CPU time, 222.4M memory peak. May 27 03:20:29.548788 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. May 27 03:20:29.551940 systemd-logind[1573]: Removed session 7. 
May 27 03:20:33.277967 kubelet[2715]: I0527 03:20:33.277872 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-jtk89" podStartSLOduration=11.289886403 podStartE2EDuration="15.27785071s" podCreationTimestamp="2025-05-27 03:20:18 +0000 UTC" firstStartedPulling="2025-05-27 03:20:18.683307044 +0000 UTC m=+6.647382745" lastFinishedPulling="2025-05-27 03:20:22.671271351 +0000 UTC m=+10.635347052" observedRunningTime="2025-05-27 03:20:23.192181314 +0000 UTC m=+11.156257025" watchObservedRunningTime="2025-05-27 03:20:33.27785071 +0000 UTC m=+21.241926411" May 27 03:20:33.290457 systemd[1]: Created slice kubepods-besteffort-pod76bc1144_ce30_4c84_b8f6_42d510828135.slice - libcontainer container kubepods-besteffort-pod76bc1144_ce30_4c84_b8f6_42d510828135.slice. May 27 03:20:33.361222 kubelet[2715]: W0527 03:20:33.361165 2715 reflector.go:561] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object May 27 03:20:33.361384 kubelet[2715]: E0527 03:20:33.361226 2715 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"cni-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 27 03:20:33.361384 kubelet[2715]: W0527 03:20:33.361301 2715 reflector.go:561] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this 
object May 27 03:20:33.361384 kubelet[2715]: E0527 03:20:33.361313 2715 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 27 03:20:33.368805 systemd[1]: Created slice kubepods-besteffort-pod5f6e7bfa_3807_4263_92f2_e692509b92fa.slice - libcontainer container kubepods-besteffort-pod5f6e7bfa_3807_4263_92f2_e692509b92fa.slice. May 27 03:20:33.451486 kubelet[2715]: I0527 03:20:33.451422 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/76bc1144-ce30-4c84-b8f6-42d510828135-typha-certs\") pod \"calico-typha-6bbc68648-mrfk8\" (UID: \"76bc1144-ce30-4c84-b8f6-42d510828135\") " pod="calico-system/calico-typha-6bbc68648-mrfk8" May 27 03:20:33.451486 kubelet[2715]: I0527 03:20:33.451473 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc2cg\" (UniqueName: \"kubernetes.io/projected/76bc1144-ce30-4c84-b8f6-42d510828135-kube-api-access-wc2cg\") pod \"calico-typha-6bbc68648-mrfk8\" (UID: \"76bc1144-ce30-4c84-b8f6-42d510828135\") " pod="calico-system/calico-typha-6bbc68648-mrfk8" May 27 03:20:33.451486 kubelet[2715]: I0527 03:20:33.451509 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5f6e7bfa-3807-4263-92f2-e692509b92fa-cni-bin-dir\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.451770 kubelet[2715]: I0527 03:20:33.451528 2715 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5f6e7bfa-3807-4263-92f2-e692509b92fa-cni-net-dir\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.451770 kubelet[2715]: I0527 03:20:33.451544 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76bc1144-ce30-4c84-b8f6-42d510828135-tigera-ca-bundle\") pod \"calico-typha-6bbc68648-mrfk8\" (UID: \"76bc1144-ce30-4c84-b8f6-42d510828135\") " pod="calico-system/calico-typha-6bbc68648-mrfk8" May 27 03:20:33.451770 kubelet[2715]: I0527 03:20:33.451564 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsdjt\" (UniqueName: \"kubernetes.io/projected/5f6e7bfa-3807-4263-92f2-e692509b92fa-kube-api-access-tsdjt\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.451770 kubelet[2715]: I0527 03:20:33.451592 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5f6e7bfa-3807-4263-92f2-e692509b92fa-policysync\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.451770 kubelet[2715]: I0527 03:20:33.451618 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5f6e7bfa-3807-4263-92f2-e692509b92fa-var-run-calico\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.451889 kubelet[2715]: I0527 03:20:33.451637 2715 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f6e7bfa-3807-4263-92f2-e692509b92fa-xtables-lock\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.451889 kubelet[2715]: I0527 03:20:33.451673 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5f6e7bfa-3807-4263-92f2-e692509b92fa-cni-log-dir\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.451889 kubelet[2715]: I0527 03:20:33.451743 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5f6e7bfa-3807-4263-92f2-e692509b92fa-flexvol-driver-host\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.451889 kubelet[2715]: I0527 03:20:33.451758 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f6e7bfa-3807-4263-92f2-e692509b92fa-lib-modules\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.451889 kubelet[2715]: I0527 03:20:33.451773 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f6e7bfa-3807-4263-92f2-e692509b92fa-var-lib-calico\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.452059 kubelet[2715]: I0527 03:20:33.451797 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f6e7bfa-3807-4263-92f2-e692509b92fa-tigera-ca-bundle\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.452059 kubelet[2715]: I0527 03:20:33.451819 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5f6e7bfa-3807-4263-92f2-e692509b92fa-node-certs\") pod \"calico-node-s6n2p\" (UID: \"5f6e7bfa-3807-4263-92f2-e692509b92fa\") " pod="calico-system/calico-node-s6n2p" May 27 03:20:33.470026 kubelet[2715]: E0527 03:20:33.469909 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6cj4" podUID="a9973ce5-5b08-44e5-b570-c0fc70d71d29" May 27 03:20:33.554140 kubelet[2715]: E0527 03:20:33.554021 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.554140 kubelet[2715]: W0527 03:20:33.554052 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.554140 kubelet[2715]: E0527 03:20:33.554098 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.554331 kubelet[2715]: E0527 03:20:33.554287 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.554331 kubelet[2715]: W0527 03:20:33.554298 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.554331 kubelet[2715]: E0527 03:20:33.554308 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.556005 kubelet[2715]: E0527 03:20:33.554547 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.556005 kubelet[2715]: W0527 03:20:33.554561 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.556005 kubelet[2715]: E0527 03:20:33.554573 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.556005 kubelet[2715]: E0527 03:20:33.554943 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.556005 kubelet[2715]: W0527 03:20:33.554954 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.556005 kubelet[2715]: E0527 03:20:33.554965 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.556005 kubelet[2715]: E0527 03:20:33.555310 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.556005 kubelet[2715]: W0527 03:20:33.555321 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.556005 kubelet[2715]: E0527 03:20:33.555417 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.556005 kubelet[2715]: E0527 03:20:33.555784 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.556290 kubelet[2715]: W0527 03:20:33.555795 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.556290 kubelet[2715]: E0527 03:20:33.555806 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.556290 kubelet[2715]: E0527 03:20:33.556120 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.556290 kubelet[2715]: W0527 03:20:33.556132 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.556290 kubelet[2715]: E0527 03:20:33.556144 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.597610 containerd[1589]: time="2025-05-27T03:20:33.597565217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bbc68648-mrfk8,Uid:76bc1144-ce30-4c84-b8f6-42d510828135,Namespace:calico-system,Attempt:0,}" May 27 03:20:33.653908 kubelet[2715]: E0527 03:20:33.653753 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.653908 kubelet[2715]: W0527 03:20:33.653779 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.653908 kubelet[2715]: E0527 03:20:33.653801 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.653908 kubelet[2715]: I0527 03:20:33.653832 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a9973ce5-5b08-44e5-b570-c0fc70d71d29-kubelet-dir\") pod \"csi-node-driver-k6cj4\" (UID: \"a9973ce5-5b08-44e5-b570-c0fc70d71d29\") " pod="calico-system/csi-node-driver-k6cj4" May 27 03:20:33.654138 kubelet[2715]: E0527 03:20:33.654050 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.654138 kubelet[2715]: W0527 03:20:33.654060 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.654138 kubelet[2715]: E0527 03:20:33.654081 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.654138 kubelet[2715]: I0527 03:20:33.654094 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a9973ce5-5b08-44e5-b570-c0fc70d71d29-varrun\") pod \"csi-node-driver-k6cj4\" (UID: \"a9973ce5-5b08-44e5-b570-c0fc70d71d29\") " pod="calico-system/csi-node-driver-k6cj4" May 27 03:20:33.654617 kubelet[2715]: E0527 03:20:33.654398 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.654617 kubelet[2715]: W0527 03:20:33.654414 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.654617 kubelet[2715]: E0527 03:20:33.654431 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.654617 kubelet[2715]: I0527 03:20:33.654446 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a9973ce5-5b08-44e5-b570-c0fc70d71d29-socket-dir\") pod \"csi-node-driver-k6cj4\" (UID: \"a9973ce5-5b08-44e5-b570-c0fc70d71d29\") " pod="calico-system/csi-node-driver-k6cj4" May 27 03:20:33.655122 kubelet[2715]: E0527 03:20:33.655069 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.655122 kubelet[2715]: W0527 03:20:33.655096 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.655250 kubelet[2715]: E0527 03:20:33.655213 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.656006 kubelet[2715]: E0527 03:20:33.655458 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.656006 kubelet[2715]: W0527 03:20:33.655472 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.656006 kubelet[2715]: E0527 03:20:33.655491 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.656006 kubelet[2715]: E0527 03:20:33.655707 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.656006 kubelet[2715]: W0527 03:20:33.655722 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.656006 kubelet[2715]: E0527 03:20:33.655743 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.656006 kubelet[2715]: I0527 03:20:33.655781 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a9973ce5-5b08-44e5-b570-c0fc70d71d29-registration-dir\") pod \"csi-node-driver-k6cj4\" (UID: \"a9973ce5-5b08-44e5-b570-c0fc70d71d29\") " pod="calico-system/csi-node-driver-k6cj4" May 27 03:20:33.656192 kubelet[2715]: E0527 03:20:33.656131 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.656192 kubelet[2715]: W0527 03:20:33.656143 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.656192 kubelet[2715]: E0527 03:20:33.656180 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.657149 kubelet[2715]: I0527 03:20:33.657124 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbmsp\" (UniqueName: \"kubernetes.io/projected/a9973ce5-5b08-44e5-b570-c0fc70d71d29-kube-api-access-jbmsp\") pod \"csi-node-driver-k6cj4\" (UID: \"a9973ce5-5b08-44e5-b570-c0fc70d71d29\") " pod="calico-system/csi-node-driver-k6cj4" May 27 03:20:33.657219 kubelet[2715]: E0527 03:20:33.657204 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.657219 kubelet[2715]: W0527 03:20:33.657212 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.657287 kubelet[2715]: E0527 03:20:33.657226 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.657435 kubelet[2715]: E0527 03:20:33.657414 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.657435 kubelet[2715]: W0527 03:20:33.657427 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.657435 kubelet[2715]: E0527 03:20:33.657437 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.657912 containerd[1589]: time="2025-05-27T03:20:33.657611668Z" level=info msg="connecting to shim 9982288b84f9ef62dc5a2b781ec6bffe3c5fb855bb1c26818921a30ef88b497d" address="unix:///run/containerd/s/513d132877b901776117746c0e108fdb5501d37cd7ed480926ad31747cfd5a53" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:33.658125 kubelet[2715]: E0527 03:20:33.658107 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.658125 kubelet[2715]: W0527 03:20:33.658124 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.658191 kubelet[2715]: E0527 03:20:33.658135 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.658427 kubelet[2715]: E0527 03:20:33.658409 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.658427 kubelet[2715]: W0527 03:20:33.658424 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.658493 kubelet[2715]: E0527 03:20:33.658442 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.658747 kubelet[2715]: E0527 03:20:33.658728 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.658747 kubelet[2715]: W0527 03:20:33.658743 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.658807 kubelet[2715]: E0527 03:20:33.658756 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.686266 systemd[1]: Started cri-containerd-9982288b84f9ef62dc5a2b781ec6bffe3c5fb855bb1c26818921a30ef88b497d.scope - libcontainer container 9982288b84f9ef62dc5a2b781ec6bffe3c5fb855bb1c26818921a30ef88b497d. May 27 03:20:33.760299 kubelet[2715]: E0527 03:20:33.760256 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.760299 kubelet[2715]: W0527 03:20:33.760280 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.760299 kubelet[2715]: E0527 03:20:33.760301 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.763917 kubelet[2715]: E0527 03:20:33.763894 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.763917 kubelet[2715]: W0527 03:20:33.763909 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.763969 kubelet[2715]: E0527 03:20:33.763938 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.764163 kubelet[2715]: E0527 03:20:33.764137 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.764163 kubelet[2715]: W0527 03:20:33.764154 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.764277 kubelet[2715]: E0527 03:20:33.764186 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.764385 kubelet[2715]: E0527 03:20:33.764367 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.764385 kubelet[2715]: W0527 03:20:33.764382 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.764434 kubelet[2715]: E0527 03:20:33.764402 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.764641 kubelet[2715]: E0527 03:20:33.764625 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.764641 kubelet[2715]: W0527 03:20:33.764638 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.764711 kubelet[2715]: E0527 03:20:33.764658 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.764894 kubelet[2715]: E0527 03:20:33.764878 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.764894 kubelet[2715]: W0527 03:20:33.764892 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.764945 kubelet[2715]: E0527 03:20:33.764920 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.765255 kubelet[2715]: E0527 03:20:33.765214 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.765255 kubelet[2715]: W0527 03:20:33.765232 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.765364 kubelet[2715]: E0527 03:20:33.765281 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.765499 kubelet[2715]: E0527 03:20:33.765470 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.765499 kubelet[2715]: W0527 03:20:33.765486 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.765567 kubelet[2715]: E0527 03:20:33.765532 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.765756 kubelet[2715]: E0527 03:20:33.765723 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.765756 kubelet[2715]: W0527 03:20:33.765740 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.765841 kubelet[2715]: E0527 03:20:33.765784 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.766023 kubelet[2715]: E0527 03:20:33.765952 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.766023 kubelet[2715]: W0527 03:20:33.765970 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.766112 kubelet[2715]: E0527 03:20:33.766028 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.766241 kubelet[2715]: E0527 03:20:33.766214 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.766241 kubelet[2715]: W0527 03:20:33.766230 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.766302 kubelet[2715]: E0527 03:20:33.766259 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.766518 kubelet[2715]: E0527 03:20:33.766493 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.766518 kubelet[2715]: W0527 03:20:33.766509 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.766574 kubelet[2715]: E0527 03:20:33.766542 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.766849 kubelet[2715]: E0527 03:20:33.766815 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.766895 kubelet[2715]: W0527 03:20:33.766835 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.766895 kubelet[2715]: E0527 03:20:33.766886 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.767243 kubelet[2715]: E0527 03:20:33.767211 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.767243 kubelet[2715]: W0527 03:20:33.767227 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.767243 kubelet[2715]: E0527 03:20:33.767244 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.767518 kubelet[2715]: E0527 03:20:33.767489 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.767518 kubelet[2715]: W0527 03:20:33.767505 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.767585 kubelet[2715]: E0527 03:20:33.767522 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.767785 kubelet[2715]: E0527 03:20:33.767755 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.767785 kubelet[2715]: W0527 03:20:33.767770 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.767785 kubelet[2715]: E0527 03:20:33.767782 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.786830 containerd[1589]: time="2025-05-27T03:20:33.786767797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bbc68648-mrfk8,Uid:76bc1144-ce30-4c84-b8f6-42d510828135,Namespace:calico-system,Attempt:0,} returns sandbox id \"9982288b84f9ef62dc5a2b781ec6bffe3c5fb855bb1c26818921a30ef88b497d\"" May 27 03:20:33.798746 containerd[1589]: time="2025-05-27T03:20:33.798687987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 03:20:33.804415 kubelet[2715]: E0527 03:20:33.804283 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.804415 kubelet[2715]: W0527 03:20:33.804303 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.804415 kubelet[2715]: E0527 03:20:33.804320 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:33.864883 kubelet[2715]: E0527 03:20:33.864818 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.864883 kubelet[2715]: W0527 03:20:33.864842 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.864883 kubelet[2715]: E0527 03:20:33.864863 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:33.965928 kubelet[2715]: E0527 03:20:33.965890 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:33.965928 kubelet[2715]: W0527 03:20:33.965913 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:33.965928 kubelet[2715]: E0527 03:20:33.965934 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:34.067198 kubelet[2715]: E0527 03:20:34.067079 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:34.067198 kubelet[2715]: W0527 03:20:34.067106 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:34.067198 kubelet[2715]: E0527 03:20:34.067130 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:34.168543 kubelet[2715]: E0527 03:20:34.168499 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:34.168543 kubelet[2715]: W0527 03:20:34.168524 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:34.168543 kubelet[2715]: E0527 03:20:34.168545 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:34.269487 kubelet[2715]: E0527 03:20:34.269448 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:34.269487 kubelet[2715]: W0527 03:20:34.269474 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:34.269487 kubelet[2715]: E0527 03:20:34.269498 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:34.370138 kubelet[2715]: E0527 03:20:34.370088 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:34.370138 kubelet[2715]: W0527 03:20:34.370118 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:34.370138 kubelet[2715]: E0527 03:20:34.370142 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:34.410594 kubelet[2715]: E0527 03:20:34.410545 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:34.410594 kubelet[2715]: W0527 03:20:34.410572 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:34.410594 kubelet[2715]: E0527 03:20:34.410594 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:34.573069 containerd[1589]: time="2025-05-27T03:20:34.572940768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s6n2p,Uid:5f6e7bfa-3807-4263-92f2-e692509b92fa,Namespace:calico-system,Attempt:0,}" May 27 03:20:34.659127 containerd[1589]: time="2025-05-27T03:20:34.658949289Z" level=info msg="connecting to shim 71cf9dc4b8fdad1008a01e2d1a08458a0509e2eb4e69a47be8b6398aac3e7e4f" address="unix:///run/containerd/s/422b5456e2db52d632ff893c4e1eb50d34c9eaf4c5893ac731eb7823dc65f2a0" namespace=k8s.io protocol=ttrpc version=3 May 27 03:20:34.697147 systemd[1]: Started cri-containerd-71cf9dc4b8fdad1008a01e2d1a08458a0509e2eb4e69a47be8b6398aac3e7e4f.scope - libcontainer container 71cf9dc4b8fdad1008a01e2d1a08458a0509e2eb4e69a47be8b6398aac3e7e4f. 
May 27 03:20:34.766322 containerd[1589]: time="2025-05-27T03:20:34.766166271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s6n2p,Uid:5f6e7bfa-3807-4263-92f2-e692509b92fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"71cf9dc4b8fdad1008a01e2d1a08458a0509e2eb4e69a47be8b6398aac3e7e4f\"" May 27 03:20:35.122663 kubelet[2715]: E0527 03:20:35.122595 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6cj4" podUID="a9973ce5-5b08-44e5-b570-c0fc70d71d29" May 27 03:20:35.858414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4172184146.mount: Deactivated successfully. May 27 03:20:36.298806 containerd[1589]: time="2025-05-27T03:20:36.298606749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:36.300887 containerd[1589]: time="2025-05-27T03:20:36.300842663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 03:20:36.302667 containerd[1589]: time="2025-05-27T03:20:36.302620203Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:36.304847 containerd[1589]: time="2025-05-27T03:20:36.304799009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:36.305727 containerd[1589]: time="2025-05-27T03:20:36.305644763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.506907114s" May 27 03:20:36.305727 containerd[1589]: time="2025-05-27T03:20:36.305694368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:20:36.306720 containerd[1589]: time="2025-05-27T03:20:36.306678893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:20:36.316216 containerd[1589]: time="2025-05-27T03:20:36.316144453Z" level=info msg="CreateContainer within sandbox \"9982288b84f9ef62dc5a2b781ec6bffe3c5fb855bb1c26818921a30ef88b497d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:20:36.326517 containerd[1589]: time="2025-05-27T03:20:36.326449685Z" level=info msg="Container 0402b57a033ad31f8ba4f824dc7cb50fd51797f3180d2888b65dfd11b006783f: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:36.344271 containerd[1589]: time="2025-05-27T03:20:36.344191843Z" level=info msg="CreateContainer within sandbox \"9982288b84f9ef62dc5a2b781ec6bffe3c5fb855bb1c26818921a30ef88b497d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0402b57a033ad31f8ba4f824dc7cb50fd51797f3180d2888b65dfd11b006783f\"" May 27 03:20:36.345138 containerd[1589]: time="2025-05-27T03:20:36.345113040Z" level=info msg="StartContainer for \"0402b57a033ad31f8ba4f824dc7cb50fd51797f3180d2888b65dfd11b006783f\"" May 27 03:20:36.346436 containerd[1589]: time="2025-05-27T03:20:36.346385669Z" level=info msg="connecting to shim 0402b57a033ad31f8ba4f824dc7cb50fd51797f3180d2888b65dfd11b006783f" address="unix:///run/containerd/s/513d132877b901776117746c0e108fdb5501d37cd7ed480926ad31747cfd5a53" protocol=ttrpc version=3 May 27 
03:20:36.378352 systemd[1]: Started cri-containerd-0402b57a033ad31f8ba4f824dc7cb50fd51797f3180d2888b65dfd11b006783f.scope - libcontainer container 0402b57a033ad31f8ba4f824dc7cb50fd51797f3180d2888b65dfd11b006783f. May 27 03:20:36.445788 containerd[1589]: time="2025-05-27T03:20:36.445732979Z" level=info msg="StartContainer for \"0402b57a033ad31f8ba4f824dc7cb50fd51797f3180d2888b65dfd11b006783f\" returns successfully" May 27 03:20:37.121704 kubelet[2715]: E0527 03:20:37.121622 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6cj4" podUID="a9973ce5-5b08-44e5-b570-c0fc70d71d29" May 27 03:20:37.203691 kubelet[2715]: E0527 03:20:37.203629 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.203691 kubelet[2715]: W0527 03:20:37.203656 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.203691 kubelet[2715]: E0527 03:20:37.203699 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.204149 kubelet[2715]: E0527 03:20:37.204094 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.204149 kubelet[2715]: W0527 03:20:37.204127 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.204279 kubelet[2715]: E0527 03:20:37.204160 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.204504 kubelet[2715]: E0527 03:20:37.204472 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.204504 kubelet[2715]: W0527 03:20:37.204488 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.204504 kubelet[2715]: E0527 03:20:37.204500 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.204746 kubelet[2715]: E0527 03:20:37.204715 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.204746 kubelet[2715]: W0527 03:20:37.204726 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.204746 kubelet[2715]: E0527 03:20:37.204737 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.204956 kubelet[2715]: E0527 03:20:37.204937 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.204956 kubelet[2715]: W0527 03:20:37.204951 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.205062 kubelet[2715]: E0527 03:20:37.204963 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.205343 kubelet[2715]: E0527 03:20:37.205315 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.205343 kubelet[2715]: W0527 03:20:37.205334 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.205466 kubelet[2715]: E0527 03:20:37.205350 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.205592 kubelet[2715]: E0527 03:20:37.205572 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.205592 kubelet[2715]: W0527 03:20:37.205587 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.205669 kubelet[2715]: E0527 03:20:37.205598 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.205855 kubelet[2715]: E0527 03:20:37.205817 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.205855 kubelet[2715]: W0527 03:20:37.205834 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.205855 kubelet[2715]: E0527 03:20:37.205845 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.206176 kubelet[2715]: E0527 03:20:37.206138 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.206176 kubelet[2715]: W0527 03:20:37.206173 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.206285 kubelet[2715]: E0527 03:20:37.206186 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.206501 kubelet[2715]: E0527 03:20:37.206469 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.206501 kubelet[2715]: W0527 03:20:37.206482 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.206501 kubelet[2715]: E0527 03:20:37.206494 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.206753 kubelet[2715]: E0527 03:20:37.206737 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.206753 kubelet[2715]: W0527 03:20:37.206752 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.206835 kubelet[2715]: E0527 03:20:37.206766 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.207055 kubelet[2715]: E0527 03:20:37.207037 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.207055 kubelet[2715]: W0527 03:20:37.207050 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.207134 kubelet[2715]: E0527 03:20:37.207061 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.207331 kubelet[2715]: E0527 03:20:37.207311 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.207331 kubelet[2715]: W0527 03:20:37.207324 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.207401 kubelet[2715]: E0527 03:20:37.207333 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.207549 kubelet[2715]: E0527 03:20:37.207535 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.207580 kubelet[2715]: W0527 03:20:37.207549 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.207580 kubelet[2715]: E0527 03:20:37.207558 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.207765 kubelet[2715]: E0527 03:20:37.207745 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.207765 kubelet[2715]: W0527 03:20:37.207757 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.207836 kubelet[2715]: E0527 03:20:37.207767 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.211384 kubelet[2715]: I0527 03:20:37.211282 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bbc68648-mrfk8" podStartSLOduration=1.7026087699999999 podStartE2EDuration="4.211264039s" podCreationTimestamp="2025-05-27 03:20:33 +0000 UTC" firstStartedPulling="2025-05-27 03:20:33.797903286 +0000 UTC m=+21.761978988" lastFinishedPulling="2025-05-27 03:20:36.306558556 +0000 UTC m=+24.270634257" observedRunningTime="2025-05-27 03:20:37.210774656 +0000 UTC m=+25.174850368" watchObservedRunningTime="2025-05-27 03:20:37.211264039 +0000 UTC m=+25.175339740" May 27 03:20:37.291084 kubelet[2715]: E0527 03:20:37.290510 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.291084 kubelet[2715]: W0527 03:20:37.290542 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.291084 kubelet[2715]: E0527 03:20:37.290563 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.291535 kubelet[2715]: E0527 03:20:37.291465 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.291535 kubelet[2715]: W0527 03:20:37.291476 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.291535 kubelet[2715]: E0527 03:20:37.291487 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.293209 kubelet[2715]: E0527 03:20:37.293110 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.293209 kubelet[2715]: W0527 03:20:37.293123 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.293209 kubelet[2715]: E0527 03:20:37.293144 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.293588 kubelet[2715]: E0527 03:20:37.293535 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.293588 kubelet[2715]: W0527 03:20:37.293584 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.293770 kubelet[2715]: E0527 03:20:37.293614 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.293882 kubelet[2715]: E0527 03:20:37.293858 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.293882 kubelet[2715]: W0527 03:20:37.293876 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.293935 kubelet[2715]: E0527 03:20:37.293913 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.294160 kubelet[2715]: E0527 03:20:37.294140 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.294160 kubelet[2715]: W0527 03:20:37.294154 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.294218 kubelet[2715]: E0527 03:20:37.294165 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.295109 kubelet[2715]: E0527 03:20:37.295087 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.295109 kubelet[2715]: W0527 03:20:37.295103 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.295192 kubelet[2715]: E0527 03:20:37.295169 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.295388 kubelet[2715]: E0527 03:20:37.295369 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.295388 kubelet[2715]: W0527 03:20:37.295383 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.295449 kubelet[2715]: E0527 03:20:37.295429 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.295678 kubelet[2715]: E0527 03:20:37.295651 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.296098 kubelet[2715]: W0527 03:20:37.295671 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.296344 kubelet[2715]: E0527 03:20:37.296133 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.297357 kubelet[2715]: E0527 03:20:37.296387 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.297357 kubelet[2715]: W0527 03:20:37.296401 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.297357 kubelet[2715]: E0527 03:20:37.296412 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.299111 kubelet[2715]: E0527 03:20:37.299081 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.299111 kubelet[2715]: W0527 03:20:37.299107 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.299189 kubelet[2715]: E0527 03:20:37.299128 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.299436 kubelet[2715]: E0527 03:20:37.299412 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.299436 kubelet[2715]: W0527 03:20:37.299432 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.299564 kubelet[2715]: E0527 03:20:37.299520 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.299669 kubelet[2715]: E0527 03:20:37.299649 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.299669 kubelet[2715]: W0527 03:20:37.299664 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.301023 kubelet[2715]: E0527 03:20:37.299839 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.302110 kubelet[2715]: E0527 03:20:37.302083 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.302110 kubelet[2715]: W0527 03:20:37.302105 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.302617 kubelet[2715]: E0527 03:20:37.302435 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.302806 kubelet[2715]: E0527 03:20:37.302784 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.302806 kubelet[2715]: W0527 03:20:37.302802 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.302920 kubelet[2715]: E0527 03:20:37.302900 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.305175 kubelet[2715]: E0527 03:20:37.305140 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.305175 kubelet[2715]: W0527 03:20:37.305169 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.305271 kubelet[2715]: E0527 03:20:37.305214 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.305513 kubelet[2715]: E0527 03:20:37.305489 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.305513 kubelet[2715]: W0527 03:20:37.305510 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.305569 kubelet[2715]: E0527 03:20:37.305523 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:20:37.308177 kubelet[2715]: E0527 03:20:37.308146 2715 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:20:37.308177 kubelet[2715]: W0527 03:20:37.308170 2715 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:20:37.308263 kubelet[2715]: E0527 03:20:37.308184 2715 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:20:37.621575 containerd[1589]: time="2025-05-27T03:20:37.621408943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:37.622436 containerd[1589]: time="2025-05-27T03:20:37.622338654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 03:20:37.624251 containerd[1589]: time="2025-05-27T03:20:37.624178051Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:37.627228 containerd[1589]: time="2025-05-27T03:20:37.627185396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:37.627752 containerd[1589]: time="2025-05-27T03:20:37.627712881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.321001255s" May 27 03:20:37.627752 containerd[1589]: time="2025-05-27T03:20:37.627749259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:20:37.630589 containerd[1589]: time="2025-05-27T03:20:37.630554303Z" level=info msg="CreateContainer within sandbox \"71cf9dc4b8fdad1008a01e2d1a08458a0509e2eb4e69a47be8b6398aac3e7e4f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:20:37.646655 containerd[1589]: time="2025-05-27T03:20:37.646095656Z" level=info msg="Container 41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:37.659335 containerd[1589]: time="2025-05-27T03:20:37.659269497Z" level=info msg="CreateContainer within sandbox \"71cf9dc4b8fdad1008a01e2d1a08458a0509e2eb4e69a47be8b6398aac3e7e4f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912\"" May 27 03:20:37.660038 containerd[1589]: time="2025-05-27T03:20:37.659975166Z" level=info msg="StartContainer for \"41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912\"" May 27 03:20:37.669698 containerd[1589]: time="2025-05-27T03:20:37.669643202Z" level=info msg="connecting to shim 41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912" address="unix:///run/containerd/s/422b5456e2db52d632ff893c4e1eb50d34c9eaf4c5893ac731eb7823dc65f2a0" protocol=ttrpc version=3 May 27 03:20:37.697179 systemd[1]: Started cri-containerd-41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912.scope - libcontainer container 41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912. May 27 03:20:37.752187 containerd[1589]: time="2025-05-27T03:20:37.752133253Z" level=info msg="StartContainer for \"41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912\" returns successfully" May 27 03:20:37.763936 systemd[1]: cri-containerd-41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912.scope: Deactivated successfully. 
May 27 03:20:37.766975 containerd[1589]: time="2025-05-27T03:20:37.766922207Z" level=info msg="received exit event container_id:\"41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912\" id:\"41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912\" pid:3423 exited_at:{seconds:1748316037 nanos:766505844}" May 27 03:20:37.767156 containerd[1589]: time="2025-05-27T03:20:37.767032966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912\" id:\"41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912\" pid:3423 exited_at:{seconds:1748316037 nanos:766505844}" May 27 03:20:37.797123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41a374b627834d8a1c0b769367a411faa797d3dfe000c8d1a85af618e0b45912-rootfs.mount: Deactivated successfully. May 27 03:20:38.200961 kubelet[2715]: I0527 03:20:38.200924 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:20:39.122401 kubelet[2715]: E0527 03:20:39.122314 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6cj4" podUID="a9973ce5-5b08-44e5-b570-c0fc70d71d29" May 27 03:20:39.205483 containerd[1589]: time="2025-05-27T03:20:39.205387289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:20:41.122051 kubelet[2715]: E0527 03:20:41.121997 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6cj4" podUID="a9973ce5-5b08-44e5-b570-c0fc70d71d29" May 27 03:20:43.121709 kubelet[2715]: E0527 03:20:43.121616 2715 pod_workers.go:1301] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6cj4" podUID="a9973ce5-5b08-44e5-b570-c0fc70d71d29" May 27 03:20:44.411250 containerd[1589]: time="2025-05-27T03:20:44.411192446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:44.412186 containerd[1589]: time="2025-05-27T03:20:44.412147422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:20:44.414041 containerd[1589]: time="2025-05-27T03:20:44.414001871Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:44.415908 containerd[1589]: time="2025-05-27T03:20:44.415846761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:44.416386 containerd[1589]: time="2025-05-27T03:20:44.416349757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 5.210918967s" May 27 03:20:44.416386 containerd[1589]: time="2025-05-27T03:20:44.416378000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:20:44.418436 containerd[1589]: time="2025-05-27T03:20:44.418393841Z" 
level=info msg="CreateContainer within sandbox \"71cf9dc4b8fdad1008a01e2d1a08458a0509e2eb4e69a47be8b6398aac3e7e4f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:20:44.450720 containerd[1589]: time="2025-05-27T03:20:44.450659301Z" level=info msg="Container 704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:44.462686 containerd[1589]: time="2025-05-27T03:20:44.462626650Z" level=info msg="CreateContainer within sandbox \"71cf9dc4b8fdad1008a01e2d1a08458a0509e2eb4e69a47be8b6398aac3e7e4f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd\"" May 27 03:20:44.463104 containerd[1589]: time="2025-05-27T03:20:44.463052722Z" level=info msg="StartContainer for \"704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd\"" May 27 03:20:44.464373 containerd[1589]: time="2025-05-27T03:20:44.464348049Z" level=info msg="connecting to shim 704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd" address="unix:///run/containerd/s/422b5456e2db52d632ff893c4e1eb50d34c9eaf4c5893ac731eb7823dc65f2a0" protocol=ttrpc version=3 May 27 03:20:44.491166 systemd[1]: Started cri-containerd-704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd.scope - libcontainer container 704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd. 
May 27 03:20:44.592812 containerd[1589]: time="2025-05-27T03:20:44.592756046Z" level=info msg="StartContainer for \"704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd\" returns successfully" May 27 03:20:45.122837 kubelet[2715]: E0527 03:20:45.122658 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6cj4" podUID="a9973ce5-5b08-44e5-b570-c0fc70d71d29" May 27 03:20:47.122011 kubelet[2715]: E0527 03:20:47.121897 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6cj4" podUID="a9973ce5-5b08-44e5-b570-c0fc70d71d29" May 27 03:20:47.320406 containerd[1589]: time="2025-05-27T03:20:47.320338948Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:20:47.323482 systemd[1]: cri-containerd-704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd.scope: Deactivated successfully. May 27 03:20:47.323895 systemd[1]: cri-containerd-704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd.scope: Consumed 603ms CPU time, 176.8M memory peak, 3.5M read from disk, 170.9M written to disk. 
May 27 03:20:47.324512 containerd[1589]: time="2025-05-27T03:20:47.324458563Z" level=info msg="received exit event container_id:\"704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd\" id:\"704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd\" pid:3485 exited_at:{seconds:1748316047 nanos:324207301}" May 27 03:20:47.324673 containerd[1589]: time="2025-05-27T03:20:47.324605911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd\" id:\"704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd\" pid:3485 exited_at:{seconds:1748316047 nanos:324207301}" May 27 03:20:47.346652 kubelet[2715]: I0527 03:20:47.346615 2715 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 27 03:20:47.346799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-704e4be2a59cb00805f94ee4ba5ae2272403d427889632c9609816904d5f47cd-rootfs.mount: Deactivated successfully. May 27 03:20:47.536712 systemd[1]: Created slice kubepods-besteffort-pod7595602a_5d40_40fb_9abf_9275aca2a744.slice - libcontainer container kubepods-besteffort-pod7595602a_5d40_40fb_9abf_9275aca2a744.slice. May 27 03:20:47.543385 systemd[1]: Created slice kubepods-burstable-pod6171242c_12ea_4dcd_9745_2bb963d34861.slice - libcontainer container kubepods-burstable-pod6171242c_12ea_4dcd_9745_2bb963d34861.slice. May 27 03:20:47.573858 systemd[1]: Created slice kubepods-besteffort-pode490ce4e_3d5d_487a_9163_dd7539885ded.slice - libcontainer container kubepods-besteffort-pode490ce4e_3d5d_487a_9163_dd7539885ded.slice. May 27 03:20:47.583084 systemd[1]: Created slice kubepods-burstable-pod4e64304e_2717_4b6f_b26a_edc7ae5aa9fb.slice - libcontainer container kubepods-burstable-pod4e64304e_2717_4b6f_b26a_edc7ae5aa9fb.slice. 
May 27 03:20:47.588551 systemd[1]: Created slice kubepods-besteffort-pod7f0983f5_57fc_4b64_bb4c_328c69709d91.slice - libcontainer container kubepods-besteffort-pod7f0983f5_57fc_4b64_bb4c_328c69709d91.slice. May 27 03:20:47.593831 systemd[1]: Created slice kubepods-besteffort-podb4cfd04d_a56e_44a1_9a4a_e5cfed06478d.slice - libcontainer container kubepods-besteffort-podb4cfd04d_a56e_44a1_9a4a_e5cfed06478d.slice. May 27 03:20:47.599193 systemd[1]: Created slice kubepods-besteffort-podf9153df7_a023_4862_a363_9cdd3429f7b2.slice - libcontainer container kubepods-besteffort-podf9153df7_a023_4862_a363_9cdd3429f7b2.slice. May 27 03:20:47.659136 kubelet[2715]: I0527 03:20:47.659068 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4cfd04d-a56e-44a1-9a4a-e5cfed06478d-config\") pod \"goldmane-8f77d7b6c-947mk\" (UID: \"b4cfd04d-a56e-44a1-9a4a-e5cfed06478d\") " pod="calico-system/goldmane-8f77d7b6c-947mk" May 27 03:20:47.659136 kubelet[2715]: I0527 03:20:47.659126 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9t9\" (UniqueName: \"kubernetes.io/projected/6171242c-12ea-4dcd-9745-2bb963d34861-kube-api-access-hw9t9\") pod \"coredns-7c65d6cfc9-dgnq8\" (UID: \"6171242c-12ea-4dcd-9745-2bb963d34861\") " pod="kube-system/coredns-7c65d6cfc9-dgnq8" May 27 03:20:47.659136 kubelet[2715]: I0527 03:20:47.659147 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e490ce4e-3d5d-487a-9163-dd7539885ded-calico-apiserver-certs\") pod \"calico-apiserver-f594547bd-v8kd9\" (UID: \"e490ce4e-3d5d-487a-9163-dd7539885ded\") " pod="calico-apiserver/calico-apiserver-f594547bd-v8kd9" May 27 03:20:47.659456 kubelet[2715]: I0527 03:20:47.659192 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9153df7-a023-4862-a363-9cdd3429f7b2-whisker-backend-key-pair\") pod \"whisker-76697885d7-g495m\" (UID: \"f9153df7-a023-4862-a363-9cdd3429f7b2\") " pod="calico-system/whisker-76697885d7-g495m" May 27 03:20:47.659456 kubelet[2715]: I0527 03:20:47.659248 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4cfd04d-a56e-44a1-9a4a-e5cfed06478d-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-947mk\" (UID: \"b4cfd04d-a56e-44a1-9a4a-e5cfed06478d\") " pod="calico-system/goldmane-8f77d7b6c-947mk" May 27 03:20:47.659456 kubelet[2715]: I0527 03:20:47.659269 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp2px\" (UniqueName: \"kubernetes.io/projected/b4cfd04d-a56e-44a1-9a4a-e5cfed06478d-kube-api-access-sp2px\") pod \"goldmane-8f77d7b6c-947mk\" (UID: \"b4cfd04d-a56e-44a1-9a4a-e5cfed06478d\") " pod="calico-system/goldmane-8f77d7b6c-947mk" May 27 03:20:47.659456 kubelet[2715]: I0527 03:20:47.659291 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b4cfd04d-a56e-44a1-9a4a-e5cfed06478d-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-947mk\" (UID: \"b4cfd04d-a56e-44a1-9a4a-e5cfed06478d\") " pod="calico-system/goldmane-8f77d7b6c-947mk" May 27 03:20:47.659456 kubelet[2715]: I0527 03:20:47.659315 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br2w7\" (UniqueName: \"kubernetes.io/projected/7595602a-5d40-40fb-9abf-9275aca2a744-kube-api-access-br2w7\") pod \"calico-kube-controllers-5bccffd987-jvbth\" (UID: \"7595602a-5d40-40fb-9abf-9275aca2a744\") " pod="calico-system/calico-kube-controllers-5bccffd987-jvbth" May 27 03:20:47.659626 kubelet[2715]: I0527 
03:20:47.659337 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e64304e-2717-4b6f-b26a-edc7ae5aa9fb-config-volume\") pod \"coredns-7c65d6cfc9-wc2hr\" (UID: \"4e64304e-2717-4b6f-b26a-edc7ae5aa9fb\") " pod="kube-system/coredns-7c65d6cfc9-wc2hr" May 27 03:20:47.659626 kubelet[2715]: I0527 03:20:47.659392 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7f0983f5-57fc-4b64-bb4c-328c69709d91-calico-apiserver-certs\") pod \"calico-apiserver-f594547bd-ntfdc\" (UID: \"7f0983f5-57fc-4b64-bb4c-328c69709d91\") " pod="calico-apiserver/calico-apiserver-f594547bd-ntfdc" May 27 03:20:47.659626 kubelet[2715]: I0527 03:20:47.659418 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6171242c-12ea-4dcd-9745-2bb963d34861-config-volume\") pod \"coredns-7c65d6cfc9-dgnq8\" (UID: \"6171242c-12ea-4dcd-9745-2bb963d34861\") " pod="kube-system/coredns-7c65d6cfc9-dgnq8" May 27 03:20:47.659626 kubelet[2715]: I0527 03:20:47.659439 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhtkt\" (UniqueName: \"kubernetes.io/projected/e490ce4e-3d5d-487a-9163-dd7539885ded-kube-api-access-jhtkt\") pod \"calico-apiserver-f594547bd-v8kd9\" (UID: \"e490ce4e-3d5d-487a-9163-dd7539885ded\") " pod="calico-apiserver/calico-apiserver-f594547bd-v8kd9" May 27 03:20:47.659626 kubelet[2715]: I0527 03:20:47.659455 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7595602a-5d40-40fb-9abf-9275aca2a744-tigera-ca-bundle\") pod \"calico-kube-controllers-5bccffd987-jvbth\" (UID: \"7595602a-5d40-40fb-9abf-9275aca2a744\") " 
pod="calico-system/calico-kube-controllers-5bccffd987-jvbth" May 27 03:20:47.659806 kubelet[2715]: I0527 03:20:47.659469 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2xcb\" (UniqueName: \"kubernetes.io/projected/4e64304e-2717-4b6f-b26a-edc7ae5aa9fb-kube-api-access-w2xcb\") pod \"coredns-7c65d6cfc9-wc2hr\" (UID: \"4e64304e-2717-4b6f-b26a-edc7ae5aa9fb\") " pod="kube-system/coredns-7c65d6cfc9-wc2hr" May 27 03:20:47.659806 kubelet[2715]: I0527 03:20:47.659500 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5mbq\" (UniqueName: \"kubernetes.io/projected/7f0983f5-57fc-4b64-bb4c-328c69709d91-kube-api-access-m5mbq\") pod \"calico-apiserver-f594547bd-ntfdc\" (UID: \"7f0983f5-57fc-4b64-bb4c-328c69709d91\") " pod="calico-apiserver/calico-apiserver-f594547bd-ntfdc" May 27 03:20:47.659806 kubelet[2715]: I0527 03:20:47.659517 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9153df7-a023-4862-a363-9cdd3429f7b2-whisker-ca-bundle\") pod \"whisker-76697885d7-g495m\" (UID: \"f9153df7-a023-4862-a363-9cdd3429f7b2\") " pod="calico-system/whisker-76697885d7-g495m" May 27 03:20:47.659806 kubelet[2715]: I0527 03:20:47.659531 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g24jz\" (UniqueName: \"kubernetes.io/projected/f9153df7-a023-4862-a363-9cdd3429f7b2-kube-api-access-g24jz\") pod \"whisker-76697885d7-g495m\" (UID: \"f9153df7-a023-4862-a363-9cdd3429f7b2\") " pod="calico-system/whisker-76697885d7-g495m" May 27 03:20:47.849475 containerd[1589]: time="2025-05-27T03:20:47.849419927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dgnq8,Uid:6171242c-12ea-4dcd-9745-2bb963d34861,Namespace:kube-system,Attempt:0,}" May 27 03:20:47.849672 
containerd[1589]: time="2025-05-27T03:20:47.849609513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bccffd987-jvbth,Uid:7595602a-5d40-40fb-9abf-9275aca2a744,Namespace:calico-system,Attempt:0,}" May 27 03:20:47.880254 containerd[1589]: time="2025-05-27T03:20:47.880171899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f594547bd-v8kd9,Uid:e490ce4e-3d5d-487a-9163-dd7539885ded,Namespace:calico-apiserver,Attempt:0,}" May 27 03:20:47.886516 containerd[1589]: time="2025-05-27T03:20:47.886459239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wc2hr,Uid:4e64304e-2717-4b6f-b26a-edc7ae5aa9fb,Namespace:kube-system,Attempt:0,}" May 27 03:20:47.892018 containerd[1589]: time="2025-05-27T03:20:47.891966442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f594547bd-ntfdc,Uid:7f0983f5-57fc-4b64-bb4c-328c69709d91,Namespace:calico-apiserver,Attempt:0,}" May 27 03:20:47.897106 containerd[1589]: time="2025-05-27T03:20:47.896960461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-947mk,Uid:b4cfd04d-a56e-44a1-9a4a-e5cfed06478d,Namespace:calico-system,Attempt:0,}" May 27 03:20:47.902508 containerd[1589]: time="2025-05-27T03:20:47.902465590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76697885d7-g495m,Uid:f9153df7-a023-4862-a363-9cdd3429f7b2,Namespace:calico-system,Attempt:0,}" May 27 03:20:48.037408 containerd[1589]: time="2025-05-27T03:20:48.037336972Z" level=error msg="Failed to destroy network for sandbox \"fc94435f6cd981d54213611e7e07630a3c5b6561bafcae19fd1400676af1d19c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.060275 containerd[1589]: time="2025-05-27T03:20:48.060208873Z" level=error msg="Failed to destroy network for sandbox 
\"45a363f9f9cb5ac41fc96b7523b8411e14555b39fec21f326c2670ba9817fafa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.176909 containerd[1589]: time="2025-05-27T03:20:48.176781141Z" level=error msg="Failed to destroy network for sandbox \"0f59b4443b84e30635f4dfaf18d8c2240b7c89681f877472bf592be9196685bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.179271 containerd[1589]: time="2025-05-27T03:20:48.179213933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dgnq8,Uid:6171242c-12ea-4dcd-9745-2bb963d34861,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc94435f6cd981d54213611e7e07630a3c5b6561bafcae19fd1400676af1d19c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.185416 kubelet[2715]: E0527 03:20:48.185353 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc94435f6cd981d54213611e7e07630a3c5b6561bafcae19fd1400676af1d19c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.185802 kubelet[2715]: E0527 03:20:48.185449 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc94435f6cd981d54213611e7e07630a3c5b6561bafcae19fd1400676af1d19c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dgnq8" May 27 03:20:48.185802 kubelet[2715]: E0527 03:20:48.185475 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc94435f6cd981d54213611e7e07630a3c5b6561bafcae19fd1400676af1d19c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dgnq8" May 27 03:20:48.185802 kubelet[2715]: E0527 03:20:48.185532 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dgnq8_kube-system(6171242c-12ea-4dcd-9745-2bb963d34861)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dgnq8_kube-system(6171242c-12ea-4dcd-9745-2bb963d34861)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc94435f6cd981d54213611e7e07630a3c5b6561bafcae19fd1400676af1d19c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dgnq8" podUID="6171242c-12ea-4dcd-9745-2bb963d34861" May 27 03:20:48.202098 containerd[1589]: time="2025-05-27T03:20:48.202039618Z" level=error msg="Failed to destroy network for sandbox \"2d8c2d4bc9a15d2e9825cd2b2e18cb51a9466ce98f4acc2e84afbc1cc0efb3c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.212641 containerd[1589]: time="2025-05-27T03:20:48.212574520Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5bccffd987-jvbth,Uid:7595602a-5d40-40fb-9abf-9275aca2a744,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a363f9f9cb5ac41fc96b7523b8411e14555b39fec21f326c2670ba9817fafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.212893 kubelet[2715]: E0527 03:20:48.212851 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a363f9f9cb5ac41fc96b7523b8411e14555b39fec21f326c2670ba9817fafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.212948 kubelet[2715]: E0527 03:20:48.212919 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a363f9f9cb5ac41fc96b7523b8411e14555b39fec21f326c2670ba9817fafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bccffd987-jvbth" May 27 03:20:48.212990 kubelet[2715]: E0527 03:20:48.212944 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a363f9f9cb5ac41fc96b7523b8411e14555b39fec21f326c2670ba9817fafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bccffd987-jvbth" May 27 03:20:48.213100 kubelet[2715]: E0527 
03:20:48.213056 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bccffd987-jvbth_calico-system(7595602a-5d40-40fb-9abf-9275aca2a744)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bccffd987-jvbth_calico-system(7595602a-5d40-40fb-9abf-9275aca2a744)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45a363f9f9cb5ac41fc96b7523b8411e14555b39fec21f326c2670ba9817fafa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bccffd987-jvbth" podUID="7595602a-5d40-40fb-9abf-9275aca2a744" May 27 03:20:48.230435 containerd[1589]: time="2025-05-27T03:20:48.230358618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:20:48.240832 containerd[1589]: time="2025-05-27T03:20:48.239926232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f594547bd-v8kd9,Uid:e490ce4e-3d5d-487a-9163-dd7539885ded,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f59b4443b84e30635f4dfaf18d8c2240b7c89681f877472bf592be9196685bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.241048 kubelet[2715]: E0527 03:20:48.240445 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f59b4443b84e30635f4dfaf18d8c2240b7c89681f877472bf592be9196685bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 
03:20:48.241048 kubelet[2715]: E0527 03:20:48.240519 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f59b4443b84e30635f4dfaf18d8c2240b7c89681f877472bf592be9196685bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f594547bd-v8kd9" May 27 03:20:48.241048 kubelet[2715]: E0527 03:20:48.240543 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f59b4443b84e30635f4dfaf18d8c2240b7c89681f877472bf592be9196685bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f594547bd-v8kd9" May 27 03:20:48.241192 kubelet[2715]: E0527 03:20:48.240596 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f594547bd-v8kd9_calico-apiserver(e490ce4e-3d5d-487a-9163-dd7539885ded)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f594547bd-v8kd9_calico-apiserver(e490ce4e-3d5d-487a-9163-dd7539885ded)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f59b4443b84e30635f4dfaf18d8c2240b7c89681f877472bf592be9196685bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f594547bd-v8kd9" podUID="e490ce4e-3d5d-487a-9163-dd7539885ded" May 27 03:20:48.245418 containerd[1589]: time="2025-05-27T03:20:48.245361289Z" level=error msg="Failed to destroy network for sandbox 
\"c27f74ca89af88a2117217decfcef139f9ab9d837275b0ac3601fc39c54e9a10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.249449 containerd[1589]: time="2025-05-27T03:20:48.249253674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wc2hr,Uid:4e64304e-2717-4b6f-b26a-edc7ae5aa9fb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8c2d4bc9a15d2e9825cd2b2e18cb51a9466ce98f4acc2e84afbc1cc0efb3c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.249678 kubelet[2715]: E0527 03:20:48.249637 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8c2d4bc9a15d2e9825cd2b2e18cb51a9466ce98f4acc2e84afbc1cc0efb3c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.249794 kubelet[2715]: E0527 03:20:48.249697 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d8c2d4bc9a15d2e9825cd2b2e18cb51a9466ce98f4acc2e84afbc1cc0efb3c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wc2hr" May 27 03:20:48.249794 kubelet[2715]: E0527 03:20:48.249721 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2d8c2d4bc9a15d2e9825cd2b2e18cb51a9466ce98f4acc2e84afbc1cc0efb3c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wc2hr" May 27 03:20:48.249876 kubelet[2715]: E0527 03:20:48.249783 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wc2hr_kube-system(4e64304e-2717-4b6f-b26a-edc7ae5aa9fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wc2hr_kube-system(4e64304e-2717-4b6f-b26a-edc7ae5aa9fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d8c2d4bc9a15d2e9825cd2b2e18cb51a9466ce98f4acc2e84afbc1cc0efb3c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wc2hr" podUID="4e64304e-2717-4b6f-b26a-edc7ae5aa9fb" May 27 03:20:48.267896 containerd[1589]: time="2025-05-27T03:20:48.267757145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f594547bd-ntfdc,Uid:7f0983f5-57fc-4b64-bb4c-328c69709d91,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27f74ca89af88a2117217decfcef139f9ab9d837275b0ac3601fc39c54e9a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.268164 kubelet[2715]: E0527 03:20:48.268093 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27f74ca89af88a2117217decfcef139f9ab9d837275b0ac3601fc39c54e9a10\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.268258 kubelet[2715]: E0527 03:20:48.268179 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27f74ca89af88a2117217decfcef139f9ab9d837275b0ac3601fc39c54e9a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f594547bd-ntfdc" May 27 03:20:48.268258 kubelet[2715]: E0527 03:20:48.268210 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27f74ca89af88a2117217decfcef139f9ab9d837275b0ac3601fc39c54e9a10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f594547bd-ntfdc" May 27 03:20:48.268369 kubelet[2715]: E0527 03:20:48.268258 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f594547bd-ntfdc_calico-apiserver(7f0983f5-57fc-4b64-bb4c-328c69709d91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f594547bd-ntfdc_calico-apiserver(7f0983f5-57fc-4b64-bb4c-328c69709d91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c27f74ca89af88a2117217decfcef139f9ab9d837275b0ac3601fc39c54e9a10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f594547bd-ntfdc" podUID="7f0983f5-57fc-4b64-bb4c-328c69709d91" May 27 
03:20:48.279264 containerd[1589]: time="2025-05-27T03:20:48.279196477Z" level=error msg="Failed to destroy network for sandbox \"4c051c0ffb69d5295af6b5ece8f84026ac2fa52665728dac82ec0defc39840b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.294139 containerd[1589]: time="2025-05-27T03:20:48.294081286Z" level=error msg="Failed to destroy network for sandbox \"305f38ee6094540f13a32650e566153e862a3d474ccea7d50ff9af4094d45e3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.313361 containerd[1589]: time="2025-05-27T03:20:48.313295131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-947mk,Uid:b4cfd04d-a56e-44a1-9a4a-e5cfed06478d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c051c0ffb69d5295af6b5ece8f84026ac2fa52665728dac82ec0defc39840b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.313557 kubelet[2715]: E0527 03:20:48.313506 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c051c0ffb69d5295af6b5ece8f84026ac2fa52665728dac82ec0defc39840b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.313615 kubelet[2715]: E0527 03:20:48.313562 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4c051c0ffb69d5295af6b5ece8f84026ac2fa52665728dac82ec0defc39840b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-947mk" May 27 03:20:48.313615 kubelet[2715]: E0527 03:20:48.313583 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c051c0ffb69d5295af6b5ece8f84026ac2fa52665728dac82ec0defc39840b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-947mk" May 27 03:20:48.313779 kubelet[2715]: E0527 03:20:48.313637 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-947mk_calico-system(b4cfd04d-a56e-44a1-9a4a-e5cfed06478d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-947mk_calico-system(b4cfd04d-a56e-44a1-9a4a-e5cfed06478d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c051c0ffb69d5295af6b5ece8f84026ac2fa52665728dac82ec0defc39840b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-947mk" podUID="b4cfd04d-a56e-44a1-9a4a-e5cfed06478d" May 27 03:20:48.344132 containerd[1589]: time="2025-05-27T03:20:48.344040583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76697885d7-g495m,Uid:f9153df7-a023-4862-a363-9cdd3429f7b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"305f38ee6094540f13a32650e566153e862a3d474ccea7d50ff9af4094d45e3f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.344620 kubelet[2715]: E0527 03:20:48.344377 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305f38ee6094540f13a32650e566153e862a3d474ccea7d50ff9af4094d45e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:48.344620 kubelet[2715]: E0527 03:20:48.344447 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305f38ee6094540f13a32650e566153e862a3d474ccea7d50ff9af4094d45e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76697885d7-g495m" May 27 03:20:48.344620 kubelet[2715]: E0527 03:20:48.344467 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305f38ee6094540f13a32650e566153e862a3d474ccea7d50ff9af4094d45e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76697885d7-g495m" May 27 03:20:48.344752 kubelet[2715]: E0527 03:20:48.344518 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76697885d7-g495m_calico-system(f9153df7-a023-4862-a363-9cdd3429f7b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-76697885d7-g495m_calico-system(f9153df7-a023-4862-a363-9cdd3429f7b2)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"305f38ee6094540f13a32650e566153e862a3d474ccea7d50ff9af4094d45e3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76697885d7-g495m" podUID="f9153df7-a023-4862-a363-9cdd3429f7b2" May 27 03:20:49.129352 systemd[1]: Created slice kubepods-besteffort-poda9973ce5_5b08_44e5_b570_c0fc70d71d29.slice - libcontainer container kubepods-besteffort-poda9973ce5_5b08_44e5_b570_c0fc70d71d29.slice. May 27 03:20:49.132906 containerd[1589]: time="2025-05-27T03:20:49.132867279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6cj4,Uid:a9973ce5-5b08-44e5-b570-c0fc70d71d29,Namespace:calico-system,Attempt:0,}" May 27 03:20:49.438349 containerd[1589]: time="2025-05-27T03:20:49.438201604Z" level=error msg="Failed to destroy network for sandbox \"5910a7b22428a11026cce820f0a6daba35b20023a6fe1c44a5f0624bcf6e7aac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:49.440962 systemd[1]: run-netns-cni\x2dd574b169\x2dda35\x2d5cba\x2d42b7\x2d43d8d5b5e9cf.mount: Deactivated successfully. 
May 27 03:20:49.597951 containerd[1589]: time="2025-05-27T03:20:49.597865657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6cj4,Uid:a9973ce5-5b08-44e5-b570-c0fc70d71d29,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5910a7b22428a11026cce820f0a6daba35b20023a6fe1c44a5f0624bcf6e7aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:49.598257 kubelet[2715]: E0527 03:20:49.598193 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5910a7b22428a11026cce820f0a6daba35b20023a6fe1c44a5f0624bcf6e7aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:49.598660 kubelet[2715]: E0527 03:20:49.598264 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5910a7b22428a11026cce820f0a6daba35b20023a6fe1c44a5f0624bcf6e7aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6cj4" May 27 03:20:49.598660 kubelet[2715]: E0527 03:20:49.598283 2715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5910a7b22428a11026cce820f0a6daba35b20023a6fe1c44a5f0624bcf6e7aac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k6cj4" 
May 27 03:20:49.598660 kubelet[2715]: E0527 03:20:49.598344 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k6cj4_calico-system(a9973ce5-5b08-44e5-b570-c0fc70d71d29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k6cj4_calico-system(a9973ce5-5b08-44e5-b570-c0fc70d71d29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5910a7b22428a11026cce820f0a6daba35b20023a6fe1c44a5f0624bcf6e7aac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k6cj4" podUID="a9973ce5-5b08-44e5-b570-c0fc70d71d29" May 27 03:20:54.129141 kubelet[2715]: I0527 03:20:54.129089 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:20:56.709144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount432309411.mount: Deactivated successfully. 
May 27 03:20:58.597087 containerd[1589]: time="2025-05-27T03:20:58.597027238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:58.615852 containerd[1589]: time="2025-05-27T03:20:58.615780049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:20:58.672015 containerd[1589]: time="2025-05-27T03:20:58.671925519Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:58.730684 containerd[1589]: time="2025-05-27T03:20:58.730612741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:20:58.731363 containerd[1589]: time="2025-05-27T03:20:58.731317886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 10.500912128s" May 27 03:20:58.731585 containerd[1589]: time="2025-05-27T03:20:58.731467797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 03:20:58.740695 containerd[1589]: time="2025-05-27T03:20:58.740652532Z" level=info msg="CreateContainer within sandbox \"71cf9dc4b8fdad1008a01e2d1a08458a0509e2eb4e69a47be8b6398aac3e7e4f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 03:20:59.033493 containerd[1589]: time="2025-05-27T03:20:59.033337969Z" level=info msg="Container 
d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411: CDI devices from CRI Config.CDIDevices: []" May 27 03:20:59.122890 containerd[1589]: time="2025-05-27T03:20:59.122808537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f594547bd-v8kd9,Uid:e490ce4e-3d5d-487a-9163-dd7539885ded,Namespace:calico-apiserver,Attempt:0,}" May 27 03:20:59.190124 containerd[1589]: time="2025-05-27T03:20:59.190063527Z" level=info msg="CreateContainer within sandbox \"71cf9dc4b8fdad1008a01e2d1a08458a0509e2eb4e69a47be8b6398aac3e7e4f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411\"" May 27 03:20:59.190660 containerd[1589]: time="2025-05-27T03:20:59.190617077Z" level=info msg="StartContainer for \"d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411\"" May 27 03:20:59.192588 containerd[1589]: time="2025-05-27T03:20:59.192549054Z" level=info msg="connecting to shim d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411" address="unix:///run/containerd/s/422b5456e2db52d632ff893c4e1eb50d34c9eaf4c5893ac731eb7823dc65f2a0" protocol=ttrpc version=3 May 27 03:20:59.216281 systemd[1]: Started cri-containerd-d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411.scope - libcontainer container d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411. 
May 27 03:20:59.244441 containerd[1589]: time="2025-05-27T03:20:59.244358745Z" level=error msg="Failed to destroy network for sandbox \"1573e55efdbef9187e99614af7023627c3bef4868f2c6ce56fc2611462bfc41f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:59.246129 containerd[1589]: time="2025-05-27T03:20:59.246071951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f594547bd-v8kd9,Uid:e490ce4e-3d5d-487a-9163-dd7539885ded,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1573e55efdbef9187e99614af7023627c3bef4868f2c6ce56fc2611462bfc41f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:59.246484 kubelet[2715]: E0527 03:20:59.246429 2715 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1573e55efdbef9187e99614af7023627c3bef4868f2c6ce56fc2611462bfc41f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:20:59.247908 kubelet[2715]: E0527 03:20:59.246510 2715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1573e55efdbef9187e99614af7023627c3bef4868f2c6ce56fc2611462bfc41f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f594547bd-v8kd9" May 27 03:20:59.247908 kubelet[2715]: E0527 03:20:59.246533 2715 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1573e55efdbef9187e99614af7023627c3bef4868f2c6ce56fc2611462bfc41f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f594547bd-v8kd9" May 27 03:20:59.247908 kubelet[2715]: E0527 03:20:59.247782 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f594547bd-v8kd9_calico-apiserver(e490ce4e-3d5d-487a-9163-dd7539885ded)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f594547bd-v8kd9_calico-apiserver(e490ce4e-3d5d-487a-9163-dd7539885ded)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1573e55efdbef9187e99614af7023627c3bef4868f2c6ce56fc2611462bfc41f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f594547bd-v8kd9" podUID="e490ce4e-3d5d-487a-9163-dd7539885ded" May 27 03:20:59.299678 containerd[1589]: time="2025-05-27T03:20:59.299288444Z" level=info msg="StartContainer for \"d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411\" returns successfully" May 27 03:20:59.373655 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 03:20:59.375114 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 03:20:59.638012 kubelet[2715]: I0527 03:20:59.637946 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9153df7-a023-4862-a363-9cdd3429f7b2-whisker-backend-key-pair\") pod \"f9153df7-a023-4862-a363-9cdd3429f7b2\" (UID: \"f9153df7-a023-4862-a363-9cdd3429f7b2\") " May 27 03:20:59.639563 kubelet[2715]: I0527 03:20:59.639521 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g24jz\" (UniqueName: \"kubernetes.io/projected/f9153df7-a023-4862-a363-9cdd3429f7b2-kube-api-access-g24jz\") pod \"f9153df7-a023-4862-a363-9cdd3429f7b2\" (UID: \"f9153df7-a023-4862-a363-9cdd3429f7b2\") " May 27 03:20:59.640069 kubelet[2715]: I0527 03:20:59.639765 2715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9153df7-a023-4862-a363-9cdd3429f7b2-whisker-ca-bundle\") pod \"f9153df7-a023-4862-a363-9cdd3429f7b2\" (UID: \"f9153df7-a023-4862-a363-9cdd3429f7b2\") " May 27 03:20:59.640999 kubelet[2715]: I0527 03:20:59.640954 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9153df7-a023-4862-a363-9cdd3429f7b2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f9153df7-a023-4862-a363-9cdd3429f7b2" (UID: "f9153df7-a023-4862-a363-9cdd3429f7b2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 03:20:59.644088 kubelet[2715]: I0527 03:20:59.644057 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9153df7-a023-4862-a363-9cdd3429f7b2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f9153df7-a023-4862-a363-9cdd3429f7b2" (UID: "f9153df7-a023-4862-a363-9cdd3429f7b2"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 27 03:20:59.644913 kubelet[2715]: I0527 03:20:59.644896 2715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9153df7-a023-4862-a363-9cdd3429f7b2-kube-api-access-g24jz" (OuterVolumeSpecName: "kube-api-access-g24jz") pod "f9153df7-a023-4862-a363-9cdd3429f7b2" (UID: "f9153df7-a023-4862-a363-9cdd3429f7b2"). InnerVolumeSpecName "kube-api-access-g24jz". PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 03:20:59.738887 systemd[1]: run-netns-cni\x2da29fac10\x2d3304\x2d65e6\x2d3f2b\x2d23a4edfccd3c.mount: Deactivated successfully. May 27 03:20:59.739013 systemd[1]: var-lib-kubelet-pods-f9153df7\x2da023\x2d4862\x2da363\x2d9cdd3429f7b2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg24jz.mount: Deactivated successfully. May 27 03:20:59.739101 systemd[1]: var-lib-kubelet-pods-f9153df7\x2da023\x2d4862\x2da363\x2d9cdd3429f7b2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 27 03:20:59.740392 kubelet[2715]: I0527 03:20:59.740347 2715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g24jz\" (UniqueName: \"kubernetes.io/projected/f9153df7-a023-4862-a363-9cdd3429f7b2-kube-api-access-g24jz\") on node \"localhost\" DevicePath \"\"" May 27 03:20:59.740392 kubelet[2715]: I0527 03:20:59.740387 2715 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9153df7-a023-4862-a363-9cdd3429f7b2-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 03:20:59.740537 kubelet[2715]: I0527 03:20:59.740400 2715 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9153df7-a023-4862-a363-9cdd3429f7b2-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 03:20:59.755671 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:53396.service - OpenSSH per-connection server daemon (10.0.0.1:53396). May 27 03:20:59.822470 sshd[3896]: Accepted publickey for core from 10.0.0.1 port 53396 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:20:59.824055 sshd-session[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:20:59.828658 systemd-logind[1573]: New session 8 of user core. May 27 03:20:59.839196 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 03:20:59.982760 sshd[3898]: Connection closed by 10.0.0.1 port 53396 May 27 03:20:59.983489 sshd-session[3896]: pam_unix(sshd:session): session closed for user core May 27 03:20:59.989417 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:53396.service: Deactivated successfully. May 27 03:20:59.992041 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:20:59.993232 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. May 27 03:20:59.994677 systemd-logind[1573]: Removed session 8. 
May 27 03:21:00.123186 containerd[1589]: time="2025-05-27T03:21:00.123105384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wc2hr,Uid:4e64304e-2717-4b6f-b26a-edc7ae5aa9fb,Namespace:kube-system,Attempt:0,}" May 27 03:21:00.372623 systemd[1]: Removed slice kubepods-besteffort-podf9153df7_a023_4862_a363_9cdd3429f7b2.slice - libcontainer container kubepods-besteffort-podf9153df7_a023_4862_a363_9cdd3429f7b2.slice. May 27 03:21:00.387741 kubelet[2715]: I0527 03:21:00.387191 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s6n2p" podStartSLOduration=3.421849083 podStartE2EDuration="27.387170806s" podCreationTimestamp="2025-05-27 03:20:33 +0000 UTC" firstStartedPulling="2025-05-27 03:20:34.76753968 +0000 UTC m=+22.731615371" lastFinishedPulling="2025-05-27 03:20:58.732861383 +0000 UTC m=+46.696937094" observedRunningTime="2025-05-27 03:21:00.386879339 +0000 UTC m=+48.350955060" watchObservedRunningTime="2025-05-27 03:21:00.387170806 +0000 UTC m=+48.351246507" May 27 03:21:00.465488 systemd[1]: Created slice kubepods-besteffort-podcbe83952_5db0_4c61_a4e6_4da765b5f113.slice - libcontainer container kubepods-besteffort-podcbe83952_5db0_4c61_a4e6_4da765b5f113.slice. 
May 27 03:21:00.546769 kubelet[2715]: I0527 03:21:00.546691 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cbe83952-5db0-4c61-a4e6-4da765b5f113-whisker-backend-key-pair\") pod \"whisker-76cf64c5cf-xckzx\" (UID: \"cbe83952-5db0-4c61-a4e6-4da765b5f113\") " pod="calico-system/whisker-76cf64c5cf-xckzx" May 27 03:21:00.547289 kubelet[2715]: I0527 03:21:00.546879 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq9q8\" (UniqueName: \"kubernetes.io/projected/cbe83952-5db0-4c61-a4e6-4da765b5f113-kube-api-access-rq9q8\") pod \"whisker-76cf64c5cf-xckzx\" (UID: \"cbe83952-5db0-4c61-a4e6-4da765b5f113\") " pod="calico-system/whisker-76cf64c5cf-xckzx" May 27 03:21:00.547289 kubelet[2715]: I0527 03:21:00.547115 2715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe83952-5db0-4c61-a4e6-4da765b5f113-whisker-ca-bundle\") pod \"whisker-76cf64c5cf-xckzx\" (UID: \"cbe83952-5db0-4c61-a4e6-4da765b5f113\") " pod="calico-system/whisker-76cf64c5cf-xckzx" May 27 03:21:00.564551 containerd[1589]: time="2025-05-27T03:21:00.564500104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411\" id:\"1002d02800cd8b97d721d96dd1e1ffa3fd1044f17e56730ab497c5fc26a682fc\" pid:3947 exit_status:1 exited_at:{seconds:1748316060 nanos:563798938}" May 27 03:21:00.570832 systemd-networkd[1501]: cali455f2f3e27f: Link UP May 27 03:21:00.572715 systemd-networkd[1501]: cali455f2f3e27f: Gained carrier May 27 03:21:00.587052 containerd[1589]: 2025-05-27 03:21:00.398 [INFO][3916] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:21:00.587052 containerd[1589]: 2025-05-27 03:21:00.422 [INFO][3916] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0 coredns-7c65d6cfc9- kube-system 4e64304e-2717-4b6f-b26a-edc7ae5aa9fb 843 0 2025-05-27 03:20:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-wc2hr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali455f2f3e27f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wc2hr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wc2hr-" May 27 03:21:00.587052 containerd[1589]: 2025-05-27 03:21:00.422 [INFO][3916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wc2hr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0" May 27 03:21:00.587052 containerd[1589]: 2025-05-27 03:21:00.514 [INFO][3930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" HandleID="k8s-pod-network.4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Workload="localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0" May 27 03:21:00.587331 containerd[1589]: 2025-05-27 03:21:00.514 [INFO][3930] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" HandleID="k8s-pod-network.4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Workload="localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000525410), Attrs:map[string]string{"namespace":"kube-system", 
"node":"localhost", "pod":"coredns-7c65d6cfc9-wc2hr", "timestamp":"2025-05-27 03:21:00.514033722 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:21:00.587331 containerd[1589]: 2025-05-27 03:21:00.514 [INFO][3930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:21:00.587331 containerd[1589]: 2025-05-27 03:21:00.514 [INFO][3930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:21:00.587331 containerd[1589]: 2025-05-27 03:21:00.514 [INFO][3930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:21:00.587331 containerd[1589]: 2025-05-27 03:21:00.526 [INFO][3930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" host="localhost" May 27 03:21:00.587331 containerd[1589]: 2025-05-27 03:21:00.533 [INFO][3930] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:21:00.587331 containerd[1589]: 2025-05-27 03:21:00.536 [INFO][3930] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:21:00.587331 containerd[1589]: 2025-05-27 03:21:00.538 [INFO][3930] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:21:00.587331 containerd[1589]: 2025-05-27 03:21:00.540 [INFO][3930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:21:00.587331 containerd[1589]: 2025-05-27 03:21:00.540 [INFO][3930] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" host="localhost" May 27 03:21:00.587743 containerd[1589]: 2025-05-27 03:21:00.544 [INFO][3930] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf May 27 03:21:00.587743 containerd[1589]: 2025-05-27 03:21:00.547 [INFO][3930] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" host="localhost" May 27 03:21:00.587743 containerd[1589]: 2025-05-27 03:21:00.557 [INFO][3930] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" host="localhost" May 27 03:21:00.587743 containerd[1589]: 2025-05-27 03:21:00.557 [INFO][3930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" host="localhost" May 27 03:21:00.587743 containerd[1589]: 2025-05-27 03:21:00.557 [INFO][3930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:21:00.587743 containerd[1589]: 2025-05-27 03:21:00.557 [INFO][3930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" HandleID="k8s-pod-network.4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Workload="localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0" May 27 03:21:00.587933 containerd[1589]: 2025-05-27 03:21:00.561 [INFO][3916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wc2hr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4e64304e-2717-4b6f-b26a-edc7ae5aa9fb", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-wc2hr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali455f2f3e27f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:00.588070 containerd[1589]: 2025-05-27 03:21:00.561 [INFO][3916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wc2hr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0" May 27 03:21:00.588070 containerd[1589]: 2025-05-27 03:21:00.561 [INFO][3916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali455f2f3e27f ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wc2hr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0" May 27 03:21:00.588070 containerd[1589]: 2025-05-27 03:21:00.572 [INFO][3916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wc2hr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0" May 27 03:21:00.588171 containerd[1589]: 2025-05-27 03:21:00.572 [INFO][3916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wc2hr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4e64304e-2717-4b6f-b26a-edc7ae5aa9fb", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf", Pod:"coredns-7c65d6cfc9-wc2hr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali455f2f3e27f", MAC:"9a:80:dc:b7:f2:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:00.588171 containerd[1589]: 2025-05-27 03:21:00.583 [INFO][3916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wc2hr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wc2hr-eth0" May 27 03:21:00.774080 containerd[1589]: time="2025-05-27T03:21:00.773937591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76cf64c5cf-xckzx,Uid:cbe83952-5db0-4c61-a4e6-4da765b5f113,Namespace:calico-system,Attempt:0,}" May 27 03:21:00.851045 containerd[1589]: time="2025-05-27T03:21:00.849121266Z" level=info msg="connecting to shim 4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf" address="unix:///run/containerd/s/a603ad9fe9155fee2424b959303f23e0e41bc8ea81fab5c7a93b9e8f61b39917" namespace=k8s.io protocol=ttrpc version=3 May 27 03:21:00.942527 systemd[1]: Started cri-containerd-4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf.scope - libcontainer container 4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf. May 27 03:21:00.971288 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:21:01.124642 containerd[1589]: time="2025-05-27T03:21:01.124590551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dgnq8,Uid:6171242c-12ea-4dcd-9745-2bb963d34861,Namespace:kube-system,Attempt:0,}" May 27 03:21:01.147232 containerd[1589]: time="2025-05-27T03:21:01.146944204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wc2hr,Uid:4e64304e-2717-4b6f-b26a-edc7ae5aa9fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf\"" May 27 03:21:01.152343 containerd[1589]: time="2025-05-27T03:21:01.152233603Z" level=info msg="CreateContainer within sandbox \"4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:21:01.159402 systemd-networkd[1501]: 
cali2b84958e945: Link UP May 27 03:21:01.159855 systemd-networkd[1501]: cali2b84958e945: Gained carrier May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:00.860 [INFO][4067] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:00.895 [INFO][4067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76cf64c5cf--xckzx-eth0 whisker-76cf64c5cf- calico-system cbe83952-5db0-4c61-a4e6-4da765b5f113 950 0 2025-05-27 03:21:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76cf64c5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76cf64c5cf-xckzx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2b84958e945 [] [] }} ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Namespace="calico-system" Pod="whisker-76cf64c5cf-xckzx" WorkloadEndpoint="localhost-k8s-whisker--76cf64c5cf--xckzx-" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:00.895 [INFO][4067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Namespace="calico-system" Pod="whisker-76cf64c5cf-xckzx" WorkloadEndpoint="localhost-k8s-whisker--76cf64c5cf--xckzx-eth0" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:00.967 [INFO][4113] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" HandleID="k8s-pod-network.51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Workload="localhost-k8s-whisker--76cf64c5cf--xckzx-eth0" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:00.970 [INFO][4113] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" HandleID="k8s-pod-network.51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Workload="localhost-k8s-whisker--76cf64c5cf--xckzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76cf64c5cf-xckzx", "timestamp":"2025-05-27 03:21:00.967624236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:00.970 [INFO][4113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:00.970 [INFO][4113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:00.970 [INFO][4113] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:00.979 [INFO][4113] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" host="localhost" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.105 [INFO][4113] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.113 [INFO][4113] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.119 [INFO][4113] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.123 [INFO][4113] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.124 [INFO][4113] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" host="localhost" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.127 [INFO][4113] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149 May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.138 [INFO][4113] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" host="localhost" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.145 [INFO][4113] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" host="localhost" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.145 [INFO][4113] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" host="localhost" May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.146 [INFO][4113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:21:01.182117 containerd[1589]: 2025-05-27 03:21:01.146 [INFO][4113] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" HandleID="k8s-pod-network.51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Workload="localhost-k8s-whisker--76cf64c5cf--xckzx-eth0" May 27 03:21:01.183660 containerd[1589]: 2025-05-27 03:21:01.154 [INFO][4067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Namespace="calico-system" Pod="whisker-76cf64c5cf-xckzx" WorkloadEndpoint="localhost-k8s-whisker--76cf64c5cf--xckzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76cf64c5cf--xckzx-eth0", GenerateName:"whisker-76cf64c5cf-", Namespace:"calico-system", SelfLink:"", UID:"cbe83952-5db0-4c61-a4e6-4da765b5f113", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 21, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76cf64c5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76cf64c5cf-xckzx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2b84958e945", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:01.183660 containerd[1589]: 2025-05-27 03:21:01.154 [INFO][4067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Namespace="calico-system" Pod="whisker-76cf64c5cf-xckzx" WorkloadEndpoint="localhost-k8s-whisker--76cf64c5cf--xckzx-eth0" May 27 03:21:01.183660 containerd[1589]: 2025-05-27 03:21:01.154 [INFO][4067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b84958e945 ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Namespace="calico-system" Pod="whisker-76cf64c5cf-xckzx" WorkloadEndpoint="localhost-k8s-whisker--76cf64c5cf--xckzx-eth0" May 27 03:21:01.183660 containerd[1589]: 2025-05-27 03:21:01.162 [INFO][4067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Namespace="calico-system" Pod="whisker-76cf64c5cf-xckzx" WorkloadEndpoint="localhost-k8s-whisker--76cf64c5cf--xckzx-eth0" May 27 03:21:01.183660 containerd[1589]: 2025-05-27 03:21:01.163 [INFO][4067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Namespace="calico-system" Pod="whisker-76cf64c5cf-xckzx" WorkloadEndpoint="localhost-k8s-whisker--76cf64c5cf--xckzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76cf64c5cf--xckzx-eth0", GenerateName:"whisker-76cf64c5cf-", Namespace:"calico-system", SelfLink:"", UID:"cbe83952-5db0-4c61-a4e6-4da765b5f113", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 21, 0, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76cf64c5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149", Pod:"whisker-76cf64c5cf-xckzx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2b84958e945", MAC:"ea:33:35:5e:31:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:01.183660 containerd[1589]: 2025-05-27 03:21:01.177 [INFO][4067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" Namespace="calico-system" Pod="whisker-76cf64c5cf-xckzx" WorkloadEndpoint="localhost-k8s-whisker--76cf64c5cf--xckzx-eth0" May 27 03:21:01.208583 containerd[1589]: time="2025-05-27T03:21:01.208505278Z" level=info msg="Container 46932df0fa8f6e8db3ca5d31aa9dbad81946df003b7ce8230cb9045be5c91f5f: CDI devices from CRI Config.CDIDevices: []" May 27 03:21:01.256953 containerd[1589]: time="2025-05-27T03:21:01.256876511Z" level=info msg="CreateContainer within sandbox \"4d43f3fe36821f5f401bfa7271f8d62364a0a5f46dee85454091281470527faf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46932df0fa8f6e8db3ca5d31aa9dbad81946df003b7ce8230cb9045be5c91f5f\"" May 27 03:21:01.258218 containerd[1589]: time="2025-05-27T03:21:01.258190357Z" level=info 
msg="StartContainer for \"46932df0fa8f6e8db3ca5d31aa9dbad81946df003b7ce8230cb9045be5c91f5f\"" May 27 03:21:01.263402 containerd[1589]: time="2025-05-27T03:21:01.263360492Z" level=info msg="connecting to shim 51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149" address="unix:///run/containerd/s/40136cd83e86ad125e1d8be635170a425da5e50b3405c1ef9484600566b7a352" namespace=k8s.io protocol=ttrpc version=3 May 27 03:21:01.281776 containerd[1589]: time="2025-05-27T03:21:01.281693227Z" level=info msg="connecting to shim 46932df0fa8f6e8db3ca5d31aa9dbad81946df003b7ce8230cb9045be5c91f5f" address="unix:///run/containerd/s/a603ad9fe9155fee2424b959303f23e0e41bc8ea81fab5c7a93b9e8f61b39917" protocol=ttrpc version=3 May 27 03:21:01.308279 systemd[1]: Started cri-containerd-51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149.scope - libcontainer container 51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149. May 27 03:21:01.313657 systemd[1]: Started cri-containerd-46932df0fa8f6e8db3ca5d31aa9dbad81946df003b7ce8230cb9045be5c91f5f.scope - libcontainer container 46932df0fa8f6e8db3ca5d31aa9dbad81946df003b7ce8230cb9045be5c91f5f. 
May 27 03:21:01.366637 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:21:01.371852 containerd[1589]: time="2025-05-27T03:21:01.371773658Z" level=info msg="StartContainer for \"46932df0fa8f6e8db3ca5d31aa9dbad81946df003b7ce8230cb9045be5c91f5f\" returns successfully" May 27 03:21:01.398372 systemd-networkd[1501]: vxlan.calico: Link UP May 27 03:21:01.398382 systemd-networkd[1501]: vxlan.calico: Gained carrier May 27 03:21:01.412824 systemd-networkd[1501]: cali9bf61054877: Link UP May 27 03:21:01.413245 systemd-networkd[1501]: cali9bf61054877: Gained carrier May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.240 [INFO][4180] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0 coredns-7c65d6cfc9- kube-system 6171242c-12ea-4dcd-9745-2bb963d34861 831 0 2025-05-27 03:20:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-dgnq8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9bf61054877 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dgnq8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dgnq8-" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.241 [INFO][4180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dgnq8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.346 [INFO][4202] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" HandleID="k8s-pod-network.4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Workload="localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.347 [INFO][4202] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" HandleID="k8s-pod-network.4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Workload="localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eab0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-dgnq8", "timestamp":"2025-05-27 03:21:01.346722522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.349 [INFO][4202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.349 [INFO][4202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.349 [INFO][4202] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.356 [INFO][4202] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" host="localhost" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.364 [INFO][4202] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.374 [INFO][4202] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.377 [INFO][4202] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.380 [INFO][4202] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.380 [INFO][4202] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" host="localhost" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.382 [INFO][4202] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184 May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.386 [INFO][4202] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" host="localhost" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.400 [INFO][4202] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" host="localhost" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.400 [INFO][4202] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" host="localhost" May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.400 [INFO][4202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:21:01.441395 containerd[1589]: 2025-05-27 03:21:01.400 [INFO][4202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" HandleID="k8s-pod-network.4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Workload="localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0" May 27 03:21:01.442115 containerd[1589]: 2025-05-27 03:21:01.408 [INFO][4180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dgnq8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6171242c-12ea-4dcd-9745-2bb963d34861", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-dgnq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bf61054877", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:01.442115 containerd[1589]: 2025-05-27 03:21:01.408 [INFO][4180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dgnq8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0" May 27 03:21:01.442115 containerd[1589]: 2025-05-27 03:21:01.408 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bf61054877 ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dgnq8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0" May 27 03:21:01.442115 containerd[1589]: 2025-05-27 03:21:01.412 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dgnq8" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0" May 27 03:21:01.442115 containerd[1589]: 2025-05-27 03:21:01.419 [INFO][4180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dgnq8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6171242c-12ea-4dcd-9745-2bb963d34861", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184", Pod:"coredns-7c65d6cfc9-dgnq8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bf61054877", MAC:"ea:34:8a:32:a1:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:01.442115 containerd[1589]: 2025-05-27 03:21:01.431 [INFO][4180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dgnq8" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dgnq8-eth0" May 27 03:21:01.454009 containerd[1589]: time="2025-05-27T03:21:01.453876834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76cf64c5cf-xckzx,Uid:cbe83952-5db0-4c61-a4e6-4da765b5f113,Namespace:calico-system,Attempt:0,} returns sandbox id \"51895bf0c7a5ec0ceef969a5a96ecf907a3787d0c70e7000abd895cb0a90c149\"" May 27 03:21:01.457909 containerd[1589]: time="2025-05-27T03:21:01.457879738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:21:01.490135 containerd[1589]: time="2025-05-27T03:21:01.489731772Z" level=info msg="connecting to shim 4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184" address="unix:///run/containerd/s/d8987abbe25ef51e1ec125bb43ac2153378f6a6ac9ce79a62c629f297257039b" namespace=k8s.io protocol=ttrpc version=3 May 27 03:21:01.551296 containerd[1589]: time="2025-05-27T03:21:01.551256436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411\" id:\"cc28f3ad96151052f1be0091fa545bfabfbbb7f0098343b4f710536230967e43\" pid:4312 exit_status:1 exited_at:{seconds:1748316061 nanos:550852609}" May 27 03:21:01.554423 systemd[1]: Started cri-containerd-4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184.scope - libcontainer container 4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184. 
May 27 03:21:01.575276 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:21:01.614492 containerd[1589]: time="2025-05-27T03:21:01.614374292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dgnq8,Uid:6171242c-12ea-4dcd-9745-2bb963d34861,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184\"" May 27 03:21:01.618525 containerd[1589]: time="2025-05-27T03:21:01.618471043Z" level=info msg="CreateContainer within sandbox \"4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:21:01.635000 containerd[1589]: time="2025-05-27T03:21:01.634917878Z" level=info msg="Container 4ca4ad8b0ffe949da5237dfe2a81f828239323f7f26aa28d3f4291a8c0d8fd30: CDI devices from CRI Config.CDIDevices: []" May 27 03:21:01.647542 containerd[1589]: time="2025-05-27T03:21:01.647495408Z" level=info msg="CreateContainer within sandbox \"4dc57bee51a95e40b660e3d7dee6296a325c381ceec9a5f8d13243de0dce3184\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ca4ad8b0ffe949da5237dfe2a81f828239323f7f26aa28d3f4291a8c0d8fd30\"" May 27 03:21:01.648379 containerd[1589]: time="2025-05-27T03:21:01.648328132Z" level=info msg="StartContainer for \"4ca4ad8b0ffe949da5237dfe2a81f828239323f7f26aa28d3f4291a8c0d8fd30\"" May 27 03:21:01.650059 containerd[1589]: time="2025-05-27T03:21:01.649936191Z" level=info msg="connecting to shim 4ca4ad8b0ffe949da5237dfe2a81f828239323f7f26aa28d3f4291a8c0d8fd30" address="unix:///run/containerd/s/d8987abbe25ef51e1ec125bb43ac2153378f6a6ac9ce79a62c629f297257039b" protocol=ttrpc version=3 May 27 03:21:01.679129 systemd[1]: Started cri-containerd-4ca4ad8b0ffe949da5237dfe2a81f828239323f7f26aa28d3f4291a8c0d8fd30.scope - libcontainer container 4ca4ad8b0ffe949da5237dfe2a81f828239323f7f26aa28d3f4291a8c0d8fd30. 
May 27 03:21:01.713477 containerd[1589]: time="2025-05-27T03:21:01.713401809Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:01.715901 containerd[1589]: time="2025-05-27T03:21:01.715869051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:21:01.732561 containerd[1589]: time="2025-05-27T03:21:01.732402528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:01.732561 containerd[1589]: time="2025-05-27T03:21:01.732465837Z" level=info msg="StartContainer for \"4ca4ad8b0ffe949da5237dfe2a81f828239323f7f26aa28d3f4291a8c0d8fd30\" returns successfully" May 27 03:21:01.732961 kubelet[2715]: E0527 03:21:01.732903 2715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:21:01.733755 kubelet[2715]: E0527 03:21:01.733682 2715 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:21:01.740038 kubelet[2715]: E0527 03:21:01.739447 2715 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:08b7d48894224f42b8d11344c5939374,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rq9q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-76cf64c5cf-xckzx_calico-system(cbe83952-5db0-4c61-a4e6-4da765b5f113): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:01.742555 containerd[1589]: time="2025-05-27T03:21:01.742501044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:21:01.786613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1627091391.mount: Deactivated successfully. May 27 03:21:02.031579 containerd[1589]: time="2025-05-27T03:21:02.031418232Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:02.045728 containerd[1589]: time="2025-05-27T03:21:02.045464669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:02.045972 kubelet[2715]: E0527 03:21:02.045785 2715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: 
failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:21:02.045972 kubelet[2715]: E0527 03:21:02.045857 2715 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:21:02.046213 kubelet[2715]: E0527 03:21:02.046014 2715 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rq9q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cf64c5cf-xckzx_calico-system(cbe83952-5db0-4c61-a4e6-4da765b5f113): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:02.047561 kubelet[2715]: E0527 03:21:02.047495 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-76cf64c5cf-xckzx" podUID="cbe83952-5db0-4c61-a4e6-4da765b5f113" May 27 03:21:02.055580 containerd[1589]: time="2025-05-27T03:21:02.045567372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:21:02.123107 containerd[1589]: time="2025-05-27T03:21:02.123063720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bccffd987-jvbth,Uid:7595602a-5d40-40fb-9abf-9275aca2a744,Namespace:calico-system,Attempt:0,}" May 27 03:21:02.123500 containerd[1589]: time="2025-05-27T03:21:02.123063760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6cj4,Uid:a9973ce5-5b08-44e5-b570-c0fc70d71d29,Namespace:calico-system,Attempt:0,}" May 27 03:21:02.123500 containerd[1589]: time="2025-05-27T03:21:02.123113754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f594547bd-ntfdc,Uid:7f0983f5-57fc-4b64-bb4c-328c69709d91,Namespace:calico-apiserver,Attempt:0,}" May 27 03:21:02.124586 kubelet[2715]: I0527 03:21:02.124550 2715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9153df7-a023-4862-a363-9cdd3429f7b2" path="/var/lib/kubelet/pods/f9153df7-a023-4862-a363-9cdd3429f7b2/volumes" May 27 03:21:02.229157 systemd-networkd[1501]: cali2b84958e945: Gained IPv6LL May 27 03:21:02.257350 systemd-networkd[1501]: cali2339b098bab: Link UP May 27 03:21:02.257761 systemd-networkd[1501]: cali2339b098bab: Gained carrier May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.171 [INFO][4484] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-csi--node--driver--k6cj4-eth0 csi-node-driver- calico-system a9973ce5-5b08-44e5-b570-c0fc70d71d29 719 0 2025-05-27 03:20:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-k6cj4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2339b098bab [] [] }} ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Namespace="calico-system" Pod="csi-node-driver-k6cj4" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6cj4-" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.171 [INFO][4484] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Namespace="calico-system" Pod="csi-node-driver-k6cj4" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6cj4-eth0" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.212 [INFO][4523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" HandleID="k8s-pod-network.1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Workload="localhost-k8s-csi--node--driver--k6cj4-eth0" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.213 [INFO][4523] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" HandleID="k8s-pod-network.1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Workload="localhost-k8s-csi--node--driver--k6cj4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000363620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-k6cj4", "timestamp":"2025-05-27 03:21:02.212725995 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.213 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.213 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.213 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.221 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" host="localhost" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.226 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.231 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.235 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.238 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.238 [INFO][4523] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" host="localhost" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.239 [INFO][4523] ipam/ipam.go 1764: 
Creating new handle: k8s-pod-network.1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220 May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.243 [INFO][4523] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" host="localhost" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.249 [INFO][4523] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" host="localhost" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.249 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" host="localhost" May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.249 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:21:02.275213 containerd[1589]: 2025-05-27 03:21:02.249 [INFO][4523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" HandleID="k8s-pod-network.1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Workload="localhost-k8s-csi--node--driver--k6cj4-eth0" May 27 03:21:02.278195 containerd[1589]: 2025-05-27 03:21:02.252 [INFO][4484] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Namespace="calico-system" Pod="csi-node-driver-k6cj4" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6cj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k6cj4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a9973ce5-5b08-44e5-b570-c0fc70d71d29", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-k6cj4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2339b098bab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:02.278195 containerd[1589]: 2025-05-27 03:21:02.252 [INFO][4484] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Namespace="calico-system" Pod="csi-node-driver-k6cj4" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6cj4-eth0" May 27 03:21:02.278195 containerd[1589]: 2025-05-27 03:21:02.252 [INFO][4484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2339b098bab ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Namespace="calico-system" Pod="csi-node-driver-k6cj4" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6cj4-eth0" May 27 03:21:02.278195 containerd[1589]: 2025-05-27 03:21:02.257 [INFO][4484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Namespace="calico-system" Pod="csi-node-driver-k6cj4" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6cj4-eth0" May 27 03:21:02.278195 containerd[1589]: 2025-05-27 03:21:02.259 [INFO][4484] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Namespace="calico-system" Pod="csi-node-driver-k6cj4" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6cj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k6cj4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a9973ce5-5b08-44e5-b570-c0fc70d71d29", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 33, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220", Pod:"csi-node-driver-k6cj4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2339b098bab", MAC:"2a:bc:e9:d6:f3:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:02.278195 containerd[1589]: 2025-05-27 03:21:02.271 [INFO][4484] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" Namespace="calico-system" Pod="csi-node-driver-k6cj4" WorkloadEndpoint="localhost-k8s-csi--node--driver--k6cj4-eth0" May 27 03:21:02.295117 systemd-networkd[1501]: cali455f2f3e27f: Gained IPv6LL May 27 03:21:02.327504 containerd[1589]: time="2025-05-27T03:21:02.327331222Z" level=info msg="connecting to shim 1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220" address="unix:///run/containerd/s/cd841850b8981a5b1bdfa4408f54450dab97f7139adfcc52f3717a474dbf3d1c" namespace=k8s.io protocol=ttrpc version=3 May 27 03:21:02.367359 systemd[1]: Started 
cri-containerd-1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220.scope - libcontainer container 1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220. May 27 03:21:02.376865 systemd-networkd[1501]: cali15f4744af31: Link UP May 27 03:21:02.381837 systemd-networkd[1501]: cali15f4744af31: Gained carrier May 27 03:21:02.389463 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:21:02.398018 kubelet[2715]: E0527 03:21:02.396554 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cf64c5cf-xckzx" podUID="cbe83952-5db0-4c61-a4e6-4da765b5f113" May 27 03:21:02.408055 kubelet[2715]: I0527 03:21:02.407889 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dgnq8" podStartSLOduration=44.407865151 podStartE2EDuration="44.407865151s" podCreationTimestamp="2025-05-27 03:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:21:02.40312839 +0000 UTC m=+50.367204111" watchObservedRunningTime="2025-05-27 03:21:02.407865151 +0000 UTC m=+50.371940852" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.180 [INFO][4478] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0 calico-kube-controllers-5bccffd987- calico-system 7595602a-5d40-40fb-9abf-9275aca2a744 842 0 2025-05-27 03:20:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers 
k8s-app:calico-kube-controllers pod-template-hash:5bccffd987 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5bccffd987-jvbth eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali15f4744af31 [] [] }} ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Namespace="calico-system" Pod="calico-kube-controllers-5bccffd987-jvbth" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.181 [INFO][4478] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Namespace="calico-system" Pod="calico-kube-controllers-5bccffd987-jvbth" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.224 [INFO][4530] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" HandleID="k8s-pod-network.cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Workload="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.224 [INFO][4530] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" HandleID="k8s-pod-network.cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Workload="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5bccffd987-jvbth", "timestamp":"2025-05-27 03:21:02.224478196 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.224 [INFO][4530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.249 [INFO][4530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.249 [INFO][4530] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.324 [INFO][4530] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" host="localhost" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.338 [INFO][4530] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.344 [INFO][4530] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.347 [INFO][4530] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.349 [INFO][4530] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.350 [INFO][4530] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" host="localhost" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.351 [INFO][4530] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.357 [INFO][4530] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" host="localhost" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.366 [INFO][4530] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" host="localhost" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.366 [INFO][4530] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" host="localhost" May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.366 [INFO][4530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:21:02.435375 containerd[1589]: 2025-05-27 03:21:02.366 [INFO][4530] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" HandleID="k8s-pod-network.cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Workload="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0" May 27 03:21:02.436055 containerd[1589]: 2025-05-27 03:21:02.372 [INFO][4478] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Namespace="calico-system" Pod="calico-kube-controllers-5bccffd987-jvbth" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0", GenerateName:"calico-kube-controllers-5bccffd987-", Namespace:"calico-system", SelfLink:"", UID:"7595602a-5d40-40fb-9abf-9275aca2a744", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bccffd987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5bccffd987-jvbth", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali15f4744af31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:02.436055 containerd[1589]: 2025-05-27 03:21:02.373 [INFO][4478] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Namespace="calico-system" Pod="calico-kube-controllers-5bccffd987-jvbth" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0" May 27 03:21:02.436055 containerd[1589]: 2025-05-27 03:21:02.373 [INFO][4478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15f4744af31 ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Namespace="calico-system" Pod="calico-kube-controllers-5bccffd987-jvbth" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0" May 27 03:21:02.436055 containerd[1589]: 2025-05-27 03:21:02.380 [INFO][4478] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Namespace="calico-system" Pod="calico-kube-controllers-5bccffd987-jvbth" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0" May 27 03:21:02.436055 containerd[1589]: 2025-05-27 03:21:02.387 [INFO][4478] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Namespace="calico-system" Pod="calico-kube-controllers-5bccffd987-jvbth" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0", GenerateName:"calico-kube-controllers-5bccffd987-", Namespace:"calico-system", SelfLink:"", UID:"7595602a-5d40-40fb-9abf-9275aca2a744", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bccffd987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b", Pod:"calico-kube-controllers-5bccffd987-jvbth", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali15f4744af31", MAC:"96:f3:fa:1e:b7:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:02.436055 containerd[1589]: 2025-05-27 03:21:02.420 [INFO][4478] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" Namespace="calico-system" Pod="calico-kube-controllers-5bccffd987-jvbth" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bccffd987--jvbth-eth0" May 27 03:21:02.437488 containerd[1589]: time="2025-05-27T03:21:02.437348275Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-k6cj4,Uid:a9973ce5-5b08-44e5-b570-c0fc70d71d29,Namespace:calico-system,Attempt:0,} returns sandbox id \"1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220\"" May 27 03:21:02.442026 containerd[1589]: time="2025-05-27T03:21:02.441969209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:21:02.448685 kubelet[2715]: I0527 03:21:02.448538 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wc2hr" podStartSLOduration=44.448520667 podStartE2EDuration="44.448520667s" podCreationTimestamp="2025-05-27 03:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:21:02.447176613 +0000 UTC m=+50.411252314" watchObservedRunningTime="2025-05-27 03:21:02.448520667 +0000 UTC m=+50.412596368" May 27 03:21:02.482559 containerd[1589]: time="2025-05-27T03:21:02.482514566Z" level=info msg="connecting to shim cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b" address="unix:///run/containerd/s/39eab40c7fafcb59dd97fbba222ed55652a445244da5051e49e9bd10f07d4b2b" namespace=k8s.io protocol=ttrpc version=3 May 27 03:21:02.504466 systemd-networkd[1501]: caliedcb7007130: Link UP May 27 03:21:02.505836 systemd-networkd[1501]: caliedcb7007130: Gained carrier May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.183 [INFO][4499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0 calico-apiserver-f594547bd- calico-apiserver 7f0983f5-57fc-4b64-bb4c-328c69709d91 839 0 2025-05-27 03:20:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f594547bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] 
[] [] []} {k8s localhost calico-apiserver-f594547bd-ntfdc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliedcb7007130 [] [] }} ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-ntfdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--ntfdc-" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.187 [INFO][4499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-ntfdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.228 [INFO][4532] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" HandleID="k8s-pod-network.8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" Workload="localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.228 [INFO][4532] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" HandleID="k8s-pod-network.8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" Workload="localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f594547bd-ntfdc", "timestamp":"2025-05-27 03:21:02.228433711 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:21:02.522788 
containerd[1589]: 2025-05-27 03:21:02.228 [INFO][4532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.366 [INFO][4532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.366 [INFO][4532] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.428 [INFO][4532] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" host="localhost" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.448 [INFO][4532] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.461 [INFO][4532] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.468 [INFO][4532] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.479 [INFO][4532] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.479 [INFO][4532] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" host="localhost" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.481 [INFO][4532] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2 May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.486 [INFO][4532] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" host="localhost" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.495 [INFO][4532] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" host="localhost" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.495 [INFO][4532] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" host="localhost" May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.495 [INFO][4532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:21:02.522788 containerd[1589]: 2025-05-27 03:21:02.495 [INFO][4532] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" HandleID="k8s-pod-network.8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" Workload="localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0" May 27 03:21:02.524707 containerd[1589]: 2025-05-27 03:21:02.499 [INFO][4499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-ntfdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0", GenerateName:"calico-apiserver-f594547bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f0983f5-57fc-4b64-bb4c-328c69709d91", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 30, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f594547bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f594547bd-ntfdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedcb7007130", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:02.524707 containerd[1589]: 2025-05-27 03:21:02.499 [INFO][4499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-ntfdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0" May 27 03:21:02.524707 containerd[1589]: 2025-05-27 03:21:02.499 [INFO][4499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedcb7007130 ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-ntfdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0" May 27 03:21:02.524707 containerd[1589]: 2025-05-27 03:21:02.505 [INFO][4499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" 
Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-ntfdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0" May 27 03:21:02.524707 containerd[1589]: 2025-05-27 03:21:02.506 [INFO][4499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-ntfdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0", GenerateName:"calico-apiserver-f594547bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f0983f5-57fc-4b64-bb4c-328c69709d91", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f594547bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2", Pod:"calico-apiserver-f594547bd-ntfdc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliedcb7007130", MAC:"2e:76:dc:e8:b7:38", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:02.524707 containerd[1589]: 2025-05-27 03:21:02.519 [INFO][4499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-ntfdc" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--ntfdc-eth0" May 27 03:21:02.524292 systemd[1]: Started cri-containerd-cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b.scope - libcontainer container cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b. May 27 03:21:02.542609 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:21:02.557333 containerd[1589]: time="2025-05-27T03:21:02.556924892Z" level=info msg="connecting to shim 8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2" address="unix:///run/containerd/s/a111096a7559c62f3fdd9a7fb14d0050ab1cf4a75f41a92671c11d07e9e6d0e5" namespace=k8s.io protocol=ttrpc version=3 May 27 03:21:02.585976 containerd[1589]: time="2025-05-27T03:21:02.585925980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bccffd987-jvbth,Uid:7595602a-5d40-40fb-9abf-9275aca2a744,Namespace:calico-system,Attempt:0,} returns sandbox id \"cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b\"" May 27 03:21:02.589160 systemd[1]: Started cri-containerd-8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2.scope - libcontainer container 8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2. 
May 27 03:21:02.609966 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:21:02.647284 containerd[1589]: time="2025-05-27T03:21:02.647211439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f594547bd-ntfdc,Uid:7f0983f5-57fc-4b64-bb4c-328c69709d91,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2\"" May 27 03:21:02.933162 systemd-networkd[1501]: cali9bf61054877: Gained IPv6LL May 27 03:21:03.122619 containerd[1589]: time="2025-05-27T03:21:03.122567578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-947mk,Uid:b4cfd04d-a56e-44a1-9a4a-e5cfed06478d,Namespace:calico-system,Attempt:0,}" May 27 03:21:03.129237 systemd-networkd[1501]: vxlan.calico: Gained IPv6LL May 27 03:21:03.307610 systemd-networkd[1501]: cali0917c808299: Link UP May 27 03:21:03.308567 systemd-networkd[1501]: cali0917c808299: Gained carrier May 27 03:21:03.318099 systemd-networkd[1501]: cali2339b098bab: Gained IPv6LL May 27 03:21:03.398720 kubelet[2715]: E0527 03:21:03.398672 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cf64c5cf-xckzx" podUID="cbe83952-5db0-4c61-a4e6-4da765b5f113" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.169 [INFO][4719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--8f77d7b6c--947mk-eth0 goldmane-8f77d7b6c- calico-system b4cfd04d-a56e-44a1-9a4a-e5cfed06478d 841 0 2025-05-27 03:20:33 +0000 UTC map[app.kubernetes.io/name:goldmane 
k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-8f77d7b6c-947mk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0917c808299 [] [] }} ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Namespace="calico-system" Pod="goldmane-8f77d7b6c-947mk" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--947mk-" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.169 [INFO][4719] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Namespace="calico-system" Pod="goldmane-8f77d7b6c-947mk" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--947mk-eth0" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.204 [INFO][4734] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" HandleID="k8s-pod-network.06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Workload="localhost-k8s-goldmane--8f77d7b6c--947mk-eth0" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.204 [INFO][4734] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" HandleID="k8s-pod-network.06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Workload="localhost-k8s-goldmane--8f77d7b6c--947mk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004375c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-8f77d7b6c-947mk", "timestamp":"2025-05-27 03:21:03.204672669 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.204 [INFO][4734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.204 [INFO][4734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.205 [INFO][4734] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.211 [INFO][4734] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" host="localhost" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.215 [INFO][4734] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.219 [INFO][4734] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.222 [INFO][4734] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.225 [INFO][4734] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.225 [INFO][4734] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" host="localhost" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.226 [INFO][4734] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108 May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.245 [INFO][4734] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" host="localhost" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.301 [INFO][4734] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" host="localhost" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.301 [INFO][4734] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" host="localhost" May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.301 [INFO][4734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:21:03.430557 containerd[1589]: 2025-05-27 03:21:03.301 [INFO][4734] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" HandleID="k8s-pod-network.06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Workload="localhost-k8s-goldmane--8f77d7b6c--947mk-eth0" May 27 03:21:03.431764 containerd[1589]: 2025-05-27 03:21:03.304 [INFO][4719] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Namespace="calico-system" Pod="goldmane-8f77d7b6c-947mk" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--947mk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--947mk-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"b4cfd04d-a56e-44a1-9a4a-e5cfed06478d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-8f77d7b6c-947mk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0917c808299", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:03.431764 containerd[1589]: 2025-05-27 03:21:03.305 [INFO][4719] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Namespace="calico-system" Pod="goldmane-8f77d7b6c-947mk" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--947mk-eth0" May 27 03:21:03.431764 containerd[1589]: 2025-05-27 03:21:03.305 [INFO][4719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0917c808299 ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Namespace="calico-system" Pod="goldmane-8f77d7b6c-947mk" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--947mk-eth0" May 27 03:21:03.431764 containerd[1589]: 2025-05-27 03:21:03.309 [INFO][4719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Namespace="calico-system" Pod="goldmane-8f77d7b6c-947mk" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--947mk-eth0" May 27 03:21:03.431764 containerd[1589]: 
2025-05-27 03:21:03.310 [INFO][4719] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Namespace="calico-system" Pod="goldmane-8f77d7b6c-947mk" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--947mk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--947mk-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"b4cfd04d-a56e-44a1-9a4a-e5cfed06478d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108", Pod:"goldmane-8f77d7b6c-947mk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0917c808299", MAC:"2e:42:e1:14:42:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:03.431764 containerd[1589]: 2025-05-27 03:21:03.425 [INFO][4719] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" Namespace="calico-system" Pod="goldmane-8f77d7b6c-947mk" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--947mk-eth0" May 27 03:21:03.637318 systemd-networkd[1501]: cali15f4744af31: Gained IPv6LL May 27 03:21:03.732206 containerd[1589]: time="2025-05-27T03:21:03.732153748Z" level=info msg="connecting to shim 06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108" address="unix:///run/containerd/s/c7a723536baecb7867b4592343184dd6ec3164dbf5dd042706bfe4e5375648f4" namespace=k8s.io protocol=ttrpc version=3 May 27 03:21:03.766225 systemd[1]: Started cri-containerd-06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108.scope - libcontainer container 06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108. May 27 03:21:03.782559 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:21:03.880372 containerd[1589]: time="2025-05-27T03:21:03.880293485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-947mk,Uid:b4cfd04d-a56e-44a1-9a4a-e5cfed06478d,Namespace:calico-system,Attempt:0,} returns sandbox id \"06ab677df6ee9208b7f46df17983b5323967b2e931a85f8cadc53ae0f309e108\"" May 27 03:21:03.893224 systemd-networkd[1501]: caliedcb7007130: Gained IPv6LL May 27 03:21:04.493161 containerd[1589]: time="2025-05-27T03:21:04.493084550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:04.494894 containerd[1589]: time="2025-05-27T03:21:04.494846056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:21:04.497316 containerd[1589]: time="2025-05-27T03:21:04.497271277Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:04.499925 containerd[1589]: time="2025-05-27T03:21:04.499859405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:04.500455 containerd[1589]: time="2025-05-27T03:21:04.500418404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.05822216s" May 27 03:21:04.500455 containerd[1589]: time="2025-05-27T03:21:04.500448531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:21:04.501309 containerd[1589]: time="2025-05-27T03:21:04.501276444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 03:21:04.502815 containerd[1589]: time="2025-05-27T03:21:04.502742657Z" level=info msg="CreateContainer within sandbox \"1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:21:04.533234 containerd[1589]: time="2025-05-27T03:21:04.533179865Z" level=info msg="Container 89d84db72271856fee93b93d3f7191e024c484ede4b270ccb09ff5f5733d8bf7: CDI devices from CRI Config.CDIDevices: []" May 27 03:21:04.568197 containerd[1589]: time="2025-05-27T03:21:04.568144549Z" level=info msg="CreateContainer within sandbox \"1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"89d84db72271856fee93b93d3f7191e024c484ede4b270ccb09ff5f5733d8bf7\"" May 27 
03:21:04.568734 containerd[1589]: time="2025-05-27T03:21:04.568690574Z" level=info msg="StartContainer for \"89d84db72271856fee93b93d3f7191e024c484ede4b270ccb09ff5f5733d8bf7\"" May 27 03:21:04.570565 containerd[1589]: time="2025-05-27T03:21:04.570524286Z" level=info msg="connecting to shim 89d84db72271856fee93b93d3f7191e024c484ede4b270ccb09ff5f5733d8bf7" address="unix:///run/containerd/s/cd841850b8981a5b1bdfa4408f54450dab97f7139adfcc52f3717a474dbf3d1c" protocol=ttrpc version=3 May 27 03:21:04.596157 systemd[1]: Started cri-containerd-89d84db72271856fee93b93d3f7191e024c484ede4b270ccb09ff5f5733d8bf7.scope - libcontainer container 89d84db72271856fee93b93d3f7191e024c484ede4b270ccb09ff5f5733d8bf7. May 27 03:21:04.711412 containerd[1589]: time="2025-05-27T03:21:04.711372583Z" level=info msg="StartContainer for \"89d84db72271856fee93b93d3f7191e024c484ede4b270ccb09ff5f5733d8bf7\" returns successfully" May 27 03:21:04.996288 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:56360.service - OpenSSH per-connection server daemon (10.0.0.1:56360). May 27 03:21:05.061756 sshd[4835]: Accepted publickey for core from 10.0.0.1 port 56360 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:05.064258 sshd-session[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:05.070880 systemd-logind[1573]: New session 9 of user core. May 27 03:21:05.077173 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:21:05.227266 sshd[4837]: Connection closed by 10.0.0.1 port 56360 May 27 03:21:05.227617 sshd-session[4835]: pam_unix(sshd:session): session closed for user core May 27 03:21:05.232924 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:56360.service: Deactivated successfully. May 27 03:21:05.235481 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:21:05.236708 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. May 27 03:21:05.238239 systemd-logind[1573]: Removed session 9. 
May 27 03:21:05.301251 systemd-networkd[1501]: cali0917c808299: Gained IPv6LL May 27 03:21:06.910953 containerd[1589]: time="2025-05-27T03:21:06.910900062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:06.946768 containerd[1589]: time="2025-05-27T03:21:06.946716789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 03:21:06.961155 containerd[1589]: time="2025-05-27T03:21:06.961102467Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:06.977880 containerd[1589]: time="2025-05-27T03:21:06.977833937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:06.978612 containerd[1589]: time="2025-05-27T03:21:06.978543248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 2.4772379s" May 27 03:21:06.978612 containerd[1589]: time="2025-05-27T03:21:06.978598852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 03:21:06.980118 containerd[1589]: time="2025-05-27T03:21:06.979543124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:21:06.987890 containerd[1589]: 
time="2025-05-27T03:21:06.987848852Z" level=info msg="CreateContainer within sandbox \"cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 03:21:07.112880 containerd[1589]: time="2025-05-27T03:21:07.112810900Z" level=info msg="Container 9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af: CDI devices from CRI Config.CDIDevices: []" May 27 03:21:07.201755 containerd[1589]: time="2025-05-27T03:21:07.201595194Z" level=info msg="CreateContainer within sandbox \"cdde2e95c046496b042f0d9e448b3d7c8b3ae98d4bf6edd2c64ed4a2c5e9af1b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af\"" May 27 03:21:07.202433 containerd[1589]: time="2025-05-27T03:21:07.202379285Z" level=info msg="StartContainer for \"9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af\"" May 27 03:21:07.203561 containerd[1589]: time="2025-05-27T03:21:07.203533872Z" level=info msg="connecting to shim 9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af" address="unix:///run/containerd/s/39eab40c7fafcb59dd97fbba222ed55652a445244da5051e49e9bd10f07d4b2b" protocol=ttrpc version=3 May 27 03:21:07.234235 systemd[1]: Started cri-containerd-9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af.scope - libcontainer container 9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af. 
May 27 03:21:07.301311 containerd[1589]: time="2025-05-27T03:21:07.301238556Z" level=info msg="StartContainer for \"9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af\" returns successfully" May 27 03:21:07.440072 kubelet[2715]: I0527 03:21:07.439565 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bccffd987-jvbth" podStartSLOduration=30.047651929 podStartE2EDuration="34.439542652s" podCreationTimestamp="2025-05-27 03:20:33 +0000 UTC" firstStartedPulling="2025-05-27 03:21:02.587412922 +0000 UTC m=+50.551488613" lastFinishedPulling="2025-05-27 03:21:06.979303635 +0000 UTC m=+54.943379336" observedRunningTime="2025-05-27 03:21:07.439539956 +0000 UTC m=+55.403615658" watchObservedRunningTime="2025-05-27 03:21:07.439542652 +0000 UTC m=+55.403618343" May 27 03:21:07.478524 containerd[1589]: time="2025-05-27T03:21:07.478116940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af\" id:\"36db2e1461022252af11aa781fa4935d575efa23ee50127e9eb46af94e1a40c4\" pid:4919 exited_at:{seconds:1748316067 nanos:477493120}" May 27 03:21:08.525729 containerd[1589]: time="2025-05-27T03:21:08.523828093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411\" id:\"4542168da1cc695a131d2cc21a1e1712663cd4bf645a649de0cc703cde46496d\" pid:4941 exited_at:{seconds:1748316068 nanos:517788851}" May 27 03:21:10.074008 containerd[1589]: time="2025-05-27T03:21:10.073933288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:10.074773 containerd[1589]: time="2025-05-27T03:21:10.074714604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 03:21:10.076431 containerd[1589]: 
time="2025-05-27T03:21:10.076394205Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:10.079186 containerd[1589]: time="2025-05-27T03:21:10.079132222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:10.079673 containerd[1589]: time="2025-05-27T03:21:10.079645125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.100056826s" May 27 03:21:10.079710 containerd[1589]: time="2025-05-27T03:21:10.079675322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:21:10.080673 containerd[1589]: time="2025-05-27T03:21:10.080507053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:21:10.081524 containerd[1589]: time="2025-05-27T03:21:10.081497411Z" level=info msg="CreateContainer within sandbox \"8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:21:10.091583 containerd[1589]: time="2025-05-27T03:21:10.091535997Z" level=info msg="Container 0e0fced9db95ebe76ff9e382e0dd688dfb691bad2bc224bd7a21683ede168731: CDI devices from CRI Config.CDIDevices: []" May 27 03:21:10.101542 containerd[1589]: time="2025-05-27T03:21:10.101381911Z" level=info msg="CreateContainer within sandbox 
\"8ab140f74dd4728b1c970735af3475e1972014529abd7871ca90daa9da8c00c2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0e0fced9db95ebe76ff9e382e0dd688dfb691bad2bc224bd7a21683ede168731\"" May 27 03:21:10.102206 containerd[1589]: time="2025-05-27T03:21:10.102160893Z" level=info msg="StartContainer for \"0e0fced9db95ebe76ff9e382e0dd688dfb691bad2bc224bd7a21683ede168731\"" May 27 03:21:10.103394 containerd[1589]: time="2025-05-27T03:21:10.103366516Z" level=info msg="connecting to shim 0e0fced9db95ebe76ff9e382e0dd688dfb691bad2bc224bd7a21683ede168731" address="unix:///run/containerd/s/a111096a7559c62f3fdd9a7fb14d0050ab1cf4a75f41a92671c11d07e9e6d0e5" protocol=ttrpc version=3 May 27 03:21:10.164352 systemd[1]: Started cri-containerd-0e0fced9db95ebe76ff9e382e0dd688dfb691bad2bc224bd7a21683ede168731.scope - libcontainer container 0e0fced9db95ebe76ff9e382e0dd688dfb691bad2bc224bd7a21683ede168731. May 27 03:21:10.240665 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:56362.service - OpenSSH per-connection server daemon (10.0.0.1:56362). 
May 27 03:21:10.297269 containerd[1589]: time="2025-05-27T03:21:10.297214111Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:10.397429 containerd[1589]: time="2025-05-27T03:21:10.397377337Z" level=info msg="StartContainer for \"0e0fced9db95ebe76ff9e382e0dd688dfb691bad2bc224bd7a21683ede168731\" returns successfully" May 27 03:21:10.408156 sshd[4994]: Accepted publickey for core from 10.0.0.1 port 56362 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:10.408802 containerd[1589]: time="2025-05-27T03:21:10.408740499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:10.409053 containerd[1589]: time="2025-05-27T03:21:10.408881284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:21:10.409397 kubelet[2715]: E0527 03:21:10.409188 2715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:21:10.409397 
kubelet[2715]: E0527 03:21:10.409346 2715 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:21:10.410319 containerd[1589]: time="2025-05-27T03:21:10.410284406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 03:21:10.411393 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:10.413666 kubelet[2715]: E0527 03:21:10.413486 2715 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sp2px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-947mk_calico-system(b4cfd04d-a56e-44a1-9a4a-e5cfed06478d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 
03:21:10.414749 kubelet[2715]: E0527 03:21:10.414686 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-947mk" podUID="b4cfd04d-a56e-44a1-9a4a-e5cfed06478d" May 27 03:21:10.416745 systemd-logind[1573]: New session 10 of user core. May 27 03:21:10.425297 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 03:21:10.427229 kubelet[2715]: E0527 03:21:10.427183 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-947mk" podUID="b4cfd04d-a56e-44a1-9a4a-e5cfed06478d" May 27 03:21:10.661257 sshd[5002]: Connection closed by 10.0.0.1 port 56362 May 27 03:21:10.661527 sshd-session[4994]: pam_unix(sshd:session): session closed for user core May 27 03:21:10.666816 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:56362.service: Deactivated successfully. May 27 03:21:10.668957 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:21:10.669785 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. May 27 03:21:10.671179 systemd-logind[1573]: Removed session 10. 
May 27 03:21:11.433419 kubelet[2715]: I0527 03:21:11.433371 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:21:13.172773 containerd[1589]: time="2025-05-27T03:21:13.172696862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:13.179169 containerd[1589]: time="2025-05-27T03:21:13.179133848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 03:21:13.184005 containerd[1589]: time="2025-05-27T03:21:13.183920419Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:13.201098 containerd[1589]: time="2025-05-27T03:21:13.201029770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:21:13.201680 containerd[1589]: time="2025-05-27T03:21:13.201639825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.791311787s" May 27 03:21:13.201680 containerd[1589]: time="2025-05-27T03:21:13.201669240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 03:21:13.239422 containerd[1589]: time="2025-05-27T03:21:13.239351921Z" level=info 
msg="CreateContainer within sandbox \"1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 03:21:13.296372 containerd[1589]: time="2025-05-27T03:21:13.296297413Z" level=info msg="Container cadeba43b8ee3c1ef190037c6acbbd024bd9a72a555658fcc6c8c6010a1c72b2: CDI devices from CRI Config.CDIDevices: []" May 27 03:21:13.309650 containerd[1589]: time="2025-05-27T03:21:13.309603507Z" level=info msg="CreateContainer within sandbox \"1304fe76a126becdbcbe5d8868ded50efa88fd7b40944061c0ce2346f63a8220\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cadeba43b8ee3c1ef190037c6acbbd024bd9a72a555658fcc6c8c6010a1c72b2\"" May 27 03:21:13.312575 containerd[1589]: time="2025-05-27T03:21:13.312517505Z" level=info msg="StartContainer for \"cadeba43b8ee3c1ef190037c6acbbd024bd9a72a555658fcc6c8c6010a1c72b2\"" May 27 03:21:13.314569 containerd[1589]: time="2025-05-27T03:21:13.314359891Z" level=info msg="connecting to shim cadeba43b8ee3c1ef190037c6acbbd024bd9a72a555658fcc6c8c6010a1c72b2" address="unix:///run/containerd/s/cd841850b8981a5b1bdfa4408f54450dab97f7139adfcc52f3717a474dbf3d1c" protocol=ttrpc version=3 May 27 03:21:13.338201 systemd[1]: Started cri-containerd-cadeba43b8ee3c1ef190037c6acbbd024bd9a72a555658fcc6c8c6010a1c72b2.scope - libcontainer container cadeba43b8ee3c1ef190037c6acbbd024bd9a72a555658fcc6c8c6010a1c72b2. 
May 27 03:21:13.468775 containerd[1589]: time="2025-05-27T03:21:13.468657798Z" level=info msg="StartContainer for \"cadeba43b8ee3c1ef190037c6acbbd024bd9a72a555658fcc6c8c6010a1c72b2\" returns successfully" May 27 03:21:14.122319 containerd[1589]: time="2025-05-27T03:21:14.122261122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f594547bd-v8kd9,Uid:e490ce4e-3d5d-487a-9163-dd7539885ded,Namespace:calico-apiserver,Attempt:0,}" May 27 03:21:14.123230 containerd[1589]: time="2025-05-27T03:21:14.123183713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:21:14.201896 kubelet[2715]: I0527 03:21:14.201843 2715 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 03:21:14.201896 kubelet[2715]: I0527 03:21:14.201888 2715 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 03:21:14.498764 kubelet[2715]: I0527 03:21:14.498272 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f594547bd-ntfdc" podStartSLOduration=37.066302535 podStartE2EDuration="44.498255729s" podCreationTimestamp="2025-05-27 03:20:30 +0000 UTC" firstStartedPulling="2025-05-27 03:21:02.64845869 +0000 UTC m=+50.612534391" lastFinishedPulling="2025-05-27 03:21:10.080411884 +0000 UTC m=+58.044487585" observedRunningTime="2025-05-27 03:21:10.653723054 +0000 UTC m=+58.617798765" watchObservedRunningTime="2025-05-27 03:21:14.498255729 +0000 UTC m=+62.462331430" May 27 03:21:14.644365 containerd[1589]: time="2025-05-27T03:21:14.644295612Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" 
host=ghcr.io May 27 03:21:14.683926 containerd[1589]: time="2025-05-27T03:21:14.683862556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:21:14.684056 containerd[1589]: time="2025-05-27T03:21:14.683901530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:14.684221 kubelet[2715]: E0527 03:21:14.684154 2715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:21:14.684320 kubelet[2715]: E0527 03:21:14.684215 2715 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:21:14.684420 kubelet[2715]: E0527 03:21:14.684362 2715 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:08b7d48894224f42b8d11344c5939374,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rq9q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cf64c5cf-xckzx_calico-system(cbe83952-5db0-4c61-a4e6-4da765b5f113): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:14.686320 containerd[1589]: 
time="2025-05-27T03:21:14.686287736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:21:14.800862 systemd-networkd[1501]: calid639bf60239: Link UP May 27 03:21:14.802468 systemd-networkd[1501]: calid639bf60239: Gained carrier May 27 03:21:14.820007 kubelet[2715]: I0527 03:21:14.818638 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k6cj4" podStartSLOduration=31.049586811 podStartE2EDuration="41.818619583s" podCreationTimestamp="2025-05-27 03:20:33 +0000 UTC" firstStartedPulling="2025-05-27 03:21:02.441377498 +0000 UTC m=+50.405453199" lastFinishedPulling="2025-05-27 03:21:13.21041027 +0000 UTC m=+61.174485971" observedRunningTime="2025-05-27 03:21:14.498882534 +0000 UTC m=+62.462958235" watchObservedRunningTime="2025-05-27 03:21:14.818619583 +0000 UTC m=+62.782695284" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.482 [INFO][5068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0 calico-apiserver-f594547bd- calico-apiserver e490ce4e-3d5d-487a-9163-dd7539885ded 834 0 2025-05-27 03:20:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f594547bd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f594547bd-v8kd9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid639bf60239 [] [] }} ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-v8kd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--v8kd9-" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.483 [INFO][5068] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-v8kd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.514 [INFO][5083] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" HandleID="k8s-pod-network.c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Workload="localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.514 [INFO][5083] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" HandleID="k8s-pod-network.c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Workload="localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f594547bd-v8kd9", "timestamp":"2025-05-27 03:21:14.514298487 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.514 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.514 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.514 [INFO][5083] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.620 [INFO][5083] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" host="localhost" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.625 [INFO][5083] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.630 [INFO][5083] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.631 [INFO][5083] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.634 [INFO][5083] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.634 [INFO][5083] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" host="localhost" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.635 [INFO][5083] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9 May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.685 [INFO][5083] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" host="localhost" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.794 [INFO][5083] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" host="localhost" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.794 [INFO][5083] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" host="localhost" May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.794 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:21:14.825929 containerd[1589]: 2025-05-27 03:21:14.794 [INFO][5083] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" HandleID="k8s-pod-network.c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Workload="localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0" May 27 03:21:14.826671 containerd[1589]: 2025-05-27 03:21:14.798 [INFO][5068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-v8kd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0", GenerateName:"calico-apiserver-f594547bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e490ce4e-3d5d-487a-9163-dd7539885ded", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f594547bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f594547bd-v8kd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid639bf60239", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:14.826671 containerd[1589]: 2025-05-27 03:21:14.798 [INFO][5068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-v8kd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0" May 27 03:21:14.826671 containerd[1589]: 2025-05-27 03:21:14.798 [INFO][5068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid639bf60239 ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-v8kd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0" May 27 03:21:14.826671 containerd[1589]: 2025-05-27 03:21:14.801 [INFO][5068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-v8kd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0" May 27 03:21:14.826671 containerd[1589]: 2025-05-27 03:21:14.803 [INFO][5068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-v8kd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0", GenerateName:"calico-apiserver-f594547bd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e490ce4e-3d5d-487a-9163-dd7539885ded", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f594547bd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9", Pod:"calico-apiserver-f594547bd-v8kd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid639bf60239", MAC:"ee:65:c7:7a:44:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:21:14.826671 containerd[1589]: 2025-05-27 03:21:14.820 [INFO][5068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" Namespace="calico-apiserver" Pod="calico-apiserver-f594547bd-v8kd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--f594547bd--v8kd9-eth0" May 27 03:21:14.933547 containerd[1589]: time="2025-05-27T03:21:14.933449662Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:14.997247 containerd[1589]: time="2025-05-27T03:21:14.997116121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:14.997247 containerd[1589]: time="2025-05-27T03:21:14.997251425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:21:14.997569 kubelet[2715]: E0527 03:21:14.997434 2715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:21:14.997569 kubelet[2715]: E0527 03:21:14.997494 2715 kuberuntime_image.go:55] 
"Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:21:14.997726 kubelet[2715]: E0527 03:21:14.997647 2715 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rq9q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:n
il,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cf64c5cf-xckzx_calico-system(cbe83952-5db0-4c61-a4e6-4da765b5f113): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:14.998873 kubelet[2715]: E0527 03:21:14.998809 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-76cf64c5cf-xckzx" podUID="cbe83952-5db0-4c61-a4e6-4da765b5f113" May 27 
03:21:15.173937 containerd[1589]: time="2025-05-27T03:21:15.173862758Z" level=info msg="connecting to shim c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9" address="unix:///run/containerd/s/d993f89b5aec3c5d118e4539311751d5270cedfca752cb65a57661071e980f87" namespace=k8s.io protocol=ttrpc version=3 May 27 03:21:15.208257 systemd[1]: Started cri-containerd-c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9.scope - libcontainer container c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9. May 27 03:21:15.224754 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:21:15.308612 containerd[1589]: time="2025-05-27T03:21:15.308552391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f594547bd-v8kd9,Uid:e490ce4e-3d5d-487a-9163-dd7539885ded,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9\"" May 27 03:21:15.311050 containerd[1589]: time="2025-05-27T03:21:15.311006014Z" level=info msg="CreateContainer within sandbox \"c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:21:15.528328 containerd[1589]: time="2025-05-27T03:21:15.528157545Z" level=info msg="Container 8bd02a92066b404b660c4752d0ddb69d70cd3b17ad98331c576baa11068789cd: CDI devices from CRI Config.CDIDevices: []" May 27 03:21:15.678724 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:34006.service - OpenSSH per-connection server daemon (10.0.0.1:34006). 
May 27 03:21:15.681001 containerd[1589]: time="2025-05-27T03:21:15.680923111Z" level=info msg="CreateContainer within sandbox \"c1dbb979718333d45ba5272fdc5dc3415d23016c7a48ac4c332cb001b49367b9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8bd02a92066b404b660c4752d0ddb69d70cd3b17ad98331c576baa11068789cd\"" May 27 03:21:15.681586 containerd[1589]: time="2025-05-27T03:21:15.681524910Z" level=info msg="StartContainer for \"8bd02a92066b404b660c4752d0ddb69d70cd3b17ad98331c576baa11068789cd\"" May 27 03:21:15.682861 containerd[1589]: time="2025-05-27T03:21:15.682824228Z" level=info msg="connecting to shim 8bd02a92066b404b660c4752d0ddb69d70cd3b17ad98331c576baa11068789cd" address="unix:///run/containerd/s/d993f89b5aec3c5d118e4539311751d5270cedfca752cb65a57661071e980f87" protocol=ttrpc version=3 May 27 03:21:15.710261 systemd[1]: Started cri-containerd-8bd02a92066b404b660c4752d0ddb69d70cd3b17ad98331c576baa11068789cd.scope - libcontainer container 8bd02a92066b404b660c4752d0ddb69d70cd3b17ad98331c576baa11068789cd. May 27 03:21:15.740031 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 34006 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:15.742580 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:15.751127 systemd-logind[1573]: New session 11 of user core. May 27 03:21:15.756220 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 03:21:15.793587 containerd[1589]: time="2025-05-27T03:21:15.793409448Z" level=info msg="StartContainer for \"8bd02a92066b404b660c4752d0ddb69d70cd3b17ad98331c576baa11068789cd\" returns successfully" May 27 03:21:15.931935 sshd[5173]: Connection closed by 10.0.0.1 port 34006 May 27 03:21:15.932850 sshd-session[5149]: pam_unix(sshd:session): session closed for user core May 27 03:21:15.946911 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:34006.service: Deactivated successfully. 
May 27 03:21:15.949897 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:21:15.951352 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. May 27 03:21:15.956173 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:34020.service - OpenSSH per-connection server daemon (10.0.0.1:34020). May 27 03:21:15.957192 systemd-logind[1573]: Removed session 11. May 27 03:21:16.013934 sshd[5202]: Accepted publickey for core from 10.0.0.1 port 34020 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:16.015320 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:16.020886 systemd-logind[1573]: New session 12 of user core. May 27 03:21:16.036603 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 03:21:16.223975 sshd[5204]: Connection closed by 10.0.0.1 port 34020 May 27 03:21:16.224549 sshd-session[5202]: pam_unix(sshd:session): session closed for user core May 27 03:21:16.235811 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:34020.service: Deactivated successfully. May 27 03:21:16.239466 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:21:16.240781 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. May 27 03:21:16.246539 systemd-logind[1573]: Removed session 12. May 27 03:21:16.248501 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:34022.service - OpenSSH per-connection server daemon (10.0.0.1:34022). May 27 03:21:16.305452 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 34022 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:16.307219 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:16.311923 systemd-logind[1573]: New session 13 of user core. May 27 03:21:16.321223 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 27 03:21:16.454568 sshd[5217]: Connection closed by 10.0.0.1 port 34022 May 27 03:21:16.454924 sshd-session[5215]: pam_unix(sshd:session): session closed for user core May 27 03:21:16.459458 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:34022.service: Deactivated successfully. May 27 03:21:16.461451 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:21:16.462382 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. May 27 03:21:16.463751 systemd-logind[1573]: Removed session 13. May 27 03:21:16.821327 systemd-networkd[1501]: calid639bf60239: Gained IPv6LL May 27 03:21:17.483637 kubelet[2715]: I0527 03:21:17.483601 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:21:17.901547 containerd[1589]: time="2025-05-27T03:21:17.901487896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af\" id:\"e22c357e552239ad9bf01ead11e71396af8824e19ef0ab15a7c9542f309e547b\" pid:5245 exited_at:{seconds:1748316077 nanos:900909469}" May 27 03:21:21.483170 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:34032.service - OpenSSH per-connection server daemon (10.0.0.1:34032). May 27 03:21:21.543368 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 34032 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:21.545085 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:21.549545 systemd-logind[1573]: New session 14 of user core. May 27 03:21:21.557171 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:21:21.711906 sshd[5267]: Connection closed by 10.0.0.1 port 34032 May 27 03:21:21.712364 sshd-session[5265]: pam_unix(sshd:session): session closed for user core May 27 03:21:21.717369 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:34032.service: Deactivated successfully. 
May 27 03:21:21.719292 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:21:21.720215 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. May 27 03:21:21.721565 systemd-logind[1573]: Removed session 14. May 27 03:21:24.122833 containerd[1589]: time="2025-05-27T03:21:24.122792824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:21:24.132953 kubelet[2715]: I0527 03:21:24.132888 2715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f594547bd-v8kd9" podStartSLOduration=54.132870227 podStartE2EDuration="54.132870227s" podCreationTimestamp="2025-05-27 03:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:21:16.495146983 +0000 UTC m=+64.459222704" watchObservedRunningTime="2025-05-27 03:21:24.132870227 +0000 UTC m=+72.096945928" May 27 03:21:24.342809 containerd[1589]: time="2025-05-27T03:21:24.342716097Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:24.344215 containerd[1589]: time="2025-05-27T03:21:24.344161954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:21:24.344286 containerd[1589]: time="2025-05-27T03:21:24.344172164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:24.344503 kubelet[2715]: E0527 03:21:24.344448 2715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:21:24.344578 kubelet[2715]: E0527 03:21:24.344508 2715 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:21:24.344753 kubelet[2715]: E0527 03:21:24.344658 2715 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sp2px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-947mk_calico-system(b4cfd04d-a56e-44a1-9a4a-e5cfed06478d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:24.345863 kubelet[2715]: E0527 03:21:24.345827 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-947mk" podUID="b4cfd04d-a56e-44a1-9a4a-e5cfed06478d" May 27 03:21:26.181305 kubelet[2715]: I0527 
03:21:26.181247 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:21:26.729249 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:32912.service - OpenSSH per-connection server daemon (10.0.0.1:32912). May 27 03:21:26.783101 sshd[5291]: Accepted publickey for core from 10.0.0.1 port 32912 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:26.784954 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:26.789898 systemd-logind[1573]: New session 15 of user core. May 27 03:21:26.805166 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:21:26.938810 sshd[5293]: Connection closed by 10.0.0.1 port 32912 May 27 03:21:26.939146 sshd-session[5291]: pam_unix(sshd:session): session closed for user core May 27 03:21:26.943148 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:32912.service: Deactivated successfully. May 27 03:21:26.945074 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:21:26.945887 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. May 27 03:21:26.947246 systemd-logind[1573]: Removed session 15. May 27 03:21:28.124554 kubelet[2715]: E0527 03:21:28.124395 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cf64c5cf-xckzx" podUID="cbe83952-5db0-4c61-a4e6-4da765b5f113" May 27 03:21:31.957486 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:32926.service - OpenSSH per-connection server daemon (10.0.0.1:32926). 
May 27 03:21:32.032672 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 32926 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:32.034822 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:32.039913 systemd-logind[1573]: New session 16 of user core. May 27 03:21:32.050159 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 03:21:32.186629 sshd[5308]: Connection closed by 10.0.0.1 port 32926 May 27 03:21:32.187739 sshd-session[5306]: pam_unix(sshd:session): session closed for user core May 27 03:21:32.193036 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:32926.service: Deactivated successfully. May 27 03:21:32.195555 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:21:32.196893 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit. May 27 03:21:32.198758 systemd-logind[1573]: Removed session 16. May 27 03:21:33.319118 containerd[1589]: time="2025-05-27T03:21:33.318964216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af\" id:\"abbc28cc236418aebbccf7220c3ba0f992a09abcadfe86b43830877b0b8c2f53\" pid:5334 exited_at:{seconds:1748316093 nanos:318626210}" May 27 03:21:36.123310 kubelet[2715]: E0527 03:21:36.123245 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-947mk" podUID="b4cfd04d-a56e-44a1-9a4a-e5cfed06478d" May 27 03:21:37.209032 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:55686.service - OpenSSH per-connection server daemon (10.0.0.1:55686). 
May 27 03:21:37.272644 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 55686 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:37.274366 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:37.279927 systemd-logind[1573]: New session 17 of user core. May 27 03:21:37.292149 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 03:21:37.432515 sshd[5350]: Connection closed by 10.0.0.1 port 55686 May 27 03:21:37.432922 sshd-session[5348]: pam_unix(sshd:session): session closed for user core May 27 03:21:37.441918 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:55686.service: Deactivated successfully. May 27 03:21:37.444036 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:21:37.444861 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. May 27 03:21:37.447914 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:55700.service - OpenSSH per-connection server daemon (10.0.0.1:55700). May 27 03:21:37.448536 systemd-logind[1573]: Removed session 17. May 27 03:21:37.499420 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 55700 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:37.501138 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:37.505800 systemd-logind[1573]: New session 18 of user core. May 27 03:21:37.515126 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:21:38.100252 sshd[5365]: Connection closed by 10.0.0.1 port 55700 May 27 03:21:38.100775 sshd-session[5363]: pam_unix(sshd:session): session closed for user core May 27 03:21:38.112141 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:55700.service: Deactivated successfully. May 27 03:21:38.114888 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:21:38.115871 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. 
May 27 03:21:38.120229 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:55708.service - OpenSSH per-connection server daemon (10.0.0.1:55708). May 27 03:21:38.121085 systemd-logind[1573]: Removed session 18. May 27 03:21:38.190782 sshd[5376]: Accepted publickey for core from 10.0.0.1 port 55708 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:38.192741 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:38.198490 systemd-logind[1573]: New session 19 of user core. May 27 03:21:38.212165 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 03:21:38.516388 containerd[1589]: time="2025-05-27T03:21:38.516237518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5c9ab4c0711bc17786df033f6a868a01d4a8e0ac083f8f5649d883b4f437411\" id:\"7d45346ccbe16d78e9ddde8cc3677bb10fcf87d658aee65a78c545fde392787e\" pid:5397 exited_at:{seconds:1748316098 nanos:515852635}" May 27 03:21:40.377447 sshd[5378]: Connection closed by 10.0.0.1 port 55708 May 27 03:21:40.377835 sshd-session[5376]: pam_unix(sshd:session): session closed for user core May 27 03:21:40.399886 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:55708.service: Deactivated successfully. May 27 03:21:40.401171 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit. May 27 03:21:40.404145 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:21:40.404482 systemd[1]: session-19.scope: Consumed 708ms CPU time, 75.1M memory peak. May 27 03:21:40.409431 systemd-logind[1573]: Removed session 19. May 27 03:21:40.418115 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:55712.service - OpenSSH per-connection server daemon (10.0.0.1:55712). 
May 27 03:21:40.489181 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 55712 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:40.491526 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:40.506239 systemd-logind[1573]: New session 20 of user core. May 27 03:21:40.511192 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 03:21:40.802417 sshd[5427]: Connection closed by 10.0.0.1 port 55712 May 27 03:21:40.804253 sshd-session[5425]: pam_unix(sshd:session): session closed for user core May 27 03:21:40.814130 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:55712.service: Deactivated successfully. May 27 03:21:40.818074 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:21:40.819345 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. May 27 03:21:40.824750 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:55722.service - OpenSSH per-connection server daemon (10.0.0.1:55722). May 27 03:21:40.826149 systemd-logind[1573]: Removed session 20. May 27 03:21:40.879718 sshd[5439]: Accepted publickey for core from 10.0.0.1 port 55722 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:21:40.882141 sshd-session[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:21:40.889782 systemd-logind[1573]: New session 21 of user core. May 27 03:21:40.896374 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 03:21:41.008186 kubelet[2715]: I0527 03:21:41.008130 2715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:21:41.036187 sshd[5441]: Connection closed by 10.0.0.1 port 55722 May 27 03:21:41.036652 sshd-session[5439]: pam_unix(sshd:session): session closed for user core May 27 03:21:41.042277 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:55722.service: Deactivated successfully. 
May 27 03:21:41.047249 systemd[1]: session-21.scope: Deactivated successfully. May 27 03:21:41.050241 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit. May 27 03:21:41.051654 systemd-logind[1573]: Removed session 21. May 27 03:21:42.124686 containerd[1589]: time="2025-05-27T03:21:42.124637755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:21:42.408095 containerd[1589]: time="2025-05-27T03:21:42.407895185Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:42.410123 containerd[1589]: time="2025-05-27T03:21:42.410052571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:42.410792 containerd[1589]: time="2025-05-27T03:21:42.410157651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:21:42.410843 kubelet[2715]: E0527 03:21:42.410504 2715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" 
May 27 03:21:42.410843 kubelet[2715]: E0527 03:21:42.410573 2715 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:21:42.410843 kubelet[2715]: E0527 03:21:42.410709 2715 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:08b7d48894224f42b8d11344c5939374,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rq9q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:
[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cf64c5cf-xckzx_calico-system(cbe83952-5db0-4c61-a4e6-4da765b5f113): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:21:42.413891 containerd[1589]: time="2025-05-27T03:21:42.413819051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:21:42.655203 containerd[1589]: time="2025-05-27T03:21:42.655123863Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:21:42.657920 containerd[1589]: time="2025-05-27T03:21:42.657830234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:21:42.658224 containerd[1589]: time="2025-05-27T03:21:42.658129564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:21:42.658611 kubelet[2715]: E0527 03:21:42.658301 2715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:21:42.658611 kubelet[2715]: E0527 03:21:42.658370 2715 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:21:42.658611 kubelet[2715]: E0527 03:21:42.658511 2715 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rq9q8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76cf64c5cf-xckzx_calico-system(cbe83952-5db0-4c61-a4e6-4da765b5f113): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:21:42.659993 kubelet[2715]: E0527 03:21:42.659923 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-76cf64c5cf-xckzx" podUID="cbe83952-5db0-4c61-a4e6-4da765b5f113"
May 27 03:21:46.050330 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:52126.service - OpenSSH per-connection server daemon (10.0.0.1:52126).
May 27 03:21:46.109027 sshd[5462]: Accepted publickey for core from 10.0.0.1 port 52126 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:21:46.111336 sshd-session[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:21:46.117933 systemd-logind[1573]: New session 22 of user core.
May 27 03:21:46.125207 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 03:21:46.252932 sshd[5464]: Connection closed by 10.0.0.1 port 52126
May 27 03:21:46.253362 sshd-session[5462]: pam_unix(sshd:session): session closed for user core
May 27 03:21:46.259150 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:52126.service: Deactivated successfully.
May 27 03:21:46.261801 systemd[1]: session-22.scope: Deactivated successfully.
May 27 03:21:46.262889 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit.
May 27 03:21:46.265232 systemd-logind[1573]: Removed session 22.
May 27 03:21:47.941561 containerd[1589]: time="2025-05-27T03:21:47.941497141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9975a58dbdee3b015ec4a05c1e859f132c937748889be13b0ff04aa862cd43af\" id:\"5c5f1b998f2825103ab6a6802fcf3f7cd993156dab9d3b9c2807d134d9d38d39\" pid:5491 exited_at:{seconds:1748316107 nanos:941108532}"
May 27 03:21:48.127422 containerd[1589]: time="2025-05-27T03:21:48.127377573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:21:48.360000 containerd[1589]: time="2025-05-27T03:21:48.359928181Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:21:48.364423 containerd[1589]: time="2025-05-27T03:21:48.364385479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:21:48.364516 containerd[1589]: time="2025-05-27T03:21:48.364432628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:21:48.364679 kubelet[2715]: E0527 03:21:48.364613 2715 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:21:48.365081 kubelet[2715]: E0527 03:21:48.364684 2715 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:21:48.365081 kubelet[2715]: E0527 03:21:48.364823 2715 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sp2px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-947mk_calico-system(b4cfd04d-a56e-44a1-9a4a-e5cfed06478d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:21:48.366087 kubelet[2715]: E0527 03:21:48.366038 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-947mk" podUID="b4cfd04d-a56e-44a1-9a4a-e5cfed06478d"
May 27 03:21:51.274115 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:52134.service - OpenSSH per-connection server daemon (10.0.0.1:52134).
May 27 03:21:51.322414 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 52134 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:21:51.324423 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:21:51.330059 systemd-logind[1573]: New session 23 of user core.
May 27 03:21:51.341284 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 03:21:51.484799 sshd[5509]: Connection closed by 10.0.0.1 port 52134
May 27 03:21:51.485291 sshd-session[5507]: pam_unix(sshd:session): session closed for user core
May 27 03:21:51.493423 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:52134.service: Deactivated successfully.
May 27 03:21:51.495999 systemd[1]: session-23.scope: Deactivated successfully.
May 27 03:21:51.499661 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit.
May 27 03:21:51.501550 systemd-logind[1573]: Removed session 23.
May 27 03:21:56.497239 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:46414.service - OpenSSH per-connection server daemon (10.0.0.1:46414).
May 27 03:21:56.557466 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 46414 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:21:56.559186 sshd-session[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:21:56.563568 systemd-logind[1573]: New session 24 of user core.
May 27 03:21:56.569110 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 03:21:56.687490 sshd[5527]: Connection closed by 10.0.0.1 port 46414
May 27 03:21:56.687783 sshd-session[5525]: pam_unix(sshd:session): session closed for user core
May 27 03:21:56.693283 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:46414.service: Deactivated successfully.
May 27 03:21:56.695323 systemd[1]: session-24.scope: Deactivated successfully.
May 27 03:21:56.696097 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit.
May 27 03:21:56.697355 systemd-logind[1573]: Removed session 24.
May 27 03:21:58.123827 kubelet[2715]: E0527 03:21:58.123753 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-76cf64c5cf-xckzx" podUID="cbe83952-5db0-4c61-a4e6-4da765b5f113"
May 27 03:22:01.706603 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:46420.service - OpenSSH per-connection server daemon (10.0.0.1:46420).
May 27 03:22:01.762440 sshd[5540]: Accepted publickey for core from 10.0.0.1 port 46420 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:22:01.767461 sshd-session[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:01.773709 systemd-logind[1573]: New session 25 of user core.
May 27 03:22:01.782242 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 03:22:01.940386 sshd[5542]: Connection closed by 10.0.0.1 port 46420
May 27 03:22:01.940788 sshd-session[5540]: pam_unix(sshd:session): session closed for user core
May 27 03:22:01.945285 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:46420.service: Deactivated successfully.
May 27 03:22:01.947309 systemd[1]: session-25.scope: Deactivated successfully.
May 27 03:22:01.948177 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit.
May 27 03:22:01.949436 systemd-logind[1573]: Removed session 25.
May 27 03:22:02.125038 kubelet[2715]: E0527 03:22:02.124508 2715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-947mk" podUID="b4cfd04d-a56e-44a1-9a4a-e5cfed06478d"