May 27 03:22:39.879283 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:22:39.879314 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:22:39.879329 kernel: BIOS-provided physical RAM map:
May 27 03:22:39.879338 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 03:22:39.879346 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 27 03:22:39.879355 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 27 03:22:39.879365 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 27 03:22:39.879374 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 27 03:22:39.879385 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 27 03:22:39.879394 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 27 03:22:39.879403 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 27 03:22:39.879411 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 27 03:22:39.879419 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 27 03:22:39.879428 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 27 03:22:39.879442 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 27 03:22:39.879451 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 27 03:22:39.879460 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 27 03:22:39.879470 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 27 03:22:39.879479 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 27 03:22:39.879488 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 27 03:22:39.879497 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 27 03:22:39.879506 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 27 03:22:39.879559 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 03:22:39.879566 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:22:39.879573 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 27 03:22:39.879583 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:22:39.879590 kernel: NX (Execute Disable) protection: active
May 27 03:22:39.879597 kernel: APIC: Static calls initialized
May 27 03:22:39.879604 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 27 03:22:39.879611 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 27 03:22:39.879618 kernel: extended physical RAM map:
May 27 03:22:39.879625 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 03:22:39.879633 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 27 03:22:39.879640 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 27 03:22:39.879647 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 27 03:22:39.879655 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 27 03:22:39.879671 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 27 03:22:39.879678 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 27 03:22:39.879685 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 27 03:22:39.879692 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 27 03:22:39.879704 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 27 03:22:39.879711 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 27 03:22:39.879720 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 27 03:22:39.879728 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 27 03:22:39.879735 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 27 03:22:39.879743 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 27 03:22:39.879750 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 27 03:22:39.879758 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 27 03:22:39.879765 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 27 03:22:39.879772 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 27 03:22:39.879780 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 27 03:22:39.879790 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 27 03:22:39.879797 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 27 03:22:39.879804 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 27 03:22:39.879812 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 03:22:39.879819 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:22:39.879826 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 27 03:22:39.879833 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:22:39.879840 kernel: efi: EFI v2.7 by EDK II
May 27 03:22:39.879848 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 27 03:22:39.879855 kernel: random: crng init done
May 27 03:22:39.879863 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 27 03:22:39.879870 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 27 03:22:39.879879 kernel: secureboot: Secure boot disabled
May 27 03:22:39.879887 kernel: SMBIOS 2.8 present.
May 27 03:22:39.879894 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 27 03:22:39.879901 kernel: DMI: Memory slots populated: 1/1
May 27 03:22:39.879908 kernel: Hypervisor detected: KVM
May 27 03:22:39.879916 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:22:39.879923 kernel: kvm-clock: using sched offset of 3866699636 cycles
May 27 03:22:39.879931 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:22:39.879938 kernel: tsc: Detected 2794.748 MHz processor
May 27 03:22:39.879946 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:22:39.879954 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:22:39.879963 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 27 03:22:39.879971 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 03:22:39.879978 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:22:39.879985 kernel: Using GB pages for direct mapping
May 27 03:22:39.879993 kernel: ACPI: Early table checksum verification disabled
May 27 03:22:39.880000 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 27 03:22:39.880008 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 27 03:22:39.880015 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:22:39.880023 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:22:39.880032 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 27 03:22:39.880040 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:22:39.880047 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:22:39.880055 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:22:39.880062 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:22:39.880069 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 27 03:22:39.880077 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 27 03:22:39.880084 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
May 27 03:22:39.880093 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 27 03:22:39.880101 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 27 03:22:39.880108 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 27 03:22:39.880116 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 27 03:22:39.880123 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 27 03:22:39.880130 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 27 03:22:39.880137 kernel: No NUMA configuration found
May 27 03:22:39.880145 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 27 03:22:39.880152 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 27 03:22:39.880160 kernel: Zone ranges:
May 27 03:22:39.880169 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:22:39.880177 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 27 03:22:39.880184 kernel: Normal empty
May 27 03:22:39.880191 kernel: Device empty
May 27 03:22:39.880198 kernel: Movable zone start for each node
May 27 03:22:39.880206 kernel: Early memory node ranges
May 27 03:22:39.880213 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 27 03:22:39.880220 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 27 03:22:39.880228 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 27 03:22:39.880237 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 27 03:22:39.880244 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 27 03:22:39.880252 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 27 03:22:39.880259 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 27 03:22:39.880266 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 27 03:22:39.880275 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 27 03:22:39.880286 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:22:39.880296 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 27 03:22:39.880315 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 27 03:22:39.880323 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:22:39.880330 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 27 03:22:39.880338 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 27 03:22:39.880348 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 27 03:22:39.880356 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 27 03:22:39.880363 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 27 03:22:39.880371 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 03:22:39.880379 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:22:39.880388 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:22:39.880396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 03:22:39.880404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:22:39.880412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:22:39.880419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:22:39.880427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:22:39.880435 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:22:39.880443 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:22:39.880450 kernel: TSC deadline timer available
May 27 03:22:39.880458 kernel: CPU topo: Max. logical packages: 1
May 27 03:22:39.880467 kernel: CPU topo: Max. logical dies: 1
May 27 03:22:39.880475 kernel: CPU topo: Max. dies per package: 1
May 27 03:22:39.880483 kernel: CPU topo: Max. threads per core: 1
May 27 03:22:39.880490 kernel: CPU topo: Num. cores per package: 4
May 27 03:22:39.880498 kernel: CPU topo: Num. threads per package: 4
May 27 03:22:39.880505 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 27 03:22:39.880580 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:22:39.880588 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 03:22:39.880595 kernel: kvm-guest: setup PV sched yield
May 27 03:22:39.880606 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 27 03:22:39.880613 kernel: Booting paravirtualized kernel on KVM
May 27 03:22:39.880621 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:22:39.880629 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 27 03:22:39.880637 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 27 03:22:39.880645 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 27 03:22:39.880653 kernel: pcpu-alloc: [0] 0 1 2 3
May 27 03:22:39.880668 kernel: kvm-guest: PV spinlocks enabled
May 27 03:22:39.880676 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:22:39.880687 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:22:39.880696 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:22:39.880704 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:22:39.880712 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:22:39.880720 kernel: Fallback order for Node 0: 0
May 27 03:22:39.880728 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 27 03:22:39.880735 kernel: Policy zone: DMA32
May 27 03:22:39.880743 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:22:39.880753 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 03:22:39.880761 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:22:39.880768 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:22:39.880776 kernel: Dynamic Preempt: voluntary
May 27 03:22:39.880784 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:22:39.880792 kernel: rcu: RCU event tracing is enabled.
May 27 03:22:39.880800 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 03:22:39.880808 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:22:39.880816 kernel: Rude variant of Tasks RCU enabled.
May 27 03:22:39.880826 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:22:39.880834 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:22:39.880841 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 03:22:39.880849 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:22:39.880857 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:22:39.880865 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:22:39.880873 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 27 03:22:39.880881 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:22:39.880888 kernel: Console: colour dummy device 80x25
May 27 03:22:39.880898 kernel: printk: legacy console [ttyS0] enabled
May 27 03:22:39.880906 kernel: ACPI: Core revision 20240827
May 27 03:22:39.880914 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 03:22:39.880922 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:22:39.880929 kernel: x2apic enabled
May 27 03:22:39.880937 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:22:39.880945 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 03:22:39.880953 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 03:22:39.880960 kernel: kvm-guest: setup PV IPIs
May 27 03:22:39.880970 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 03:22:39.880978 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:22:39.880986 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 27 03:22:39.880994 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:22:39.881002 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 03:22:39.881009 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 03:22:39.881017 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:22:39.881025 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:22:39.881033 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:22:39.881043 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 03:22:39.881050 kernel: RETBleed: Mitigation: untrained return thunk
May 27 03:22:39.881058 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:22:39.881066 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:22:39.881074 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 03:22:39.881083 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 03:22:39.881090 kernel: x86/bugs: return thunk changed
May 27 03:22:39.881098 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 03:22:39.881108 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:22:39.881116 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:22:39.881123 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:22:39.881131 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:22:39.881139 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 03:22:39.881146 kernel: Freeing SMP alternatives memory: 32K
May 27 03:22:39.881154 kernel: pid_max: default: 32768 minimum: 301
May 27 03:22:39.881162 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:22:39.881169 kernel: landlock: Up and running.
May 27 03:22:39.881179 kernel: SELinux: Initializing.
May 27 03:22:39.881187 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:22:39.881200 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:22:39.881208 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 03:22:39.881216 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 03:22:39.881223 kernel: ... version: 0
May 27 03:22:39.881231 kernel: ... bit width: 48
May 27 03:22:39.881239 kernel: ... generic registers: 6
May 27 03:22:39.881246 kernel: ... value mask: 0000ffffffffffff
May 27 03:22:39.881256 kernel: ... max period: 00007fffffffffff
May 27 03:22:39.881264 kernel: ... fixed-purpose events: 0
May 27 03:22:39.881273 kernel: ... event mask: 000000000000003f
May 27 03:22:39.881283 kernel: signal: max sigframe size: 1776
May 27 03:22:39.881293 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:22:39.881304 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:22:39.881314 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:22:39.881322 kernel: smp: Bringing up secondary CPUs ...
May 27 03:22:39.881330 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:22:39.881338 kernel: .... node #0, CPUs: #1 #2 #3
May 27 03:22:39.881348 kernel: smp: Brought up 1 node, 4 CPUs
May 27 03:22:39.881356 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 27 03:22:39.881364 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 137196K reserved, 0K cma-reserved)
May 27 03:22:39.881372 kernel: devtmpfs: initialized
May 27 03:22:39.881379 kernel: x86/mm: Memory block size: 128MB
May 27 03:22:39.881387 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 27 03:22:39.881395 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 27 03:22:39.881403 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 27 03:22:39.881413 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 27 03:22:39.881420 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 27 03:22:39.881428 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 27 03:22:39.881436 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:22:39.881444 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 03:22:39.881452 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:22:39.881460 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:22:39.881467 kernel: audit: initializing netlink subsys (disabled)
May 27 03:22:39.881486 kernel: audit: type=2000 audit(1748316157.245:1): state=initialized audit_enabled=0 res=1
May 27 03:22:39.881496 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:22:39.881504 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:22:39.881530 kernel: cpuidle: using governor menu
May 27 03:22:39.881539 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:22:39.881554 kernel: dca service started, version 1.12.1
May 27 03:22:39.881563 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 27 03:22:39.881578 kernel: PCI: Using configuration type 1 for base access
May 27 03:22:39.881586 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:22:39.881602 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:22:39.881620 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:22:39.881635 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:22:39.881644 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:22:39.881651 kernel: ACPI: Added _OSI(Module Device)
May 27 03:22:39.881665 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:22:39.881674 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:22:39.881681 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:22:39.881689 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:22:39.881697 kernel: ACPI: Interpreter enabled
May 27 03:22:39.881707 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 03:22:39.881723 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:22:39.881731 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:22:39.881753 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:22:39.881768 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 03:22:39.881777 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:22:39.881960 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:22:39.882086 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 03:22:39.882221 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 03:22:39.882233 kernel: PCI host bridge to bus 0000:00
May 27 03:22:39.882366 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:22:39.882481 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:22:39.882619 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:22:39.882739 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 27 03:22:39.882844 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 27 03:22:39.882952 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 27 03:22:39.883075 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:22:39.883227 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 03:22:39.883416 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 03:22:39.883573 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 27 03:22:39.883701 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 27 03:22:39.883822 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 27 03:22:39.883942 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:22:39.884067 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 03:22:39.884183 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 27 03:22:39.884305 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 27 03:22:39.884424 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 27 03:22:39.884573 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 03:22:39.884708 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 27 03:22:39.884825 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 27 03:22:39.884962 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 27 03:22:39.885157 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 03:22:39.885343 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 27 03:22:39.885467 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 27 03:22:39.885603 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 27 03:22:39.885745 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 27 03:22:39.885872 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 03:22:39.885999 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 03:22:39.886149 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 03:22:39.886286 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 27 03:22:39.886428 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 27 03:22:39.886585 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 03:22:39.886717 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 27 03:22:39.886728 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:22:39.886736 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:22:39.886745 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:22:39.886753 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:22:39.886761 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 03:22:39.886768 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 03:22:39.886776 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 03:22:39.886788 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 03:22:39.886796 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 03:22:39.886804 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 03:22:39.886813 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 03:22:39.886828 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 03:22:39.886839 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 03:22:39.886848 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 03:22:39.886859 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 03:22:39.886868 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 03:22:39.886883 kernel: iommu: Default domain type: Translated
May 27 03:22:39.886893 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:22:39.886904 kernel: efivars: Registered efivars operations
May 27 03:22:39.886912 kernel: PCI: Using ACPI for IRQ routing
May 27 03:22:39.886920 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:22:39.886928 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 27 03:22:39.886936 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 27 03:22:39.886955 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 27 03:22:39.886967 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 27 03:22:39.886980 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 27 03:22:39.886988 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 27 03:22:39.886995 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 27 03:22:39.887004 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 27 03:22:39.887141 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 03:22:39.887257 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 03:22:39.887395 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:22:39.887413 kernel: vgaarb: loaded
May 27 03:22:39.887422 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 03:22:39.887430 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 03:22:39.887438 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:22:39.887446 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:22:39.887454 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:22:39.887462 kernel: pnp: PnP ACPI init
May 27 03:22:39.887642 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 27 03:22:39.887668 kernel: pnp: PnP ACPI: found 6 devices
May 27 03:22:39.887679 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:22:39.887687 kernel: NET: Registered PF_INET protocol family
May 27 03:22:39.887696 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:22:39.887704 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 03:22:39.887714 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:22:39.887722 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:22:39.887730 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 03:22:39.887738 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 03:22:39.887748 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:22:39.887756 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:22:39.887765 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:22:39.887773 kernel: NET: Registered PF_XDP protocol family
May 27 03:22:39.887895 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 27 03:22:39.888012 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 27 03:22:39.888119 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:22:39.888252 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:22:39.888373 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:22:39.888480 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 27 03:22:39.888609 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 27 03:22:39.888743 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 27 03:22:39.888759 kernel: PCI: CLS 0 bytes, default 64
May 27 03:22:39.888771 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:22:39.888782 kernel: Initialise system trusted keyrings
May 27 03:22:39.888790 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 03:22:39.888799 kernel: Key type asymmetric registered
May 27 03:22:39.888811 kernel: Asymmetric key parser 'x509' registered
May 27 03:22:39.888819 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:22:39.888827 kernel: io scheduler mq-deadline registered
May 27 03:22:39.888835 kernel: io scheduler kyber registered
May 27 03:22:39.888844 kernel: io scheduler bfq registered
May 27 03:22:39.888852 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:22:39.888863 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 03:22:39.888871 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 03:22:39.888880 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 03:22:39.888888 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:22:39.888896 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:22:39.888904 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:22:39.888915 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:22:39.888923 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:22:39.889048 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 03:22:39.889064 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 03:22:39.889176 kernel: rtc_cmos 00:04: registered as rtc0
May 27 03:22:39.889289 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T03:22:39 UTC (1748316159)
May 27 03:22:39.889405 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 27 03:22:39.889416 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 03:22:39.889424 kernel: efifb: probing for efifb
May 27 03:22:39.889435 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 27 03:22:39.889450 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 27 03:22:39.889461 kernel: efifb: scrolling: redraw
May 27 03:22:39.889470 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 03:22:39.889478 kernel: Console: switching to colour frame buffer device 160x50
May 27 03:22:39.889486 kernel: fb0: EFI VGA frame buffer device
May 27 03:22:39.889495 kernel: pstore: Using crash dump compression: deflate
May 27 03:22:39.889503 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 03:22:39.889511 kernel: NET: Registered PF_INET6 protocol family
May 27 03:22:39.889538 kernel: Segment Routing with IPv6
May 27 03:22:39.889547 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:22:39.889558 kernel: NET: Registered PF_PACKET protocol family
May 27 03:22:39.889566 kernel: Key type dns_resolver registered
May 27 03:22:39.889574 kernel: IPI shorthand broadcast: enabled
May 27 03:22:39.889582 kernel: sched_clock: Marking stable (2975003618, 171041652)->(3167839450, -21794180)
May 27 03:22:39.889590 kernel: registered taskstats version 1
May 27 03:22:39.889598 kernel: Loading compiled-in X.509 certificates
May 27 03:22:39.889607 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:22:39.889615 kernel: Demotion targets for Node 0: null
May 27 03:22:39.889623 kernel: Key 
type .fscrypt registered May 27 03:22:39.889633 kernel: Key type fscrypt-provisioning registered May 27 03:22:39.889641 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 03:22:39.889650 kernel: ima: Allocated hash algorithm: sha1 May 27 03:22:39.889665 kernel: ima: No architecture policies found May 27 03:22:39.889674 kernel: clk: Disabling unused clocks May 27 03:22:39.889682 kernel: Warning: unable to open an initial console. May 27 03:22:39.889690 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 03:22:39.889699 kernel: Write protecting the kernel read-only data: 24576k May 27 03:22:39.889709 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 03:22:39.889717 kernel: Run /init as init process May 27 03:22:39.889725 kernel: with arguments: May 27 03:22:39.889733 kernel: /init May 27 03:22:39.889741 kernel: with environment: May 27 03:22:39.889749 kernel: HOME=/ May 27 03:22:39.889757 kernel: TERM=linux May 27 03:22:39.889765 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 03:22:39.889775 systemd[1]: Successfully made /usr/ read-only. May 27 03:22:39.889788 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:22:39.889798 systemd[1]: Detected virtualization kvm. May 27 03:22:39.889806 systemd[1]: Detected architecture x86-64. May 27 03:22:39.889815 systemd[1]: Running in initrd. May 27 03:22:39.889823 systemd[1]: No hostname configured, using default hostname. May 27 03:22:39.889832 systemd[1]: Hostname set to . May 27 03:22:39.889840 systemd[1]: Initializing machine ID from VM UUID. May 27 03:22:39.889851 systemd[1]: Queued start job for default target initrd.target. 
May 27 03:22:39.889860 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:22:39.889869 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:22:39.889878 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 03:22:39.889887 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:22:39.889896 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 03:22:39.889905 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 03:22:39.889917 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 03:22:39.889926 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 03:22:39.889935 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:22:39.889944 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:22:39.889952 systemd[1]: Reached target paths.target - Path Units. May 27 03:22:39.889961 systemd[1]: Reached target slices.target - Slice Units. May 27 03:22:39.889969 systemd[1]: Reached target swap.target - Swaps. May 27 03:22:39.889978 systemd[1]: Reached target timers.target - Timer Units. May 27 03:22:39.889987 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:22:39.889997 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:22:39.890006 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 03:22:39.890015 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 27 03:22:39.890023 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:22:39.890032 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:22:39.890040 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:22:39.890049 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:22:39.890058 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 03:22:39.890068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:22:39.890077 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 03:22:39.890086 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 03:22:39.890095 systemd[1]: Starting systemd-fsck-usr.service... May 27 03:22:39.890104 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:22:39.890112 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:22:39.890121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:22:39.890130 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 03:22:39.890141 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:22:39.890150 systemd[1]: Finished systemd-fsck-usr.service. May 27 03:22:39.890158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:22:39.890189 systemd-journald[220]: Collecting audit messages is disabled. May 27 03:22:39.890213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 03:22:39.890222 systemd-journald[220]: Journal started May 27 03:22:39.890241 systemd-journald[220]: Runtime Journal (/run/log/journal/97155c1c646c4404802099464a5130a5) is 6M, max 48.5M, 42.4M free. May 27 03:22:39.883574 systemd-modules-load[223]: Inserted module 'overlay' May 27 03:22:39.892550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 03:22:39.895573 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:22:39.896085 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:22:39.902207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:22:39.904635 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:22:39.914067 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 03:22:39.916923 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:22:39.921314 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 03:22:39.920273 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:22:39.924462 kernel: Bridge firewalling registered May 27 03:22:39.924506 systemd-modules-load[223]: Inserted module 'br_netfilter' May 27 03:22:39.925047 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 03:22:39.926676 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:22:39.929910 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:22:39.936691 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 27 03:22:39.948554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:22:39.950418 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:22:39.955200 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:22:39.998762 systemd-resolved[271]: Positive Trust Anchors: May 27 03:22:39.998778 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:22:39.998808 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:22:40.001293 systemd-resolved[271]: Defaulting to hostname 'linux'. May 27 03:22:40.002450 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:22:40.009937 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:22:40.071570 kernel: SCSI subsystem initialized May 27 03:22:40.081546 kernel: Loading iSCSI transport class v2.0-870. 
May 27 03:22:40.092562 kernel: iscsi: registered transport (tcp) May 27 03:22:40.115852 kernel: iscsi: registered transport (qla4xxx) May 27 03:22:40.115946 kernel: QLogic iSCSI HBA Driver May 27 03:22:40.137854 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:22:40.168449 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:22:40.172783 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:22:40.231716 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 03:22:40.235758 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 03:22:40.299572 kernel: raid6: avx2x4 gen() 28694 MB/s May 27 03:22:40.316551 kernel: raid6: avx2x2 gen() 30199 MB/s May 27 03:22:40.333763 kernel: raid6: avx2x1 gen() 18606 MB/s May 27 03:22:40.333857 kernel: raid6: using algorithm avx2x2 gen() 30199 MB/s May 27 03:22:40.351810 kernel: raid6: .... xor() 17200 MB/s, rmw enabled May 27 03:22:40.351878 kernel: raid6: using avx2x2 recovery algorithm May 27 03:22:40.374563 kernel: xor: automatically using best checksumming function avx May 27 03:22:40.548557 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 03:22:40.556777 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 03:22:40.560137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:22:40.602244 systemd-udevd[472]: Using default interface naming scheme 'v255'. May 27 03:22:40.608903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:22:40.612215 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 03:22:40.646806 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation May 27 03:22:40.678316 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 27 03:22:40.682232 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:22:40.762145 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:22:40.766188 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 03:22:40.803548 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 27 03:22:40.811290 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 03:22:40.817300 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 03:22:40.817340 kernel: GPT:9289727 != 19775487 May 27 03:22:40.817356 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 03:22:40.819495 kernel: GPT:9289727 != 19775487 May 27 03:22:40.819541 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 03:22:40.819557 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:22:40.826540 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 27 03:22:40.832542 kernel: cryptd: max_cpu_qlen set to 1000 May 27 03:22:40.836543 kernel: libata version 3.00 loaded. May 27 03:22:40.841433 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:22:40.841613 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:22:40.845414 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:22:40.849552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 27 03:22:40.856625 kernel: ahci 0000:00:1f.2: version 3.0 May 27 03:22:40.856868 kernel: AES CTR mode by8 optimization enabled May 27 03:22:40.856880 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 27 03:22:40.859904 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 27 03:22:40.860071 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 27 03:22:40.860237 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 27 03:22:40.864358 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 03:22:40.871543 kernel: scsi host0: ahci May 27 03:22:40.882400 kernel: scsi host1: ahci May 27 03:22:40.888433 kernel: scsi host2: ahci May 27 03:22:40.888662 kernel: scsi host3: ahci May 27 03:22:40.891545 kernel: scsi host4: ahci May 27 03:22:40.893860 kernel: scsi host5: ahci May 27 03:22:40.894048 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 27 03:22:40.894061 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 27 03:22:40.896355 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 27 03:22:40.896385 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 27 03:22:40.896396 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 27 03:22:40.898804 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 27 03:22:40.901769 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 03:22:40.920556 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 03:22:40.920694 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
May 27 03:22:40.935974 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 03:22:40.947771 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 03:22:40.951101 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 03:22:40.952264 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:22:40.952317 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:22:40.955689 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:22:40.962170 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:22:40.963644 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 03:22:40.973885 disk-uuid[636]: Primary Header is updated. May 27 03:22:40.973885 disk-uuid[636]: Secondary Entries is updated. May 27 03:22:40.973885 disk-uuid[636]: Secondary Header is updated. May 27 03:22:40.977566 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:22:40.981568 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:22:40.986741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 03:22:41.202776 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 27 03:22:41.202868 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 27 03:22:41.202898 kernel: ata3.00: applying bridge limits May 27 03:22:41.203843 kernel: ata3.00: configured for UDMA/100 May 27 03:22:41.204542 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 27 03:22:41.211550 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 27 03:22:41.211577 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 27 03:22:41.211590 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 27 03:22:41.213543 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 27 03:22:41.213569 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 27 03:22:41.263559 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 27 03:22:41.263868 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 27 03:22:41.289563 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 27 03:22:41.621908 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 03:22:41.622707 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:22:41.625918 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:22:41.628687 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:22:41.632604 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 03:22:41.671324 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 03:22:41.984567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:22:41.985177 disk-uuid[640]: The operation has completed successfully. May 27 03:22:42.020664 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 03:22:42.020808 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
May 27 03:22:42.062956 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 03:22:42.090757 sh[671]: Success May 27 03:22:42.108851 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 03:22:42.108930 kernel: device-mapper: uevent: version 1.0.3 May 27 03:22:42.108943 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 03:22:42.119542 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 27 03:22:42.152077 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 03:22:42.154250 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 03:22:42.176858 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 03:22:42.184889 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 03:22:42.184953 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (683) May 27 03:22:42.186509 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522 May 27 03:22:42.186618 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 03:22:42.188539 kernel: BTRFS info (device dm-0): using free-space-tree May 27 03:22:42.193938 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 03:22:42.194580 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 03:22:42.197635 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 03:22:42.198791 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 03:22:42.199754 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 27 03:22:42.233121 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (716) May 27 03:22:42.233184 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:22:42.233199 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:22:42.234036 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:22:42.241539 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:22:42.242560 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 03:22:42.243973 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 03:22:42.321591 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:22:42.324128 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:22:42.414771 systemd-networkd[854]: lo: Link UP May 27 03:22:42.414782 systemd-networkd[854]: lo: Gained carrier May 27 03:22:42.416668 systemd-networkd[854]: Enumeration completed May 27 03:22:42.417103 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:22:42.417108 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:22:42.420023 ignition[757]: Ignition 2.21.0 May 27 03:22:42.417465 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 27 03:22:42.420030 ignition[757]: Stage: fetch-offline May 27 03:22:42.419116 systemd-networkd[854]: eth0: Link UP May 27 03:22:42.420062 ignition[757]: no configs at "/usr/lib/ignition/base.d" May 27 03:22:42.419121 systemd-networkd[854]: eth0: Gained carrier May 27 03:22:42.420071 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:22:42.419131 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:22:42.420156 ignition[757]: parsed url from cmdline: "" May 27 03:22:42.419839 systemd[1]: Reached target network.target - Network. May 27 03:22:42.420160 ignition[757]: no config URL provided May 27 03:22:42.420165 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" May 27 03:22:42.436560 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:22:42.420174 ignition[757]: no config at "/usr/lib/ignition/user.ign" May 27 03:22:42.420197 ignition[757]: op(1): [started] loading QEMU firmware config module May 27 03:22:42.420202 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 03:22:42.432439 ignition[757]: op(1): [finished] loading QEMU firmware config module May 27 03:22:42.479367 ignition[757]: parsing config with SHA512: 37f700b1305e54fbd915c8d9815837aa3b293281a5167048258720fe1f1e03332062cd5defd9af70cd343bf9287b7ec49463d6572c5204b2387dda162fc50f54 May 27 03:22:42.484361 unknown[757]: fetched base config from "system" May 27 03:22:42.484372 unknown[757]: fetched user config from "qemu" May 27 03:22:42.484688 ignition[757]: fetch-offline: fetch-offline passed May 27 03:22:42.484738 ignition[757]: Ignition finished successfully May 27 03:22:42.487841 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 27 03:22:42.489588 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 03:22:42.490531 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 03:22:42.535830 ignition[867]: Ignition 2.21.0 May 27 03:22:42.535844 ignition[867]: Stage: kargs May 27 03:22:42.536001 ignition[867]: no configs at "/usr/lib/ignition/base.d" May 27 03:22:42.536011 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:22:42.539988 ignition[867]: kargs: kargs passed May 27 03:22:42.540847 ignition[867]: Ignition finished successfully May 27 03:22:42.544948 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 03:22:42.548054 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 03:22:42.579413 ignition[875]: Ignition 2.21.0 May 27 03:22:42.579425 ignition[875]: Stage: disks May 27 03:22:42.579732 ignition[875]: no configs at "/usr/lib/ignition/base.d" May 27 03:22:42.579743 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:22:42.582620 ignition[875]: disks: disks passed May 27 03:22:42.586874 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 03:22:42.582672 ignition[875]: Ignition finished successfully May 27 03:22:42.588261 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 03:22:42.590262 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 03:22:42.592397 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:22:42.592836 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:22:42.596646 systemd[1]: Reached target basic.target - Basic System. May 27 03:22:42.597968 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 27 03:22:42.634082 systemd-fsck[885]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 03:22:42.644111 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 03:22:42.647207 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 03:22:42.771547 kernel: EXT4-fs (vda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none. May 27 03:22:42.772042 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 03:22:42.774344 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 03:22:42.777931 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:22:42.780598 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 03:22:42.782641 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 03:22:42.782689 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 03:22:42.782711 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:22:42.805203 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 03:22:42.808437 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 03:22:42.811634 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (894) May 27 03:22:42.813909 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:22:42.813928 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:22:42.813939 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:22:42.818678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 03:22:42.859204 initrd-setup-root[918]: cut: /sysroot/etc/passwd: No such file or directory May 27 03:22:42.863760 initrd-setup-root[925]: cut: /sysroot/etc/group: No such file or directory May 27 03:22:42.868937 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory May 27 03:22:42.874222 initrd-setup-root[939]: cut: /sysroot/etc/gshadow: No such file or directory May 27 03:22:42.986654 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 03:22:42.990455 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 03:22:42.993606 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 03:22:43.013550 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:22:43.028083 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 03:22:43.052366 ignition[1010]: INFO : Ignition 2.21.0 May 27 03:22:43.052366 ignition[1010]: INFO : Stage: mount May 27 03:22:43.054539 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:22:43.054539 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:22:43.054539 ignition[1010]: INFO : mount: mount passed May 27 03:22:43.054539 ignition[1010]: INFO : Ignition finished successfully May 27 03:22:43.060681 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 03:22:43.062834 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 03:22:43.183364 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 03:22:43.185331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 27 03:22:43.220593 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1022) May 27 03:22:43.220648 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:22:43.220660 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:22:43.222540 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:22:43.225906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:22:43.270529 ignition[1039]: INFO : Ignition 2.21.0 May 27 03:22:43.270529 ignition[1039]: INFO : Stage: files May 27 03:22:43.272555 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:22:43.272555 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:22:43.272555 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping May 27 03:22:43.277110 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 03:22:43.277110 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 03:22:43.280412 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 03:22:43.282377 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 03:22:43.284437 unknown[1039]: wrote ssh authorized keys file for user: core May 27 03:22:43.285711 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 03:22:43.288209 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 03:22:43.290202 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 27 03:22:43.371400 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: 
op(3): GET result: OK May 27 03:22:43.500483 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 03:22:43.508997 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 27 03:22:43.510811 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 27 03:22:43.512556 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 03:22:43.514432 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 03:22:43.516146 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:22:43.517927 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:22:43.519676 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:22:43.521487 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:22:43.637353 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:22:43.639388 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:22:43.641227 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:22:43.703132 ignition[1039]: INFO : 
files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:22:43.703132 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:22:43.708018 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 27 03:22:43.907782 systemd-networkd[854]: eth0: Gained IPv6LL May 27 03:22:44.278886 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 27 03:22:44.826844 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:22:44.826844 ignition[1039]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 27 03:22:44.831678 ignition[1039]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:22:44.838311 ignition[1039]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:22:44.838311 ignition[1039]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 27 03:22:44.838311 ignition[1039]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 27 03:22:44.843226 ignition[1039]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:22:44.843226 ignition[1039]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 03:22:44.843226 ignition[1039]: INFO : files: op(d): [finished] 
processing unit "coreos-metadata.service" May 27 03:22:44.843226 ignition[1039]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 27 03:22:44.863712 ignition[1039]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:22:44.867929 ignition[1039]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 03:22:44.869888 ignition[1039]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 27 03:22:44.869888 ignition[1039]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 27 03:22:44.872933 ignition[1039]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 27 03:22:44.872933 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 03:22:44.872933 ignition[1039]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 03:22:44.872933 ignition[1039]: INFO : files: files passed May 27 03:22:44.872933 ignition[1039]: INFO : Ignition finished successfully May 27 03:22:44.877147 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 03:22:44.880613 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 03:22:44.882963 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 03:22:44.897731 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 03:22:44.897889 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 27 03:22:44.901206 initrd-setup-root-after-ignition[1068]: grep: /sysroot/oem/oem-release: No such file or directory May 27 03:22:44.904103 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:22:44.904103 initrd-setup-root-after-ignition[1070]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 03:22:44.907780 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:22:44.911116 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:22:44.912724 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 03:22:44.916086 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 03:22:44.970731 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 03:22:44.970927 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 03:22:44.972260 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 03:22:44.974452 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 03:22:44.975056 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 03:22:44.978628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 03:22:45.008374 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:22:45.012693 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 03:22:45.048106 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 03:22:45.050792 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:22:45.053228 systemd[1]: Stopped target timers.target - Timer Units. 
May 27 03:22:45.053460 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 03:22:45.053681 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:22:45.058397 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 03:22:45.059638 systemd[1]: Stopped target basic.target - Basic System. May 27 03:22:45.062453 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 03:22:45.063775 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:22:45.066089 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 03:22:45.067315 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 03:22:45.067825 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 03:22:45.068154 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:22:45.068555 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 03:22:45.069041 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 03:22:45.069356 systemd[1]: Stopped target swap.target - Swaps. May 27 03:22:45.069843 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 03:22:45.070052 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 03:22:45.083659 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 03:22:45.083865 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:22:45.085927 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 03:22:45.088107 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:22:45.091662 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 03:22:45.091818 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
May 27 03:22:45.095146 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 03:22:45.095267 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:22:45.097708 systemd[1]: Stopped target paths.target - Path Units. May 27 03:22:45.099702 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 03:22:45.100630 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:22:45.100973 systemd[1]: Stopped target slices.target - Slice Units. May 27 03:22:45.101479 systemd[1]: Stopped target sockets.target - Socket Units. May 27 03:22:45.107243 systemd[1]: iscsid.socket: Deactivated successfully. May 27 03:22:45.107343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:22:45.109091 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 03:22:45.109174 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:22:45.111007 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 03:22:45.111156 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:22:45.113191 systemd[1]: ignition-files.service: Deactivated successfully. May 27 03:22:45.113292 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 03:22:45.117181 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 03:22:45.119275 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 03:22:45.119393 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:22:45.122909 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 03:22:45.127497 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 03:22:45.127666 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 27 03:22:45.129922 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 03:22:45.130025 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:22:45.135394 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 03:22:45.138670 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 03:22:45.167244 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 03:22:45.177598 ignition[1094]: INFO : Ignition 2.21.0 May 27 03:22:45.177598 ignition[1094]: INFO : Stage: umount May 27 03:22:45.180308 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:22:45.180308 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:22:45.182797 ignition[1094]: INFO : umount: umount passed May 27 03:22:45.182797 ignition[1094]: INFO : Ignition finished successfully May 27 03:22:45.185744 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 03:22:45.185896 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 03:22:45.188069 systemd[1]: Stopped target network.target - Network. May 27 03:22:45.189649 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 03:22:45.189709 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 03:22:45.191607 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 03:22:45.191664 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 03:22:45.193712 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 03:22:45.194777 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 03:22:45.196045 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 03:22:45.196099 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 03:22:45.196555 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
May 27 03:22:45.197132 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 03:22:45.205303 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 03:22:45.205499 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 03:22:45.210128 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 03:22:45.210483 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 03:22:45.210563 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:22:45.215780 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 03:22:45.221346 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 03:22:45.221600 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 03:22:45.225952 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 03:22:45.226195 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 03:22:45.229692 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 03:22:45.229752 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 03:22:45.233089 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 03:22:45.233171 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 03:22:45.233259 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:22:45.235068 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 03:22:45.235122 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:22:45.239383 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 03:22:45.239441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 27 03:22:45.239959 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:22:45.241361 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 03:22:45.272304 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 03:22:45.273669 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:22:45.275215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 03:22:45.275257 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 03:22:45.277321 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 03:22:45.277354 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:22:45.277741 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 03:22:45.277787 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 03:22:45.280595 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 03:22:45.280644 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 03:22:45.284181 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 03:22:45.284242 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:22:45.286569 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 03:22:45.289423 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 03:22:45.289488 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:22:45.295377 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 03:22:45.295437 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:22:45.298999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 27 03:22:45.299062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:22:45.303240 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 03:22:45.305739 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 03:22:45.314755 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 03:22:45.314893 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 03:22:45.318002 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 03:22:45.318153 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 03:22:45.320676 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 03:22:45.323247 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 03:22:45.323345 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 03:22:45.327489 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 03:22:45.358857 systemd[1]: Switching root. May 27 03:22:45.403580 systemd-journald[220]: Journal stopped May 27 03:22:46.765962 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
May 27 03:22:46.766032 kernel: SELinux: policy capability network_peer_controls=1 May 27 03:22:46.766045 kernel: SELinux: policy capability open_perms=1 May 27 03:22:46.766057 kernel: SELinux: policy capability extended_socket_class=1 May 27 03:22:46.766071 kernel: SELinux: policy capability always_check_network=0 May 27 03:22:46.766082 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 03:22:46.766093 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 03:22:46.766104 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 03:22:46.766115 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 03:22:46.766126 kernel: SELinux: policy capability userspace_initial_context=0 May 27 03:22:46.766138 kernel: audit: type=1403 audit(1748316165.816:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 03:22:46.766150 systemd[1]: Successfully loaded SELinux policy in 61.257ms. May 27 03:22:46.766175 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.245ms. May 27 03:22:46.766190 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:22:46.766202 systemd[1]: Detected virtualization kvm. May 27 03:22:46.766214 systemd[1]: Detected architecture x86-64. May 27 03:22:46.766230 systemd[1]: Detected first boot. May 27 03:22:46.766252 systemd[1]: Initializing machine ID from VM UUID. May 27 03:22:46.766265 zram_generator::config[1140]: No configuration found. 
May 27 03:22:46.766283 kernel: Guest personality initialized and is inactive May 27 03:22:46.766294 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 03:22:46.766307 kernel: Initialized host personality May 27 03:22:46.766318 kernel: NET: Registered PF_VSOCK protocol family May 27 03:22:46.766329 systemd[1]: Populated /etc with preset unit settings. May 27 03:22:46.766342 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 03:22:46.766354 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 03:22:46.766366 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 03:22:46.766378 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 03:22:46.766390 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 03:22:46.766402 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 03:22:46.766416 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 03:22:46.766433 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 03:22:46.766450 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 03:22:46.766463 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 03:22:46.766475 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 03:22:46.766487 systemd[1]: Created slice user.slice - User and Session Slice. May 27 03:22:46.766506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:22:46.766535 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:22:46.766562 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
May 27 03:22:46.766582 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 03:22:46.766595 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 03:22:46.766608 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:22:46.766620 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 03:22:46.766633 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:22:46.766644 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:22:46.766656 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 03:22:46.766670 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 03:22:46.766682 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 03:22:46.766694 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 03:22:46.766706 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:22:46.766718 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:22:46.766730 systemd[1]: Reached target slices.target - Slice Units. May 27 03:22:46.766742 systemd[1]: Reached target swap.target - Swaps. May 27 03:22:46.766753 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 03:22:46.766765 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 03:22:46.766779 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 03:22:46.766791 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:22:46.766803 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:22:46.766815 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 27 03:22:46.766847 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 03:22:46.766859 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 03:22:46.766879 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 03:22:46.766892 systemd[1]: Mounting media.mount - External Media Directory... May 27 03:22:46.766904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:22:46.766919 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 03:22:46.766930 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 03:22:46.766942 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 03:22:46.766955 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 03:22:46.766966 systemd[1]: Reached target machines.target - Containers. May 27 03:22:46.766978 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 03:22:46.766990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:22:46.767001 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:22:46.767013 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 03:22:46.767028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:22:46.767040 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:22:46.767052 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:22:46.767063 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
May 27 03:22:46.767075 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:22:46.767088 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 03:22:46.767099 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 03:22:46.767119 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 03:22:46.767134 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 03:22:46.767146 systemd[1]: Stopped systemd-fsck-usr.service. May 27 03:22:46.767158 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:22:46.767170 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:22:46.767182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:22:46.767196 kernel: fuse: init (API version 7.41) May 27 03:22:46.767208 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:22:46.767223 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 03:22:46.767237 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 03:22:46.767251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:22:46.767263 systemd[1]: verity-setup.service: Deactivated successfully. May 27 03:22:46.767276 systemd[1]: Stopped verity-setup.service. May 27 03:22:46.767288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 27 03:22:46.767302 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 03:22:46.767316 kernel: loop: module loaded May 27 03:22:46.767327 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 03:22:46.767339 systemd[1]: Mounted media.mount - External Media Directory. May 27 03:22:46.767351 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 03:22:46.767363 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 03:22:46.767378 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 03:22:46.767396 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 03:22:46.767408 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:22:46.767420 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 03:22:46.767432 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 03:22:46.767466 systemd-journald[1215]: Collecting audit messages is disabled. May 27 03:22:46.767498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:22:46.767511 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:22:46.767574 kernel: ACPI: bus type drm_connector registered May 27 03:22:46.767586 systemd-journald[1215]: Journal started May 27 03:22:46.767608 systemd-journald[1215]: Runtime Journal (/run/log/journal/97155c1c646c4404802099464a5130a5) is 6M, max 48.5M, 42.4M free. May 27 03:22:46.398667 systemd[1]: Queued start job for default target multi-user.target. May 27 03:22:46.424689 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 03:22:46.425178 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 03:22:46.771139 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:22:46.772157 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 27 03:22:46.772392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:22:46.773962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:22:46.774232 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:22:46.775932 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 03:22:46.776190 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 03:22:46.777717 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:22:46.777967 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:22:46.779666 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:22:46.781240 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:22:46.782845 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 03:22:46.784756 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 03:22:46.804210 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:22:46.807335 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 03:22:46.809819 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 03:22:46.811090 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 03:22:46.811182 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:22:46.813609 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 03:22:46.825579 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
May 27 03:22:46.828725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:22:46.830282 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 03:22:46.833121 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 03:22:46.834444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:22:46.835751 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 03:22:46.837013 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:22:46.839624 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:22:46.842813 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 03:22:46.845748 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 03:22:46.850847 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 03:22:46.852826 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 03:22:46.854431 systemd-journald[1215]: Time spent on flushing to /var/log/journal/97155c1c646c4404802099464a5130a5 is 41.727ms for 1067 entries. May 27 03:22:46.854431 systemd-journald[1215]: System Journal (/var/log/journal/97155c1c646c4404802099464a5130a5) is 8M, max 195.6M, 187.6M free. May 27 03:22:46.913371 systemd-journald[1215]: Received client request to flush runtime journal. May 27 03:22:46.913423 kernel: loop0: detected capacity change from 0 to 146240 May 27 03:22:46.869744 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 03:22:46.871382 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
May 27 03:22:46.874093 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 03:22:46.875741 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:22:46.878069 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:22:46.918004 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 03:22:46.921252 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 03:22:46.927550 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 03:22:46.933464 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 03:22:46.937727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:22:46.942560 kernel: loop1: detected capacity change from 0 to 229808 May 27 03:22:46.965543 kernel: loop2: detected capacity change from 0 to 113872 May 27 03:22:46.970423 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. May 27 03:22:46.970439 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. May 27 03:22:46.976995 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:22:46.994547 kernel: loop3: detected capacity change from 0 to 146240 May 27 03:22:47.011538 kernel: loop4: detected capacity change from 0 to 229808 May 27 03:22:47.018547 kernel: loop5: detected capacity change from 0 to 113872 May 27 03:22:47.023047 (sd-merge)[1281]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 27 03:22:47.023648 (sd-merge)[1281]: Merged extensions into '/usr'. May 27 03:22:47.078640 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)... May 27 03:22:47.078663 systemd[1]: Reloading... May 27 03:22:47.160669 zram_generator::config[1309]: No configuration found. 
May 27 03:22:47.268239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:22:47.348133 ldconfig[1254]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 03:22:47.357605 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 03:22:47.357825 systemd[1]: Reloading finished in 278 ms. May 27 03:22:47.422549 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 03:22:47.424213 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 03:22:47.464878 systemd[1]: Starting ensure-sysext.service... May 27 03:22:47.467723 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:22:47.491802 systemd[1]: Reload requested from client PID 1344 ('systemctl') (unit ensure-sysext.service)... May 27 03:22:47.491823 systemd[1]: Reloading... May 27 03:22:47.533214 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 03:22:47.533255 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 03:22:47.535585 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 03:22:47.535889 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 03:22:47.537141 systemd-tmpfiles[1345]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 03:22:47.537633 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. May 27 03:22:47.537827 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. 
May 27 03:22:47.546331 systemd-tmpfiles[1345]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:22:47.546484 systemd-tmpfiles[1345]: Skipping /boot May 27 03:22:47.568168 systemd-tmpfiles[1345]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:22:47.568261 systemd-tmpfiles[1345]: Skipping /boot May 27 03:22:47.571552 zram_generator::config[1378]: No configuration found. May 27 03:22:47.693477 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:22:47.779991 systemd[1]: Reloading finished in 287 ms. May 27 03:22:47.824777 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:22:47.840126 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:22:47.875931 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 03:22:47.888417 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 03:22:47.891852 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:22:47.893982 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 03:22:47.897243 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:22:47.897547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:22:47.947442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:22:47.952589 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:22:47.955085 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 27 03:22:47.957858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:22:47.958022 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:22:47.958162 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:22:47.960336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:22:47.960754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:22:47.964121 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:22:47.964726 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:22:47.970816 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:22:47.971037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:22:47.973161 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 03:22:47.976218 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 03:22:47.985684 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 03:22:47.988078 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 03:22:47.995051 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:22:47.995275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:22:47.996778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
May 27 03:22:47.998774 augenrules[1445]: No rules May 27 03:22:47.999068 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:22:48.001025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:22:48.011661 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:22:48.012839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:22:48.012887 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:22:48.014236 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:22:48.016554 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 03:22:48.020044 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 03:22:48.021228 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 03:22:48.021262 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:22:48.022388 systemd[1]: Finished ensure-sysext.service. May 27 03:22:48.023681 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:22:48.023953 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:22:48.025450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:22:48.025703 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:22:48.027298 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 27 03:22:48.027606 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:22:48.029078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:22:48.029321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:22:48.031111 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:22:48.031355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:22:48.033078 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 03:22:48.041780 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:22:48.041902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:22:48.045669 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 03:22:48.056548 systemd-udevd[1453]: Using default interface naming scheme 'v255'. May 27 03:22:48.075012 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 03:22:48.076886 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:22:48.087988 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:22:48.184219 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 03:22:48.206573 kernel: mousedev: PS/2 mouse device common for all mice May 27 03:22:48.215549 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 03:22:48.222532 kernel: ACPI: button: Power Button [PWRF] May 27 03:22:48.223246 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 03:22:48.225741 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 27 03:22:48.242139 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 27 03:22:48.242427 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 03:22:48.242627 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 03:22:48.257581 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 03:22:48.343244 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 03:22:48.345336 systemd[1]: Reached target time-set.target - System Time Set. May 27 03:22:48.348617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:22:48.394972 kernel: kvm_amd: TSC scaling supported May 27 03:22:48.395041 kernel: kvm_amd: Nested Virtualization enabled May 27 03:22:48.395077 kernel: kvm_amd: Nested Paging enabled May 27 03:22:48.396186 kernel: kvm_amd: LBR virtualization supported May 27 03:22:48.396225 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 27 03:22:48.397589 kernel: kvm_amd: Virtual GIF supported May 27 03:22:48.415263 systemd-networkd[1484]: lo: Link UP May 27 03:22:48.415274 systemd-networkd[1484]: lo: Gained carrier May 27 03:22:48.416918 systemd-networkd[1484]: Enumeration completed May 27 03:22:48.417010 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:22:48.417473 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:22:48.417484 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:22:48.418175 systemd-networkd[1484]: eth0: Link UP May 27 03:22:48.418371 systemd-networkd[1484]: eth0: Gained carrier May 27 03:22:48.418392 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 27 03:22:48.419202 systemd-resolved[1419]: Positive Trust Anchors: May 27 03:22:48.419214 systemd-resolved[1419]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:22:48.419252 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:22:48.420668 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 03:22:48.423777 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 03:22:48.426043 systemd-resolved[1419]: Defaulting to hostname 'linux'. May 27 03:22:48.429658 systemd-networkd[1484]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:22:48.430461 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. May 27 03:22:48.430585 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:22:48.430831 systemd[1]: Reached target network.target - Network. May 27 03:22:48.431190 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:22:49.195513 systemd-timesyncd[1463]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 27 03:22:49.195641 systemd-timesyncd[1463]: Initial clock synchronization to Tue 2025-05-27 03:22:49.195392 UTC. May 27 03:22:49.201412 systemd-resolved[1419]: Clock change detected. Flushing caches. 
May 27 03:22:49.218281 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 03:22:49.237368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:22:49.238813 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:22:49.240088 kernel: EDAC MC: Ver: 3.0.0 May 27 03:22:49.240384 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 03:22:49.241884 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 03:22:49.243255 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 03:22:49.244580 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 03:22:49.245789 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 03:22:49.248181 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 03:22:49.249464 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 03:22:49.249495 systemd[1]: Reached target paths.target - Path Units. May 27 03:22:49.250412 systemd[1]: Reached target timers.target - Timer Units. May 27 03:22:49.252465 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 03:22:49.257228 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 03:22:49.260642 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 03:22:49.262067 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 03:22:49.264277 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
May 27 03:22:49.268539 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 03:22:49.271787 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 03:22:49.273576 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 03:22:49.275613 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:22:49.276673 systemd[1]: Reached target basic.target - Basic System. May 27 03:22:49.277683 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 03:22:49.277713 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 03:22:49.279017 systemd[1]: Starting containerd.service - containerd container runtime... May 27 03:22:49.281323 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 03:22:49.283282 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 03:22:49.295903 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 03:22:49.298146 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 03:22:49.299325 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 03:22:49.300793 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 03:22:49.304074 jq[1544]: false May 27 03:22:49.304230 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 03:22:49.306654 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 03:22:49.311353 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 03:22:49.314304 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 27 03:22:49.317391 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Refreshing passwd entry cache May 27 03:22:49.317396 oslogin_cache_refresh[1546]: Refreshing passwd entry cache May 27 03:22:49.319979 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 03:22:49.320392 extend-filesystems[1545]: Found loop3 May 27 03:22:49.321950 extend-filesystems[1545]: Found loop4 May 27 03:22:49.321950 extend-filesystems[1545]: Found loop5 May 27 03:22:49.321950 extend-filesystems[1545]: Found sr0 May 27 03:22:49.321950 extend-filesystems[1545]: Found vda May 27 03:22:49.321950 extend-filesystems[1545]: Found vda1 May 27 03:22:49.321950 extend-filesystems[1545]: Found vda2 May 27 03:22:49.321950 extend-filesystems[1545]: Found vda3 May 27 03:22:49.321950 extend-filesystems[1545]: Found usr May 27 03:22:49.321950 extend-filesystems[1545]: Found vda4 May 27 03:22:49.321950 extend-filesystems[1545]: Found vda6 May 27 03:22:49.321950 extend-filesystems[1545]: Found vda7 May 27 03:22:49.321950 extend-filesystems[1545]: Found vda9 May 27 03:22:49.321950 extend-filesystems[1545]: Checking size of /dev/vda9 May 27 03:22:49.327198 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 03:22:49.330317 oslogin_cache_refresh[1546]: Failure getting users, quitting May 27 03:22:49.335545 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Failure getting users, quitting May 27 03:22:49.335545 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:22:49.335545 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Refreshing group entry cache May 27 03:22:49.327781 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 27 03:22:49.330334 oslogin_cache_refresh[1546]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:22:49.328642 systemd[1]: Starting update-engine.service - Update Engine... May 27 03:22:49.330379 oslogin_cache_refresh[1546]: Refreshing group entry cache May 27 03:22:49.335365 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 03:22:49.339119 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 03:22:49.340717 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 03:22:49.340947 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 03:22:49.341270 systemd[1]: motdgen.service: Deactivated successfully. May 27 03:22:49.341499 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 03:22:49.343421 extend-filesystems[1545]: Resized partition /dev/vda9 May 27 03:22:49.344923 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 03:22:49.345621 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 03:22:49.345857 jq[1562]: true May 27 03:22:49.346268 extend-filesystems[1568]: resize2fs 1.47.2 (1-Jan-2025) May 27 03:22:49.355265 update_engine[1560]: I20250527 03:22:49.355100 1560 main.cc:92] Flatcar Update Engine starting May 27 03:22:49.363761 jq[1570]: true May 27 03:22:49.364150 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 03:22:49.369152 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Failure getting groups, quitting May 27 03:22:49.369152 google_oslogin_nss_cache[1546]: oslogin_cache_refresh[1546]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
May 27 03:22:49.369142 oslogin_cache_refresh[1546]: Failure getting groups, quitting May 27 03:22:49.369159 oslogin_cache_refresh[1546]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:22:49.374367 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 03:22:49.374649 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 03:22:49.414804 systemd-logind[1554]: Watching system buttons on /dev/input/event2 (Power Button) May 27 03:22:49.414837 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 03:22:49.415120 systemd-logind[1554]: New seat seat0. May 27 03:22:49.421268 systemd[1]: Started systemd-logind.service - User Login Management. May 27 03:22:49.493229 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 27 03:22:49.495494 tar[1567]: linux-amd64/LICENSE May 27 03:22:49.495795 tar[1567]: linux-amd64/helm May 27 03:22:49.618849 dbus-daemon[1542]: [system] SELinux support is enabled May 27 03:22:49.625349 update_engine[1560]: I20250527 03:22:49.623879 1560 update_check_scheduler.cc:74] Next update check in 7m50s May 27 03:22:49.619110 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 03:22:49.624870 dbus-daemon[1542]: [system] Successfully activated service 'org.freedesktop.systemd1' May 27 03:22:49.623347 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 03:22:49.623373 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 03:22:49.624949 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
May 27 03:22:49.624965 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 03:22:49.626325 systemd[1]: Started update-engine.service - Update Engine. May 27 03:22:49.629484 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 03:22:49.648364 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 27 03:22:49.666930 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 03:22:49.694143 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 03:22:49.694143 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 03:22:49.694143 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 27 03:22:49.700299 extend-filesystems[1545]: Resized filesystem in /dev/vda9 May 27 03:22:49.703020 bash[1599]: Updated "/home/core/.ssh/authorized_keys" May 27 03:22:49.696290 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 03:22:49.696582 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 03:22:49.701337 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:22:49.706592 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 27 03:22:49.788337 containerd[1579]: time="2025-05-27T03:22:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 03:22:49.790941 containerd[1579]: time="2025-05-27T03:22:49.790885446Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 03:22:49.803321 containerd[1579]: time="2025-05-27T03:22:49.803256165Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.571µs"
May 27 03:22:49.803321 containerd[1579]: time="2025-05-27T03:22:49.803303865Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 03:22:49.803321 containerd[1579]: time="2025-05-27T03:22:49.803327088Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 03:22:49.803602 containerd[1579]: time="2025-05-27T03:22:49.803574061Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 03:22:49.803602 containerd[1579]: time="2025-05-27T03:22:49.803596213Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 03:22:49.803651 containerd[1579]: time="2025-05-27T03:22:49.803627662Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:22:49.803733 containerd[1579]: time="2025-05-27T03:22:49.803704516Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:22:49.803733 containerd[1579]: time="2025-05-27T03:22:49.803722590Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:22:49.804110 containerd[1579]: time="2025-05-27T03:22:49.804078668Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:22:49.804110 containerd[1579]: time="2025-05-27T03:22:49.804100549Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:22:49.804159 containerd[1579]: time="2025-05-27T03:22:49.804113433Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:22:49.804159 containerd[1579]: time="2025-05-27T03:22:49.804133591Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 03:22:49.804322 containerd[1579]: time="2025-05-27T03:22:49.804288671Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 03:22:49.804643 containerd[1579]: time="2025-05-27T03:22:49.804615755Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:22:49.804664 containerd[1579]: time="2025-05-27T03:22:49.804654447Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:22:49.804685 containerd[1579]: time="2025-05-27T03:22:49.804666199Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 03:22:49.804784 containerd[1579]: time="2025-05-27T03:22:49.804709901Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 03:22:49.805036 containerd[1579]: time="2025-05-27T03:22:49.805014713Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 03:22:49.805115 containerd[1579]: time="2025-05-27T03:22:49.805097027Z" level=info msg="metadata content store policy set" policy=shared
May 27 03:22:49.813954 containerd[1579]: time="2025-05-27T03:22:49.813914573Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 03:22:49.814056 containerd[1579]: time="2025-05-27T03:22:49.813967893Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 03:22:49.814056 containerd[1579]: time="2025-05-27T03:22:49.813983002Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 03:22:49.814056 containerd[1579]: time="2025-05-27T03:22:49.813995155Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 03:22:49.814056 containerd[1579]: time="2025-05-27T03:22:49.814007408Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 03:22:49.814056 containerd[1579]: time="2025-05-27T03:22:49.814018228Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 03:22:49.814056 containerd[1579]: time="2025-05-27T03:22:49.814034548Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 03:22:49.814056 containerd[1579]: time="2025-05-27T03:22:49.814046030Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 03:22:49.814056 containerd[1579]: time="2025-05-27T03:22:49.814057181Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814072089Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814080575Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814092167Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814303002Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814323110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814335403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814345512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814355751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814365870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814382581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814392079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814402749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 03:22:49.814518 containerd[1579]: time="2025-05-27T03:22:49.814412557Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 03:22:49.814876 containerd[1579]: time="2025-05-27T03:22:49.814664390Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 03:22:49.814876 containerd[1579]: time="2025-05-27T03:22:49.814781058Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 03:22:49.814876 containerd[1579]: time="2025-05-27T03:22:49.814799583Z" level=info msg="Start snapshots syncer"
May 27 03:22:49.814876 containerd[1579]: time="2025-05-27T03:22:49.814816866Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 03:22:49.815297 containerd[1579]: time="2025-05-27T03:22:49.815240741Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 03:22:49.815447 containerd[1579]: time="2025-05-27T03:22:49.815321111Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 03:22:49.816549 containerd[1579]: time="2025-05-27T03:22:49.816516222Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 03:22:49.817143 containerd[1579]: time="2025-05-27T03:22:49.817094878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 03:22:49.817391 containerd[1579]: time="2025-05-27T03:22:49.817327925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 03:22:49.817391 containerd[1579]: time="2025-05-27T03:22:49.817351309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 03:22:49.817391 containerd[1579]: time="2025-05-27T03:22:49.817364323Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 03:22:49.817598 containerd[1579]: time="2025-05-27T03:22:49.817566853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 03:22:49.817686 containerd[1579]: time="2025-05-27T03:22:49.817671750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 03:22:49.817820 containerd[1579]: time="2025-05-27T03:22:49.817746199Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 03:22:49.817820 containerd[1579]: time="2025-05-27T03:22:49.817778630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 03:22:49.817905 containerd[1579]: time="2025-05-27T03:22:49.817792205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 03:22:49.817987 containerd[1579]: time="2025-05-27T03:22:49.817953909Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 03:22:49.818950 containerd[1579]: time="2025-05-27T03:22:49.818931522Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819000662Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819012814Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819023064Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819032341Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819043622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819055344Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819075031Z" level=info msg="runtime interface created"
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819081042Z" level=info msg="created NRI interface"
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819090831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819102763Z" level=info msg="Connect containerd service"
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.819145894Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 03:22:49.820245 containerd[1579]: time="2025-05-27T03:22:49.820064527Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 03:22:49.918774 containerd[1579]: time="2025-05-27T03:22:49.918656131Z" level=info msg="Start subscribing containerd event"
May 27 03:22:49.918774 containerd[1579]: time="2025-05-27T03:22:49.918728287Z" level=info msg="Start recovering state"
May 27 03:22:49.918916 containerd[1579]: time="2025-05-27T03:22:49.918836219Z" level=info msg="Start event monitor"
May 27 03:22:49.918916 containerd[1579]: time="2025-05-27T03:22:49.918850796Z" level=info msg="Start cni network conf syncer for default"
May 27 03:22:49.918916 containerd[1579]: time="2025-05-27T03:22:49.918859823Z" level=info msg="Start streaming server"
May 27 03:22:49.918916 containerd[1579]: time="2025-05-27T03:22:49.918871395Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 03:22:49.918916 containerd[1579]: time="2025-05-27T03:22:49.918880933Z" level=info msg="runtime interface starting up..."
May 27 03:22:49.918916 containerd[1579]: time="2025-05-27T03:22:49.918889930Z" level=info msg="starting plugins..."
May 27 03:22:49.919063 containerd[1579]: time="2025-05-27T03:22:49.918908965Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 03:22:49.919329 containerd[1579]: time="2025-05-27T03:22:49.919297284Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 03:22:49.919495 containerd[1579]: time="2025-05-27T03:22:49.919478383Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 03:22:49.919794 systemd[1]: Started containerd.service - containerd container runtime.
May 27 03:22:49.921379 containerd[1579]: time="2025-05-27T03:22:49.921360142Z" level=info msg="containerd successfully booted in 0.133934s"
May 27 03:22:49.926934 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 03:22:49.952650 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 03:22:49.955932 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 03:22:49.960235 tar[1567]: linux-amd64/README.md
May 27 03:22:49.984469 systemd[1]: issuegen.service: Deactivated successfully.
May 27 03:22:49.984809 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 03:22:49.988457 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 03:22:49.990404 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 03:22:50.012832 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 03:22:50.016312 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 03:22:50.019166 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 03:22:50.020925 systemd[1]: Reached target getty.target - Login Prompts.
May 27 03:22:50.236424 systemd-networkd[1484]: eth0: Gained IPv6LL
May 27 03:22:50.239579 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 03:22:50.241482 systemd[1]: Reached target network-online.target - Network is Online.
May 27 03:22:50.244330 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 27 03:22:50.247478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:22:50.271575 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 03:22:50.295155 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 03:22:50.296979 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 27 03:22:50.297316 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 27 03:22:50.299844 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 03:22:51.001319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:22:51.004758 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 03:22:51.007300 systemd[1]: Startup finished in 3.067s (kernel) + 6.144s (initrd) + 4.489s (userspace) = 13.702s.
May 27 03:22:51.015743 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 03:22:51.436035 kubelet[1673]: E0527 03:22:51.435900 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 03:22:51.439861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 03:22:51.440050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 03:22:51.440469 systemd[1]: kubelet.service: Consumed 987ms CPU time, 266.2M memory peak.
May 27 03:22:53.488663 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 03:22:53.489911 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:49964.service - OpenSSH per-connection server daemon (10.0.0.1:49964).
May 27 03:22:53.543806 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 49964 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:22:53.545827 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:53.552547 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 03:22:53.553639 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 03:22:53.560749 systemd-logind[1554]: New session 1 of user core.
May 27 03:22:53.580137 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 03:22:53.583273 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 03:22:53.597848 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 03:22:53.600286 systemd-logind[1554]: New session c1 of user core.
May 27 03:22:53.751396 systemd[1690]: Queued start job for default target default.target.
May 27 03:22:53.760462 systemd[1690]: Created slice app.slice - User Application Slice.
May 27 03:22:53.760489 systemd[1690]: Reached target paths.target - Paths.
May 27 03:22:53.760529 systemd[1690]: Reached target timers.target - Timers.
May 27 03:22:53.762243 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 03:22:53.774004 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 03:22:53.774163 systemd[1690]: Reached target sockets.target - Sockets.
May 27 03:22:53.774226 systemd[1690]: Reached target basic.target - Basic System.
May 27 03:22:53.774268 systemd[1690]: Reached target default.target - Main User Target.
May 27 03:22:53.774301 systemd[1690]: Startup finished in 167ms.
May 27 03:22:53.774600 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 03:22:53.776230 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 03:22:53.842441 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:49966.service - OpenSSH per-connection server daemon (10.0.0.1:49966).
May 27 03:22:53.890814 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 49966 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:22:53.892495 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:53.897132 systemd-logind[1554]: New session 2 of user core.
May 27 03:22:53.913343 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 03:22:53.967790 sshd[1703]: Connection closed by 10.0.0.1 port 49966
May 27 03:22:53.968131 sshd-session[1701]: pam_unix(sshd:session): session closed for user core
May 27 03:22:53.981568 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:49966.service: Deactivated successfully.
May 27 03:22:53.983519 systemd[1]: session-2.scope: Deactivated successfully.
May 27 03:22:53.984297 systemd-logind[1554]: Session 2 logged out. Waiting for processes to exit.
May 27 03:22:53.987869 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:49978.service - OpenSSH per-connection server daemon (10.0.0.1:49978).
May 27 03:22:53.988663 systemd-logind[1554]: Removed session 2.
May 27 03:22:54.051707 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 49978 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:22:54.053258 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:54.057651 systemd-logind[1554]: New session 3 of user core.
May 27 03:22:54.067356 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 03:22:54.117492 sshd[1711]: Connection closed by 10.0.0.1 port 49978
May 27 03:22:54.117564 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
May 27 03:22:54.127081 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:49978.service: Deactivated successfully.
May 27 03:22:54.128915 systemd[1]: session-3.scope: Deactivated successfully.
May 27 03:22:54.129726 systemd-logind[1554]: Session 3 logged out. Waiting for processes to exit.
May 27 03:22:54.132956 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:49988.service - OpenSSH per-connection server daemon (10.0.0.1:49988).
May 27 03:22:54.133688 systemd-logind[1554]: Removed session 3.
May 27 03:22:54.194390 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 49988 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:22:54.195925 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:54.200577 systemd-logind[1554]: New session 4 of user core.
May 27 03:22:54.210326 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 03:22:54.266279 sshd[1719]: Connection closed by 10.0.0.1 port 49988
May 27 03:22:54.266626 sshd-session[1717]: pam_unix(sshd:session): session closed for user core
May 27 03:22:54.280610 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:49988.service: Deactivated successfully.
May 27 03:22:54.282358 systemd[1]: session-4.scope: Deactivated successfully.
May 27 03:22:54.283188 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit.
May 27 03:22:54.286111 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:49994.service - OpenSSH per-connection server daemon (10.0.0.1:49994).
May 27 03:22:54.286837 systemd-logind[1554]: Removed session 4.
May 27 03:22:54.334285 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 49994 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:22:54.335665 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:54.340727 systemd-logind[1554]: New session 5 of user core.
May 27 03:22:54.350385 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 03:22:54.409927 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 03:22:54.410338 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:22:54.435882 sudo[1728]: pam_unix(sudo:session): session closed for user root
May 27 03:22:54.438091 sshd[1727]: Connection closed by 10.0.0.1 port 49994
May 27 03:22:54.438582 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
May 27 03:22:54.449053 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:49994.service: Deactivated successfully.
May 27 03:22:54.450933 systemd[1]: session-5.scope: Deactivated successfully.
May 27 03:22:54.451702 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit.
May 27 03:22:54.454763 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:50000.service - OpenSSH per-connection server daemon (10.0.0.1:50000).
May 27 03:22:54.455288 systemd-logind[1554]: Removed session 5.
May 27 03:22:54.507166 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 50000 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:22:54.508914 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:54.513772 systemd-logind[1554]: New session 6 of user core.
May 27 03:22:54.524354 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 03:22:54.577902 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 03:22:54.578235 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:22:54.586425 sudo[1738]: pam_unix(sudo:session): session closed for user root
May 27 03:22:54.592618 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 03:22:54.592912 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:22:54.602527 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:22:54.651060 augenrules[1760]: No rules
May 27 03:22:54.653932 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:22:54.654266 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:22:54.655633 sudo[1737]: pam_unix(sudo:session): session closed for user root
May 27 03:22:54.657431 sshd[1736]: Connection closed by 10.0.0.1 port 50000
May 27 03:22:54.657723 sshd-session[1734]: pam_unix(sshd:session): session closed for user core
May 27 03:22:54.674504 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:50000.service: Deactivated successfully.
May 27 03:22:54.676252 systemd[1]: session-6.scope: Deactivated successfully.
May 27 03:22:54.677068 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit.
May 27 03:22:54.679712 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:50010.service - OpenSSH per-connection server daemon (10.0.0.1:50010).
May 27 03:22:54.680458 systemd-logind[1554]: Removed session 6.
May 27 03:22:54.740513 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 50010 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:22:54.742304 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:22:54.747170 systemd-logind[1554]: New session 7 of user core.
May 27 03:22:54.760358 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 03:22:54.815549 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 03:22:54.815923 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 03:22:55.135069 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 03:22:55.158845 (dockerd)[1792]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 03:22:55.402637 dockerd[1792]: time="2025-05-27T03:22:55.402467465Z" level=info msg="Starting up"
May 27 03:22:55.404287 dockerd[1792]: time="2025-05-27T03:22:55.404237885Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 03:22:56.422455 dockerd[1792]: time="2025-05-27T03:22:56.422389984Z" level=info msg="Loading containers: start."
May 27 03:22:56.434249 kernel: Initializing XFRM netlink socket
May 27 03:22:56.701514 systemd-networkd[1484]: docker0: Link UP
May 27 03:22:56.706820 dockerd[1792]: time="2025-05-27T03:22:56.706774396Z" level=info msg="Loading containers: done."
May 27 03:22:56.725022 dockerd[1792]: time="2025-05-27T03:22:56.724971893Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 03:22:56.725179 dockerd[1792]: time="2025-05-27T03:22:56.725054688Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 03:22:56.725179 dockerd[1792]: time="2025-05-27T03:22:56.725161859Z" level=info msg="Initializing buildkit"
May 27 03:22:56.759873 dockerd[1792]: time="2025-05-27T03:22:56.759806470Z" level=info msg="Completed buildkit initialization"
May 27 03:22:56.763950 dockerd[1792]: time="2025-05-27T03:22:56.763925024Z" level=info msg="Daemon has completed initialization"
May 27 03:22:56.764029 dockerd[1792]: time="2025-05-27T03:22:56.763989334Z" level=info msg="API listen on /run/docker.sock"
May 27 03:22:56.764237 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 03:22:57.317261 containerd[1579]: time="2025-05-27T03:22:57.317194041Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 27 03:22:57.998481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911152409.mount: Deactivated successfully.
May 27 03:22:59.533160 containerd[1579]: time="2025-05-27T03:22:59.533095848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:22:59.534072 containerd[1579]: time="2025-05-27T03:22:59.534013559Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403"
May 27 03:22:59.536052 containerd[1579]: time="2025-05-27T03:22:59.536001687Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:22:59.538474 containerd[1579]: time="2025-05-27T03:22:59.538447314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:22:59.539491 containerd[1579]: time="2025-05-27T03:22:59.539458500Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 2.222204537s"
May 27 03:22:59.539550 containerd[1579]: time="2025-05-27T03:22:59.539508133Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\""
May 27 03:22:59.540133 containerd[1579]: time="2025-05-27T03:22:59.540105954Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 27 03:23:01.131867 containerd[1579]: time="2025-05-27T03:23:01.131753211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:01.181404 containerd[1579]: time="2025-05-27T03:23:01.181311841Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390"
May 27 03:23:01.222041 containerd[1579]: time="2025-05-27T03:23:01.221972524Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:01.270643 containerd[1579]: time="2025-05-27T03:23:01.270560404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:23:01.271676 containerd[1579]: time="2025-05-27T03:23:01.271625511Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.731437474s"
May 27 03:23:01.271676 containerd[1579]: time="2025-05-27T03:23:01.271665446Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\""
May 27 03:23:01.272290 containerd[1579]: time="2025-05-27T03:23:01.272267826Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 27 03:23:01.505455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 03:23:01.507653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:23:01.746577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:23:01.771842 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:23:02.024286 kubelet[2067]: E0527 03:23:02.024028 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:23:02.032160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:23:02.032405 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:23:02.032773 systemd[1]: kubelet.service: Consumed 278ms CPU time, 110.5M memory peak. May 27 03:23:03.460512 containerd[1579]: time="2025-05-27T03:23:03.460451000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:03.461415 containerd[1579]: time="2025-05-27T03:23:03.461384932Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960" May 27 03:23:03.462857 containerd[1579]: time="2025-05-27T03:23:03.462813341Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:03.465296 containerd[1579]: time="2025-05-27T03:23:03.465263045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:03.466189 containerd[1579]: time="2025-05-27T03:23:03.466156891Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id 
\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 2.193786032s" May 27 03:23:03.466189 containerd[1579]: time="2025-05-27T03:23:03.466186577Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 27 03:23:03.466744 containerd[1579]: time="2025-05-27T03:23:03.466672278Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 03:23:04.389958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333023344.mount: Deactivated successfully. May 27 03:23:04.707776 containerd[1579]: time="2025-05-27T03:23:04.707594560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:04.709049 containerd[1579]: time="2025-05-27T03:23:04.709001248Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 27 03:23:04.710478 containerd[1579]: time="2025-05-27T03:23:04.710451247Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:04.712497 containerd[1579]: time="2025-05-27T03:23:04.712451568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:04.713111 containerd[1579]: time="2025-05-27T03:23:04.713058917Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.246318322s" May 27 03:23:04.713111 containerd[1579]: time="2025-05-27T03:23:04.713100245Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 27 03:23:04.713684 containerd[1579]: time="2025-05-27T03:23:04.713636881Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 03:23:05.231457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3245314945.mount: Deactivated successfully. May 27 03:23:06.306386 containerd[1579]: time="2025-05-27T03:23:06.306329829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:06.307469 containerd[1579]: time="2025-05-27T03:23:06.307399325Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" May 27 03:23:06.308613 containerd[1579]: time="2025-05-27T03:23:06.308577895Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:06.311274 containerd[1579]: time="2025-05-27T03:23:06.311235379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:06.312138 containerd[1579]: time="2025-05-27T03:23:06.312109348Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.598443012s" May 27 03:23:06.312138 containerd[1579]: time="2025-05-27T03:23:06.312135306Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 27 03:23:06.312692 containerd[1579]: time="2025-05-27T03:23:06.312636005Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:23:07.834340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount197222643.mount: Deactivated successfully. May 27 03:23:07.886760 containerd[1579]: time="2025-05-27T03:23:07.886680130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:23:07.888442 containerd[1579]: time="2025-05-27T03:23:07.888403161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 03:23:07.890037 containerd[1579]: time="2025-05-27T03:23:07.889961474Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:23:07.892464 containerd[1579]: time="2025-05-27T03:23:07.892424282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:23:07.893055 containerd[1579]: time="2025-05-27T03:23:07.893022574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.580332288s" May 27 03:23:07.893055 containerd[1579]: time="2025-05-27T03:23:07.893049956Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 03:23:07.893597 containerd[1579]: time="2025-05-27T03:23:07.893573918Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 03:23:10.026617 containerd[1579]: time="2025-05-27T03:23:10.026549098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:10.029737 containerd[1579]: time="2025-05-27T03:23:10.029688035Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 27 03:23:10.031297 containerd[1579]: time="2025-05-27T03:23:10.031259021Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:10.034041 containerd[1579]: time="2025-05-27T03:23:10.033994641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:10.035126 containerd[1579]: time="2025-05-27T03:23:10.035093682Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.141493916s" May 27 03:23:10.035194 containerd[1579]: time="2025-05-27T03:23:10.035130101Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 27 03:23:12.255048 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 03:23:12.256891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:12.556346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:12.577802 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:23:12.645777 kubelet[2187]: E0527 03:23:12.645693 2187 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:23:12.650858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:23:12.651119 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:23:12.651633 systemd[1]: kubelet.service: Consumed 246ms CPU time, 111M memory peak. May 27 03:23:13.131342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:13.131577 systemd[1]: kubelet.service: Consumed 246ms CPU time, 111M memory peak. May 27 03:23:13.134705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:13.164055 systemd[1]: Reload requested from client PID 2204 ('systemctl') (unit session-7.scope)... May 27 03:23:13.164076 systemd[1]: Reloading... May 27 03:23:13.263674 zram_generator::config[2250]: No configuration found. 
May 27 03:23:13.888665 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:23:14.013136 systemd[1]: Reloading finished in 848 ms. May 27 03:23:14.080172 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:23:14.080312 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:23:14.080655 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:14.080709 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.2M memory peak. May 27 03:23:14.082686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:14.259232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:14.274598 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:23:14.318616 kubelet[2295]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:23:14.318616 kubelet[2295]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:23:14.318616 kubelet[2295]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 03:23:14.319080 kubelet[2295]: I0527 03:23:14.318655 2295 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:23:14.472900 kubelet[2295]: I0527 03:23:14.472848 2295 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 03:23:14.472900 kubelet[2295]: I0527 03:23:14.472880 2295 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:23:14.473160 kubelet[2295]: I0527 03:23:14.473136 2295 server.go:956] "Client rotation is on, will bootstrap in background" May 27 03:23:14.502807 kubelet[2295]: E0527 03:23:14.502763 2295 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 03:23:14.503773 kubelet[2295]: I0527 03:23:14.503694 2295 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:23:14.510352 kubelet[2295]: I0527 03:23:14.510250 2295 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:23:14.516358 kubelet[2295]: I0527 03:23:14.516328 2295 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:23:14.516630 kubelet[2295]: I0527 03:23:14.516591 2295 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:23:14.516792 kubelet[2295]: I0527 03:23:14.516619 2295 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:23:14.516792 kubelet[2295]: I0527 03:23:14.516792 2295 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:23:14.517013 
kubelet[2295]: I0527 03:23:14.516800 2295 container_manager_linux.go:303] "Creating device plugin manager" May 27 03:23:14.517580 kubelet[2295]: I0527 03:23:14.517554 2295 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:14.519796 kubelet[2295]: I0527 03:23:14.519760 2295 kubelet.go:480] "Attempting to sync node with API server" May 27 03:23:14.519796 kubelet[2295]: I0527 03:23:14.519783 2295 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:23:14.519869 kubelet[2295]: I0527 03:23:14.519811 2295 kubelet.go:386] "Adding apiserver pod source" May 27 03:23:14.519869 kubelet[2295]: I0527 03:23:14.519835 2295 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:23:14.525870 kubelet[2295]: I0527 03:23:14.525834 2295 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:23:14.526386 kubelet[2295]: I0527 03:23:14.526364 2295 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 03:23:14.527565 kubelet[2295]: W0527 03:23:14.527544 2295 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 27 03:23:14.527628 kubelet[2295]: E0527 03:23:14.527561 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 03:23:14.527735 kubelet[2295]: E0527 03:23:14.527706 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 03:23:14.530880 kubelet[2295]: I0527 03:23:14.530855 2295 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:23:14.530947 kubelet[2295]: I0527 03:23:14.530909 2295 server.go:1289] "Started kubelet" May 27 03:23:14.531513 kubelet[2295]: I0527 03:23:14.531459 2295 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:23:14.532419 kubelet[2295]: I0527 03:23:14.532400 2295 server.go:317] "Adding debug handlers to kubelet server" May 27 03:23:14.533022 kubelet[2295]: I0527 03:23:14.532996 2295 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:23:14.533572 kubelet[2295]: I0527 03:23:14.533495 2295 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:23:14.533875 kubelet[2295]: I0527 03:23:14.533850 2295 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:23:14.534987 kubelet[2295]: I0527 03:23:14.534964 2295 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:23:14.535985 kubelet[2295]: 
E0527 03:23:14.535951 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:14.535985 kubelet[2295]: E0527 03:23:14.534835 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18434450e21362d3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:23:14.530878163 +0000 UTC m=+0.251772604,LastTimestamp:2025-05-27 03:23:14.530878163 +0000 UTC m=+0.251772604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:23:14.535985 kubelet[2295]: I0527 03:23:14.535981 2295 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:23:14.541316 kubelet[2295]: E0527 03:23:14.538685 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms" May 27 03:23:14.541316 kubelet[2295]: E0527 03:23:14.538789 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 03:23:14.541316 kubelet[2295]: I0527 03:23:14.539055 2295 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:23:14.541316 kubelet[2295]: E0527 03:23:14.540942 2295 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:23:14.541316 kubelet[2295]: I0527 03:23:14.541148 2295 factory.go:223] Registration of the containerd container factory successfully May 27 03:23:14.541316 kubelet[2295]: I0527 03:23:14.541157 2295 factory.go:223] Registration of the systemd container factory successfully May 27 03:23:14.548912 kubelet[2295]: I0527 03:23:14.548886 2295 reconciler.go:26] "Reconciler: start to sync state" May 27 03:23:14.549309 kubelet[2295]: I0527 03:23:14.549269 2295 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:23:14.560153 kubelet[2295]: I0527 03:23:14.560105 2295 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 03:23:14.561592 kubelet[2295]: I0527 03:23:14.561566 2295 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 03:23:14.561592 kubelet[2295]: I0527 03:23:14.561589 2295 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 03:23:14.561667 kubelet[2295]: I0527 03:23:14.561604 2295 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:23:14.561667 kubelet[2295]: I0527 03:23:14.561617 2295 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:23:14.561667 kubelet[2295]: I0527 03:23:14.561634 2295 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:14.562549 kubelet[2295]: E0527 03:23:14.562276 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 03:23:14.562977 kubelet[2295]: I0527 03:23:14.561608 2295 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 03:23:14.562977 kubelet[2295]: I0527 03:23:14.562696 2295 kubelet.go:2436] "Starting kubelet main sync loop" May 27 03:23:14.562977 kubelet[2295]: E0527 03:23:14.562759 2295 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:23:14.636631 kubelet[2295]: E0527 03:23:14.636519 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:14.663162 kubelet[2295]: E0527 03:23:14.663107 2295 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:23:14.737481 kubelet[2295]: E0527 03:23:14.737427 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:14.740171 kubelet[2295]: E0527 03:23:14.740123 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms" May 27 03:23:14.838676 kubelet[2295]: E0527 03:23:14.838498 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:14.864160 kubelet[2295]: E0527 03:23:14.864099 2295 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:23:14.939681 kubelet[2295]: E0527 03:23:14.939633 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:15.040079 kubelet[2295]: E0527 03:23:15.039969 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:15.141111 kubelet[2295]: E0527 03:23:15.140952 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not 
found" May 27 03:23:15.141519 kubelet[2295]: E0527 03:23:15.141477 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms" May 27 03:23:15.241415 kubelet[2295]: E0527 03:23:15.241360 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:15.265083 kubelet[2295]: E0527 03:23:15.265005 2295 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:23:15.341672 kubelet[2295]: E0527 03:23:15.341603 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:15.426972 kubelet[2295]: E0527 03:23:15.426823 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 03:23:15.442662 kubelet[2295]: E0527 03:23:15.442600 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:15.543373 kubelet[2295]: E0527 03:23:15.543320 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:15.643473 kubelet[2295]: E0527 03:23:15.643401 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:15.740841 kubelet[2295]: E0527 03:23:15.740706 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 27 03:23:15.744035 kubelet[2295]: E0527 03:23:15.744005 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:15.754762 kubelet[2295]: E0527 03:23:15.754709 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 27 03:23:15.799477 kubelet[2295]: I0527 03:23:15.799424 2295 policy_none.go:49] "None policy: Start" May 27 03:23:15.799477 kubelet[2295]: I0527 03:23:15.799469 2295 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:23:15.799477 kubelet[2295]: I0527 03:23:15.799488 2295 state_mem.go:35] "Initializing new in-memory state store" May 27 03:23:15.810950 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 03:23:15.829945 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:23:15.833652 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 27 03:23:15.844744 kubelet[2295]: E0527 03:23:15.844720 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:15.854121 kubelet[2295]: E0527 03:23:15.854075 2295 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 03:23:15.854372 kubelet[2295]: I0527 03:23:15.854350 2295 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:23:15.854432 kubelet[2295]: I0527 03:23:15.854366 2295 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:23:15.854693 kubelet[2295]: I0527 03:23:15.854617 2295 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:23:15.855567 kubelet[2295]: E0527 03:23:15.855546 2295 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:23:15.855677 kubelet[2295]: E0527 03:23:15.855657 2295 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 03:23:15.868115 kubelet[2295]: E0527 03:23:15.868090 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 03:23:15.942272 kubelet[2295]: E0527 03:23:15.942230 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" May 27 03:23:15.955700 kubelet[2295]: I0527 03:23:15.955657 2295 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:23:15.956048 kubelet[2295]: E0527 03:23:15.956013 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" May 27 03:23:16.076144 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice. May 27 03:23:16.094314 kubelet[2295]: E0527 03:23:16.094260 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:23:16.097416 systemd[1]: Created slice kubepods-burstable-pode07898026fbb39e4af8f144aca1230f6.slice - libcontainer container kubepods-burstable-pode07898026fbb39e4af8f144aca1230f6.slice. May 27 03:23:16.109587 kubelet[2295]: E0527 03:23:16.109548 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:23:16.112539 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice. 
May 27 03:23:16.114398 kubelet[2295]: E0527 03:23:16.114363 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:23:16.158189 kubelet[2295]: I0527 03:23:16.158140 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:23:16.158613 kubelet[2295]: E0527 03:23:16.158572 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" May 27 03:23:16.158613 kubelet[2295]: I0527 03:23:16.158585 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e07898026fbb39e4af8f144aca1230f6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07898026fbb39e4af8f144aca1230f6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:23:16.158712 kubelet[2295]: I0527 03:23:16.158641 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e07898026fbb39e4af8f144aca1230f6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e07898026fbb39e4af8f144aca1230f6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:23:16.158712 kubelet[2295]: I0527 03:23:16.158668 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:23:16.158712 kubelet[2295]: I0527 03:23:16.158685 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:23:16.158712 kubelet[2295]: I0527 03:23:16.158699 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:23:16.158840 kubelet[2295]: I0527 03:23:16.158721 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 03:23:16.158840 kubelet[2295]: I0527 03:23:16.158758 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e07898026fbb39e4af8f144aca1230f6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07898026fbb39e4af8f144aca1230f6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:23:16.158840 kubelet[2295]: I0527 03:23:16.158773 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:23:16.158840 kubelet[2295]: I0527 03:23:16.158801 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:23:16.396535 containerd[1579]: time="2025-05-27T03:23:16.396396025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 27 03:23:16.411273 containerd[1579]: time="2025-05-27T03:23:16.411230154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e07898026fbb39e4af8f144aca1230f6,Namespace:kube-system,Attempt:0,}" May 27 03:23:16.415945 containerd[1579]: time="2025-05-27T03:23:16.415915050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 27 03:23:16.425160 containerd[1579]: time="2025-05-27T03:23:16.425113290Z" level=info msg="connecting to shim 721f87e69b079b2224a25d277a4cf3a3fded98f792ed66d004145e46f9c90629" address="unix:///run/containerd/s/c2c9bd78785e7f668b454b2e4692505b474ea6a3475cfe36bb4595c5d81ea164" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:16.451231 containerd[1579]: time="2025-05-27T03:23:16.451131734Z" level=info msg="connecting to shim 25330b6770342a161e86b1193a0ddd303ad7978db58a3b94331dce529fb89368" address="unix:///run/containerd/s/ba4c58b521ff2a9874043d281bc998fc06a6dcbeaf26d4cc89f444a8ef4d35b0" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:16.452340 systemd[1]: Started cri-containerd-721f87e69b079b2224a25d277a4cf3a3fded98f792ed66d004145e46f9c90629.scope - libcontainer container 721f87e69b079b2224a25d277a4cf3a3fded98f792ed66d004145e46f9c90629. 
May 27 03:23:16.459594 containerd[1579]: time="2025-05-27T03:23:16.459394660Z" level=info msg="connecting to shim 14bf18a3e6a99fd1aeb33e83efb2e0f9bcde724f2ee2a414ed592bc3b3d59793" address="unix:///run/containerd/s/13f20ee42cd840ca16e61ac55f6e998757b6a3306b873bbd9964ff46511caa76" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:16.482347 systemd[1]: Started cri-containerd-25330b6770342a161e86b1193a0ddd303ad7978db58a3b94331dce529fb89368.scope - libcontainer container 25330b6770342a161e86b1193a0ddd303ad7978db58a3b94331dce529fb89368. May 27 03:23:16.486448 systemd[1]: Started cri-containerd-14bf18a3e6a99fd1aeb33e83efb2e0f9bcde724f2ee2a414ed592bc3b3d59793.scope - libcontainer container 14bf18a3e6a99fd1aeb33e83efb2e0f9bcde724f2ee2a414ed592bc3b3d59793. May 27 03:23:16.511460 containerd[1579]: time="2025-05-27T03:23:16.511413103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"721f87e69b079b2224a25d277a4cf3a3fded98f792ed66d004145e46f9c90629\"" May 27 03:23:16.519718 containerd[1579]: time="2025-05-27T03:23:16.519656933Z" level=info msg="CreateContainer within sandbox \"721f87e69b079b2224a25d277a4cf3a3fded98f792ed66d004145e46f9c90629\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:23:16.531946 containerd[1579]: time="2025-05-27T03:23:16.531848837Z" level=info msg="Container 6fd71b8df0840d6bf42139b1fa7ad4783ac157e23150f6976d658c9ed4dd1c33: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:16.535121 containerd[1579]: time="2025-05-27T03:23:16.535082602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"14bf18a3e6a99fd1aeb33e83efb2e0f9bcde724f2ee2a414ed592bc3b3d59793\"" May 27 03:23:16.541725 containerd[1579]: time="2025-05-27T03:23:16.541524032Z" level=info 
msg="CreateContainer within sandbox \"14bf18a3e6a99fd1aeb33e83efb2e0f9bcde724f2ee2a414ed592bc3b3d59793\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:23:16.545310 containerd[1579]: time="2025-05-27T03:23:16.545268855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e07898026fbb39e4af8f144aca1230f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"25330b6770342a161e86b1193a0ddd303ad7978db58a3b94331dce529fb89368\"" May 27 03:23:16.546142 containerd[1579]: time="2025-05-27T03:23:16.546110753Z" level=info msg="CreateContainer within sandbox \"721f87e69b079b2224a25d277a4cf3a3fded98f792ed66d004145e46f9c90629\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6fd71b8df0840d6bf42139b1fa7ad4783ac157e23150f6976d658c9ed4dd1c33\"" May 27 03:23:16.546524 containerd[1579]: time="2025-05-27T03:23:16.546502047Z" level=info msg="StartContainer for \"6fd71b8df0840d6bf42139b1fa7ad4783ac157e23150f6976d658c9ed4dd1c33\"" May 27 03:23:16.547419 containerd[1579]: time="2025-05-27T03:23:16.547389411Z" level=info msg="connecting to shim 6fd71b8df0840d6bf42139b1fa7ad4783ac157e23150f6976d658c9ed4dd1c33" address="unix:///run/containerd/s/c2c9bd78785e7f668b454b2e4692505b474ea6a3475cfe36bb4595c5d81ea164" protocol=ttrpc version=3 May 27 03:23:16.553086 containerd[1579]: time="2025-05-27T03:23:16.553059715Z" level=info msg="Container 6de94454fa40e59742076bfaeef64a663f0d886e889a1222ffeb6f9a0c24a8f7: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:16.553443 containerd[1579]: time="2025-05-27T03:23:16.553411465Z" level=info msg="CreateContainer within sandbox \"25330b6770342a161e86b1193a0ddd303ad7978db58a3b94331dce529fb89368\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:23:16.560777 kubelet[2295]: I0527 03:23:16.560747 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:23:16.561470 kubelet[2295]: E0527 
03:23:16.561248 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" May 27 03:23:16.568366 systemd[1]: Started cri-containerd-6fd71b8df0840d6bf42139b1fa7ad4783ac157e23150f6976d658c9ed4dd1c33.scope - libcontainer container 6fd71b8df0840d6bf42139b1fa7ad4783ac157e23150f6976d658c9ed4dd1c33. May 27 03:23:16.568687 containerd[1579]: time="2025-05-27T03:23:16.568387610Z" level=info msg="CreateContainer within sandbox \"14bf18a3e6a99fd1aeb33e83efb2e0f9bcde724f2ee2a414ed592bc3b3d59793\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6de94454fa40e59742076bfaeef64a663f0d886e889a1222ffeb6f9a0c24a8f7\"" May 27 03:23:16.569802 containerd[1579]: time="2025-05-27T03:23:16.569772918Z" level=info msg="StartContainer for \"6de94454fa40e59742076bfaeef64a663f0d886e889a1222ffeb6f9a0c24a8f7\"" May 27 03:23:16.570253 containerd[1579]: time="2025-05-27T03:23:16.570222812Z" level=info msg="Container 30ec6a9e676f044f19d25a8a39f850514e3c1e12ee0736683d9cad834660f729: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:16.574986 containerd[1579]: time="2025-05-27T03:23:16.574941461Z" level=info msg="connecting to shim 6de94454fa40e59742076bfaeef64a663f0d886e889a1222ffeb6f9a0c24a8f7" address="unix:///run/containerd/s/13f20ee42cd840ca16e61ac55f6e998757b6a3306b873bbd9964ff46511caa76" protocol=ttrpc version=3 May 27 03:23:16.583658 containerd[1579]: time="2025-05-27T03:23:16.583611982Z" level=info msg="CreateContainer within sandbox \"25330b6770342a161e86b1193a0ddd303ad7978db58a3b94331dce529fb89368\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30ec6a9e676f044f19d25a8a39f850514e3c1e12ee0736683d9cad834660f729\"" May 27 03:23:16.584597 containerd[1579]: time="2025-05-27T03:23:16.584561743Z" level=info msg="StartContainer for 
\"30ec6a9e676f044f19d25a8a39f850514e3c1e12ee0736683d9cad834660f729\"" May 27 03:23:16.585091 kubelet[2295]: E0527 03:23:16.585029 2295 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 27 03:23:16.586423 containerd[1579]: time="2025-05-27T03:23:16.586385443Z" level=info msg="connecting to shim 30ec6a9e676f044f19d25a8a39f850514e3c1e12ee0736683d9cad834660f729" address="unix:///run/containerd/s/ba4c58b521ff2a9874043d281bc998fc06a6dcbeaf26d4cc89f444a8ef4d35b0" protocol=ttrpc version=3 May 27 03:23:16.597337 systemd[1]: Started cri-containerd-6de94454fa40e59742076bfaeef64a663f0d886e889a1222ffeb6f9a0c24a8f7.scope - libcontainer container 6de94454fa40e59742076bfaeef64a663f0d886e889a1222ffeb6f9a0c24a8f7. May 27 03:23:16.602608 systemd[1]: Started cri-containerd-30ec6a9e676f044f19d25a8a39f850514e3c1e12ee0736683d9cad834660f729.scope - libcontainer container 30ec6a9e676f044f19d25a8a39f850514e3c1e12ee0736683d9cad834660f729. 
May 27 03:23:16.622833 containerd[1579]: time="2025-05-27T03:23:16.622755550Z" level=info msg="StartContainer for \"6fd71b8df0840d6bf42139b1fa7ad4783ac157e23150f6976d658c9ed4dd1c33\" returns successfully" May 27 03:23:16.648871 containerd[1579]: time="2025-05-27T03:23:16.648737866Z" level=info msg="StartContainer for \"6de94454fa40e59742076bfaeef64a663f0d886e889a1222ffeb6f9a0c24a8f7\" returns successfully" May 27 03:23:16.659531 containerd[1579]: time="2025-05-27T03:23:16.659495691Z" level=info msg="StartContainer for \"30ec6a9e676f044f19d25a8a39f850514e3c1e12ee0736683d9cad834660f729\" returns successfully" May 27 03:23:17.366396 kubelet[2295]: I0527 03:23:17.366364 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:23:17.583982 kubelet[2295]: E0527 03:23:17.583939 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:23:17.585935 kubelet[2295]: E0527 03:23:17.585903 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:23:17.588560 kubelet[2295]: E0527 03:23:17.588535 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:23:17.817575 kubelet[2295]: E0527 03:23:17.817541 2295 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 03:23:17.917223 kubelet[2295]: I0527 03:23:17.917169 2295 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:23:17.917223 kubelet[2295]: E0527 03:23:17.917231 2295 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 27 03:23:17.926919 kubelet[2295]: E0527 
03:23:17.926869 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:18.027934 kubelet[2295]: E0527 03:23:18.027881 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:18.128925 kubelet[2295]: E0527 03:23:18.128822 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:18.229309 kubelet[2295]: E0527 03:23:18.229254 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:18.330402 kubelet[2295]: E0527 03:23:18.330345 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:18.430872 kubelet[2295]: E0527 03:23:18.430768 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:18.531535 kubelet[2295]: E0527 03:23:18.531494 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:23:18.590820 kubelet[2295]: E0527 03:23:18.590793 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:23:18.591273 kubelet[2295]: E0527 03:23:18.590947 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:23:18.591522 kubelet[2295]: E0527 03:23:18.591505 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:23:18.631719 kubelet[2295]: E0527 03:23:18.631642 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 
03:23:18.739479 kubelet[2295]: I0527 03:23:18.739322 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:23:18.745985 kubelet[2295]: E0527 03:23:18.745952 2295 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 27 03:23:18.745985 kubelet[2295]: I0527 03:23:18.745980 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:23:18.747858 kubelet[2295]: E0527 03:23:18.747835 2295 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 03:23:18.747858 kubelet[2295]: I0527 03:23:18.747853 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:23:18.749278 kubelet[2295]: E0527 03:23:18.749249 2295 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 03:23:19.529697 kubelet[2295]: I0527 03:23:19.529640 2295 apiserver.go:52] "Watching apiserver" May 27 03:23:19.550348 kubelet[2295]: I0527 03:23:19.550297 2295 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:23:19.591639 kubelet[2295]: I0527 03:23:19.591607 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:23:19.592078 kubelet[2295]: I0527 03:23:19.591708 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:23:20.208777 systemd[1]: Reload requested from client PID 2578 ('systemctl') (unit 
session-7.scope)... May 27 03:23:20.208795 systemd[1]: Reloading... May 27 03:23:20.277229 zram_generator::config[2624]: No configuration found. May 27 03:23:20.373838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:23:20.518037 systemd[1]: Reloading finished in 308 ms. May 27 03:23:20.554448 kubelet[2295]: I0527 03:23:20.554322 2295 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:23:20.554492 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:20.584675 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:23:20.585000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:20.585066 systemd[1]: kubelet.service: Consumed 730ms CPU time, 130.6M memory peak. May 27 03:23:20.587088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:23:20.813786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:23:20.823523 (kubelet)[2666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:23:20.863751 kubelet[2666]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:23:20.863751 kubelet[2666]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:23:20.863751 kubelet[2666]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:23:20.864256 kubelet[2666]: I0527 03:23:20.863854 2666 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:23:20.870381 kubelet[2666]: I0527 03:23:20.870271 2666 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 03:23:20.870381 kubelet[2666]: I0527 03:23:20.870293 2666 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:23:20.870624 kubelet[2666]: I0527 03:23:20.870595 2666 server.go:956] "Client rotation is on, will bootstrap in background" May 27 03:23:20.871782 kubelet[2666]: I0527 03:23:20.871754 2666 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 03:23:20.874354 kubelet[2666]: I0527 03:23:20.874128 2666 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:23:20.879820 kubelet[2666]: I0527 03:23:20.879772 2666 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:23:20.886095 kubelet[2666]: I0527 03:23:20.886056 2666 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:23:20.886383 kubelet[2666]: I0527 03:23:20.886328 2666 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:23:20.886550 kubelet[2666]: I0527 03:23:20.886361 2666 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:23:20.886550 kubelet[2666]: I0527 03:23:20.886549 2666 topology_manager.go:138] "Creating topology manager with none policy" May 27 03:23:20.886682 
kubelet[2666]: I0527 03:23:20.886561 2666 container_manager_linux.go:303] "Creating device plugin manager" May 27 03:23:20.886682 kubelet[2666]: I0527 03:23:20.886611 2666 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:20.886845 kubelet[2666]: I0527 03:23:20.886812 2666 kubelet.go:480] "Attempting to sync node with API server" May 27 03:23:20.886845 kubelet[2666]: I0527 03:23:20.886832 2666 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:23:20.886936 kubelet[2666]: I0527 03:23:20.886863 2666 kubelet.go:386] "Adding apiserver pod source" May 27 03:23:20.886936 kubelet[2666]: I0527 03:23:20.886886 2666 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:23:20.887921 kubelet[2666]: I0527 03:23:20.887895 2666 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:23:20.888445 kubelet[2666]: I0527 03:23:20.888426 2666 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 03:23:20.891555 kubelet[2666]: I0527 03:23:20.891526 2666 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:23:20.891622 kubelet[2666]: I0527 03:23:20.891583 2666 server.go:1289] "Started kubelet" May 27 03:23:20.892940 kubelet[2666]: I0527 03:23:20.892893 2666 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:23:20.894156 kubelet[2666]: I0527 03:23:20.893599 2666 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:23:20.894156 kubelet[2666]: I0527 03:23:20.893993 2666 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:23:20.894156 kubelet[2666]: I0527 03:23:20.894139 2666 server.go:317] "Adding debug handlers to kubelet server" May 27 03:23:20.902633 
kubelet[2666]: E0527 03:23:20.902590 2666 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:23:20.903811 kubelet[2666]: I0527 03:23:20.903778 2666 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:23:20.905190 kubelet[2666]: I0527 03:23:20.904943 2666 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:23:20.905730 kubelet[2666]: I0527 03:23:20.905698 2666 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:23:20.905912 kubelet[2666]: I0527 03:23:20.905865 2666 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:23:20.906353 kubelet[2666]: I0527 03:23:20.906282 2666 reconciler.go:26] "Reconciler: start to sync state" May 27 03:23:20.909707 kubelet[2666]: I0527 03:23:20.909528 2666 factory.go:223] Registration of the systemd container factory successfully May 27 03:23:20.909969 kubelet[2666]: I0527 03:23:20.909926 2666 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:23:20.911862 kubelet[2666]: I0527 03:23:20.911840 2666 factory.go:223] Registration of the containerd container factory successfully May 27 03:23:20.919919 kubelet[2666]: I0527 03:23:20.919856 2666 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 03:23:20.921473 kubelet[2666]: I0527 03:23:20.921261 2666 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 27 03:23:20.921473 kubelet[2666]: I0527 03:23:20.921280 2666 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 03:23:20.921473 kubelet[2666]: I0527 03:23:20.921297 2666 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:23:20.921473 kubelet[2666]: I0527 03:23:20.921304 2666 kubelet.go:2436] "Starting kubelet main sync loop" May 27 03:23:20.921473 kubelet[2666]: E0527 03:23:20.921345 2666 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:23:20.950934 kubelet[2666]: I0527 03:23:20.950901 2666 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:23:20.950934 kubelet[2666]: I0527 03:23:20.950920 2666 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:23:20.950934 kubelet[2666]: I0527 03:23:20.950937 2666 state_mem.go:36] "Initialized new in-memory state store" May 27 03:23:20.951154 kubelet[2666]: I0527 03:23:20.951068 2666 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:23:20.951154 kubelet[2666]: I0527 03:23:20.951079 2666 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:23:20.951154 kubelet[2666]: I0527 03:23:20.951097 2666 policy_none.go:49] "None policy: Start" May 27 03:23:20.951154 kubelet[2666]: I0527 03:23:20.951108 2666 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:23:20.951154 kubelet[2666]: I0527 03:23:20.951118 2666 state_mem.go:35] "Initializing new in-memory state store" May 27 03:23:20.951289 kubelet[2666]: I0527 03:23:20.951227 2666 state_mem.go:75] "Updated machine memory state" May 27 03:23:20.956440 kubelet[2666]: E0527 03:23:20.956413 2666 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 03:23:20.956605 kubelet[2666]: I0527 
03:23:20.956587 2666 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:23:20.956641 kubelet[2666]: I0527 03:23:20.956606 2666 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:23:20.956814 kubelet[2666]: I0527 03:23:20.956788 2666 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:23:20.957952 kubelet[2666]: E0527 03:23:20.957923 2666 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:23:21.022173 kubelet[2666]: I0527 03:23:21.022132 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:23:21.022399 kubelet[2666]: I0527 03:23:21.022253 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:23:21.022399 kubelet[2666]: I0527 03:23:21.022253 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:23:21.027490 kubelet[2666]: E0527 03:23:21.027415 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:23:21.027490 kubelet[2666]: E0527 03:23:21.027460 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:23:21.060873 kubelet[2666]: I0527 03:23:21.060831 2666 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:23:21.067296 kubelet[2666]: I0527 03:23:21.067140 2666 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 03:23:21.067296 kubelet[2666]: I0527 03:23:21.067222 2666 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:23:21.108119 kubelet[2666]: I0527 
03:23:21.108069 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:23:21.108119 kubelet[2666]: I0527 03:23:21.108112 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:23:21.108119 kubelet[2666]: I0527 03:23:21.108130 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:23:21.108119 kubelet[2666]: I0527 03:23:21.108145 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:23:21.108404 kubelet[2666]: I0527 03:23:21.108162 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 
03:23:21.108404 kubelet[2666]: I0527 03:23:21.108182 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e07898026fbb39e4af8f144aca1230f6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07898026fbb39e4af8f144aca1230f6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:23:21.108404 kubelet[2666]: I0527 03:23:21.108197 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 03:23:21.108404 kubelet[2666]: I0527 03:23:21.108268 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e07898026fbb39e4af8f144aca1230f6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07898026fbb39e4af8f144aca1230f6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:23:21.108404 kubelet[2666]: I0527 03:23:21.108309 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e07898026fbb39e4af8f144aca1230f6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e07898026fbb39e4af8f144aca1230f6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:23:21.888311 kubelet[2666]: I0527 03:23:21.888259 2666 apiserver.go:52] "Watching apiserver" May 27 03:23:21.935229 kubelet[2666]: I0527 03:23:21.935180 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:23:22.153606 kubelet[2666]: I0527 03:23:22.153473 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 
03:23:22.219947 kubelet[2666]: E0527 03:23:22.219874 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:23:22.238099 kubelet[2666]: E0527 03:23:22.238040 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:23:22.406218 kubelet[2666]: I0527 03:23:22.406088 2666 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:23:22.446662 kubelet[2666]: I0527 03:23:22.446605 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.4465901 podStartE2EDuration="3.4465901s" podCreationTimestamp="2025-05-27 03:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:22.446461625 +0000 UTC m=+1.619010501" watchObservedRunningTime="2025-05-27 03:23:22.4465901 +0000 UTC m=+1.619138976" May 27 03:23:22.590223 kubelet[2666]: I0527 03:23:22.590120 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.590103071 podStartE2EDuration="3.590103071s" podCreationTimestamp="2025-05-27 03:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:22.488125366 +0000 UTC m=+1.660674242" watchObservedRunningTime="2025-05-27 03:23:22.590103071 +0000 UTC m=+1.762651947" May 27 03:23:22.936764 kubelet[2666]: I0527 03:23:22.936729 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:23:22.942227 kubelet[2666]: E0527 03:23:22.942166 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 03:23:27.003289 kubelet[2666]: I0527 03:23:27.003256 2666 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:23:27.004235 containerd[1579]: time="2025-05-27T03:23:27.004179052Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 03:23:27.004511 kubelet[2666]: I0527 03:23:27.004486 2666 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:23:27.916434 kubelet[2666]: I0527 03:23:27.916363 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.916349713 podStartE2EDuration="6.916349713s" podCreationTimestamp="2025-05-27 03:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:22.59006623 +0000 UTC m=+1.762615116" watchObservedRunningTime="2025-05-27 03:23:27.916349713 +0000 UTC m=+7.088898589" May 27 03:23:28.000084 systemd[1]: Created slice kubepods-besteffort-poda07cbb09_0b87_476d_801e_4dda226d290f.slice - libcontainer container kubepods-besteffort-poda07cbb09_0b87_476d_801e_4dda226d290f.slice. 
May 27 03:23:28.050227 kubelet[2666]: I0527 03:23:28.050161 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a07cbb09-0b87-476d-801e-4dda226d290f-kube-proxy\") pod \"kube-proxy-mbq7w\" (UID: \"a07cbb09-0b87-476d-801e-4dda226d290f\") " pod="kube-system/kube-proxy-mbq7w" May 27 03:23:28.050649 kubelet[2666]: I0527 03:23:28.050196 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a07cbb09-0b87-476d-801e-4dda226d290f-xtables-lock\") pod \"kube-proxy-mbq7w\" (UID: \"a07cbb09-0b87-476d-801e-4dda226d290f\") " pod="kube-system/kube-proxy-mbq7w" May 27 03:23:28.050649 kubelet[2666]: I0527 03:23:28.050281 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a07cbb09-0b87-476d-801e-4dda226d290f-lib-modules\") pod \"kube-proxy-mbq7w\" (UID: \"a07cbb09-0b87-476d-801e-4dda226d290f\") " pod="kube-system/kube-proxy-mbq7w" May 27 03:23:28.050649 kubelet[2666]: I0527 03:23:28.050295 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6ngm\" (UniqueName: \"kubernetes.io/projected/a07cbb09-0b87-476d-801e-4dda226d290f-kube-api-access-f6ngm\") pod \"kube-proxy-mbq7w\" (UID: \"a07cbb09-0b87-476d-801e-4dda226d290f\") " pod="kube-system/kube-proxy-mbq7w" May 27 03:23:28.117535 systemd[1]: Created slice kubepods-besteffort-pod0903ded7_6db8_49ac_8ee5_f9ab5c891b1f.slice - libcontainer container kubepods-besteffort-pod0903ded7_6db8_49ac_8ee5_f9ab5c891b1f.slice. 
May 27 03:23:28.151064 kubelet[2666]: I0527 03:23:28.151010 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjzt\" (UniqueName: \"kubernetes.io/projected/0903ded7-6db8-49ac-8ee5-f9ab5c891b1f-kube-api-access-fcjzt\") pod \"tigera-operator-844669ff44-f8245\" (UID: \"0903ded7-6db8-49ac-8ee5-f9ab5c891b1f\") " pod="tigera-operator/tigera-operator-844669ff44-f8245" May 27 03:23:28.151064 kubelet[2666]: I0527 03:23:28.151063 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0903ded7-6db8-49ac-8ee5-f9ab5c891b1f-var-lib-calico\") pod \"tigera-operator-844669ff44-f8245\" (UID: \"0903ded7-6db8-49ac-8ee5-f9ab5c891b1f\") " pod="tigera-operator/tigera-operator-844669ff44-f8245" May 27 03:23:28.311997 containerd[1579]: time="2025-05-27T03:23:28.311941035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbq7w,Uid:a07cbb09-0b87-476d-801e-4dda226d290f,Namespace:kube-system,Attempt:0,}" May 27 03:23:28.349415 containerd[1579]: time="2025-05-27T03:23:28.349356313Z" level=info msg="connecting to shim 36462745841fc92858d2684e3999c28c549650200f1ce40c8f6d4e932a984c79" address="unix:///run/containerd/s/998017dfee57e1f0a6bdf68f8cce42a1b607602479ce793caaf726a0e305c669" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:28.380408 systemd[1]: Started cri-containerd-36462745841fc92858d2684e3999c28c549650200f1ce40c8f6d4e932a984c79.scope - libcontainer container 36462745841fc92858d2684e3999c28c549650200f1ce40c8f6d4e932a984c79. 
May 27 03:23:28.410530 containerd[1579]: time="2025-05-27T03:23:28.410465085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbq7w,Uid:a07cbb09-0b87-476d-801e-4dda226d290f,Namespace:kube-system,Attempt:0,} returns sandbox id \"36462745841fc92858d2684e3999c28c549650200f1ce40c8f6d4e932a984c79\"" May 27 03:23:28.417378 containerd[1579]: time="2025-05-27T03:23:28.417336863Z" level=info msg="CreateContainer within sandbox \"36462745841fc92858d2684e3999c28c549650200f1ce40c8f6d4e932a984c79\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:23:28.420870 containerd[1579]: time="2025-05-27T03:23:28.420836407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-f8245,Uid:0903ded7-6db8-49ac-8ee5-f9ab5c891b1f,Namespace:tigera-operator,Attempt:0,}" May 27 03:23:28.435710 containerd[1579]: time="2025-05-27T03:23:28.435676129Z" level=info msg="Container 1df5633f407e23a09c0d62f85696872682aebe9cd541f5f8af037c76b0f0449d: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:28.457508 containerd[1579]: time="2025-05-27T03:23:28.457445229Z" level=info msg="CreateContainer within sandbox \"36462745841fc92858d2684e3999c28c549650200f1ce40c8f6d4e932a984c79\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1df5633f407e23a09c0d62f85696872682aebe9cd541f5f8af037c76b0f0449d\"" May 27 03:23:28.458327 containerd[1579]: time="2025-05-27T03:23:28.458283064Z" level=info msg="StartContainer for \"1df5633f407e23a09c0d62f85696872682aebe9cd541f5f8af037c76b0f0449d\"" May 27 03:23:28.459913 containerd[1579]: time="2025-05-27T03:23:28.459880346Z" level=info msg="connecting to shim 1df5633f407e23a09c0d62f85696872682aebe9cd541f5f8af037c76b0f0449d" address="unix:///run/containerd/s/998017dfee57e1f0a6bdf68f8cce42a1b607602479ce793caaf726a0e305c669" protocol=ttrpc version=3 May 27 03:23:28.468385 containerd[1579]: time="2025-05-27T03:23:28.468327134Z" level=info msg="connecting to shim 
222ddad5803b0f54e6ba6b25c284e7262c93d52a3a4f07095da341f922a390ed" address="unix:///run/containerd/s/b297da6fe26d7cb57e7159e8a67b20bfa1c5651be0743d5887ba95b26c8a02da" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:28.485668 systemd[1]: Started cri-containerd-1df5633f407e23a09c0d62f85696872682aebe9cd541f5f8af037c76b0f0449d.scope - libcontainer container 1df5633f407e23a09c0d62f85696872682aebe9cd541f5f8af037c76b0f0449d. May 27 03:23:28.492121 systemd[1]: Started cri-containerd-222ddad5803b0f54e6ba6b25c284e7262c93d52a3a4f07095da341f922a390ed.scope - libcontainer container 222ddad5803b0f54e6ba6b25c284e7262c93d52a3a4f07095da341f922a390ed. May 27 03:23:28.550669 containerd[1579]: time="2025-05-27T03:23:28.550604662Z" level=info msg="StartContainer for \"1df5633f407e23a09c0d62f85696872682aebe9cd541f5f8af037c76b0f0449d\" returns successfully" May 27 03:23:28.551607 containerd[1579]: time="2025-05-27T03:23:28.551577013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-f8245,Uid:0903ded7-6db8-49ac-8ee5-f9ab5c891b1f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"222ddad5803b0f54e6ba6b25c284e7262c93d52a3a4f07095da341f922a390ed\"" May 27 03:23:28.555967 containerd[1579]: time="2025-05-27T03:23:28.555907632Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 03:23:28.969866 kubelet[2666]: I0527 03:23:28.969789 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mbq7w" podStartSLOduration=1.96977252 podStartE2EDuration="1.96977252s" podCreationTimestamp="2025-05-27 03:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:23:28.969315731 +0000 UTC m=+8.141864607" watchObservedRunningTime="2025-05-27 03:23:28.96977252 +0000 UTC m=+8.142321396" May 27 03:23:29.982022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439529896.mount: Deactivated 
successfully. May 27 03:23:31.711913 containerd[1579]: time="2025-05-27T03:23:31.711827180Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:31.713928 containerd[1579]: time="2025-05-27T03:23:31.713880301Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 03:23:31.715893 containerd[1579]: time="2025-05-27T03:23:31.715826097Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:31.719837 containerd[1579]: time="2025-05-27T03:23:31.719788382Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:31.720399 containerd[1579]: time="2025-05-27T03:23:31.720361902Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 3.164406679s" May 27 03:23:31.720399 containerd[1579]: time="2025-05-27T03:23:31.720397018Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 03:23:31.725881 containerd[1579]: time="2025-05-27T03:23:31.725841590Z" level=info msg="CreateContainer within sandbox \"222ddad5803b0f54e6ba6b25c284e7262c93d52a3a4f07095da341f922a390ed\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 03:23:31.738850 containerd[1579]: time="2025-05-27T03:23:31.738790033Z" level=info msg="Container 
b8d1edb39adbf091848f16d2de317c3f5ce66da93313980db76f72a94c536115: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:31.751555 containerd[1579]: time="2025-05-27T03:23:31.751497109Z" level=info msg="CreateContainer within sandbox \"222ddad5803b0f54e6ba6b25c284e7262c93d52a3a4f07095da341f922a390ed\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b8d1edb39adbf091848f16d2de317c3f5ce66da93313980db76f72a94c536115\"" May 27 03:23:31.752149 containerd[1579]: time="2025-05-27T03:23:31.752102118Z" level=info msg="StartContainer for \"b8d1edb39adbf091848f16d2de317c3f5ce66da93313980db76f72a94c536115\"" May 27 03:23:31.753368 containerd[1579]: time="2025-05-27T03:23:31.753336112Z" level=info msg="connecting to shim b8d1edb39adbf091848f16d2de317c3f5ce66da93313980db76f72a94c536115" address="unix:///run/containerd/s/b297da6fe26d7cb57e7159e8a67b20bfa1c5651be0743d5887ba95b26c8a02da" protocol=ttrpc version=3 May 27 03:23:31.809396 systemd[1]: Started cri-containerd-b8d1edb39adbf091848f16d2de317c3f5ce66da93313980db76f72a94c536115.scope - libcontainer container b8d1edb39adbf091848f16d2de317c3f5ce66da93313980db76f72a94c536115. 
May 27 03:23:31.867383 containerd[1579]: time="2025-05-27T03:23:31.867327094Z" level=info msg="StartContainer for \"b8d1edb39adbf091848f16d2de317c3f5ce66da93313980db76f72a94c536115\" returns successfully" May 27 03:23:31.967226 kubelet[2666]: I0527 03:23:31.967027 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-f8245" podStartSLOduration=0.799592308 podStartE2EDuration="3.967006602s" podCreationTimestamp="2025-05-27 03:23:28 +0000 UTC" firstStartedPulling="2025-05-27 03:23:28.553774278 +0000 UTC m=+7.726323154" lastFinishedPulling="2025-05-27 03:23:31.721188572 +0000 UTC m=+10.893737448" observedRunningTime="2025-05-27 03:23:31.96678395 +0000 UTC m=+11.139332826" watchObservedRunningTime="2025-05-27 03:23:31.967006602 +0000 UTC m=+11.139555468" May 27 03:23:35.308363 update_engine[1560]: I20250527 03:23:35.308269 1560 update_attempter.cc:509] Updating boot flags... May 27 03:23:37.225735 sudo[1772]: pam_unix(sudo:session): session closed for user root May 27 03:23:37.230298 sshd[1771]: Connection closed by 10.0.0.1 port 50010 May 27 03:23:37.228278 sshd-session[1769]: pam_unix(sshd:session): session closed for user core May 27 03:23:37.235429 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:50010.service: Deactivated successfully. May 27 03:23:37.239381 systemd[1]: session-7.scope: Deactivated successfully. May 27 03:23:37.239792 systemd[1]: session-7.scope: Consumed 5.375s CPU time, 224.5M memory peak. May 27 03:23:37.242401 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit. May 27 03:23:37.245592 systemd-logind[1554]: Removed session 7. May 27 03:23:40.877184 systemd[1]: Created slice kubepods-besteffort-pod96e36dda_6e23_48e0_a76a_a443be4e3d4d.slice - libcontainer container kubepods-besteffort-pod96e36dda_6e23_48e0_a76a_a443be4e3d4d.slice. 
May 27 03:23:40.930879 kubelet[2666]: I0527 03:23:40.930812 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/96e36dda-6e23-48e0-a76a-a443be4e3d4d-typha-certs\") pod \"calico-typha-7cbcd979fd-krclz\" (UID: \"96e36dda-6e23-48e0-a76a-a443be4e3d4d\") " pod="calico-system/calico-typha-7cbcd979fd-krclz" May 27 03:23:40.930879 kubelet[2666]: I0527 03:23:40.930871 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmxkh\" (UniqueName: \"kubernetes.io/projected/96e36dda-6e23-48e0-a76a-a443be4e3d4d-kube-api-access-kmxkh\") pod \"calico-typha-7cbcd979fd-krclz\" (UID: \"96e36dda-6e23-48e0-a76a-a443be4e3d4d\") " pod="calico-system/calico-typha-7cbcd979fd-krclz" May 27 03:23:40.930879 kubelet[2666]: I0527 03:23:40.930898 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e36dda-6e23-48e0-a76a-a443be4e3d4d-tigera-ca-bundle\") pod \"calico-typha-7cbcd979fd-krclz\" (UID: \"96e36dda-6e23-48e0-a76a-a443be4e3d4d\") " pod="calico-system/calico-typha-7cbcd979fd-krclz" May 27 03:23:41.086704 systemd[1]: Created slice kubepods-besteffort-pod0169c816_f329_4b7c_9f93_f19abbb3aa43.slice - libcontainer container kubepods-besteffort-pod0169c816_f329_4b7c_9f93_f19abbb3aa43.slice. 
May 27 03:23:41.133163 kubelet[2666]: I0527 03:23:41.132888 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0169c816-f329-4b7c-9f93-f19abbb3aa43-cni-net-dir\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133163 kubelet[2666]: I0527 03:23:41.132945 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0169c816-f329-4b7c-9f93-f19abbb3aa43-var-run-calico\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133163 kubelet[2666]: I0527 03:23:41.132970 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0169c816-f329-4b7c-9f93-f19abbb3aa43-flexvol-driver-host\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133163 kubelet[2666]: I0527 03:23:41.132996 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0169c816-f329-4b7c-9f93-f19abbb3aa43-tigera-ca-bundle\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133163 kubelet[2666]: I0527 03:23:41.133020 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0169c816-f329-4b7c-9f93-f19abbb3aa43-var-lib-calico\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133614 kubelet[2666]: I0527 
03:23:41.133124 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0169c816-f329-4b7c-9f93-f19abbb3aa43-cni-log-dir\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133614 kubelet[2666]: I0527 03:23:41.133303 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0169c816-f329-4b7c-9f93-f19abbb3aa43-lib-modules\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133614 kubelet[2666]: I0527 03:23:41.133328 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgpl2\" (UniqueName: \"kubernetes.io/projected/0169c816-f329-4b7c-9f93-f19abbb3aa43-kube-api-access-jgpl2\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133614 kubelet[2666]: I0527 03:23:41.133358 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0169c816-f329-4b7c-9f93-f19abbb3aa43-xtables-lock\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133614 kubelet[2666]: I0527 03:23:41.133386 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0169c816-f329-4b7c-9f93-f19abbb3aa43-cni-bin-dir\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133829 kubelet[2666]: I0527 03:23:41.133407 2666 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0169c816-f329-4b7c-9f93-f19abbb3aa43-node-certs\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.133829 kubelet[2666]: I0527 03:23:41.133432 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0169c816-f329-4b7c-9f93-f19abbb3aa43-policysync\") pod \"calico-node-rspc2\" (UID: \"0169c816-f329-4b7c-9f93-f19abbb3aa43\") " pod="calico-system/calico-node-rspc2" May 27 03:23:41.182387 containerd[1579]: time="2025-05-27T03:23:41.182332746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cbcd979fd-krclz,Uid:96e36dda-6e23-48e0-a76a-a443be4e3d4d,Namespace:calico-system,Attempt:0,}" May 27 03:23:41.218427 kubelet[2666]: E0527 03:23:41.218357 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m8zk9" podUID="57dcdeb3-a1bc-43ee-a490-a9f0e2678204" May 27 03:23:41.234550 kubelet[2666]: I0527 03:23:41.234503 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/57dcdeb3-a1bc-43ee-a490-a9f0e2678204-socket-dir\") pod \"csi-node-driver-m8zk9\" (UID: \"57dcdeb3-a1bc-43ee-a490-a9f0e2678204\") " pod="calico-system/csi-node-driver-m8zk9" May 27 03:23:41.234676 kubelet[2666]: I0527 03:23:41.234563 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/57dcdeb3-a1bc-43ee-a490-a9f0e2678204-varrun\") pod \"csi-node-driver-m8zk9\" (UID: 
\"57dcdeb3-a1bc-43ee-a490-a9f0e2678204\") " pod="calico-system/csi-node-driver-m8zk9" May 27 03:23:41.234676 kubelet[2666]: I0527 03:23:41.234649 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qchs\" (UniqueName: \"kubernetes.io/projected/57dcdeb3-a1bc-43ee-a490-a9f0e2678204-kube-api-access-5qchs\") pod \"csi-node-driver-m8zk9\" (UID: \"57dcdeb3-a1bc-43ee-a490-a9f0e2678204\") " pod="calico-system/csi-node-driver-m8zk9" May 27 03:23:41.234750 kubelet[2666]: I0527 03:23:41.234715 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57dcdeb3-a1bc-43ee-a490-a9f0e2678204-kubelet-dir\") pod \"csi-node-driver-m8zk9\" (UID: \"57dcdeb3-a1bc-43ee-a490-a9f0e2678204\") " pod="calico-system/csi-node-driver-m8zk9" May 27 03:23:41.234780 kubelet[2666]: I0527 03:23:41.234770 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/57dcdeb3-a1bc-43ee-a490-a9f0e2678204-registration-dir\") pod \"csi-node-driver-m8zk9\" (UID: \"57dcdeb3-a1bc-43ee-a490-a9f0e2678204\") " pod="calico-system/csi-node-driver-m8zk9" May 27 03:23:41.335794 kubelet[2666]: E0527 03:23:41.335743 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.335794 kubelet[2666]: W0527 03:23:41.335773 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.337273 kubelet[2666]: E0527 03:23:41.337233 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.337530 kubelet[2666]: E0527 03:23:41.337508 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.337530 kubelet[2666]: W0527 03:23:41.337524 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.337600 kubelet[2666]: E0527 03:23:41.337534 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.337968 kubelet[2666]: E0527 03:23:41.337927 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.338011 kubelet[2666]: W0527 03:23:41.337967 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.338011 kubelet[2666]: E0527 03:23:41.337996 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.338259 kubelet[2666]: E0527 03:23:41.338240 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.338259 kubelet[2666]: W0527 03:23:41.338256 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.338319 kubelet[2666]: E0527 03:23:41.338267 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.338513 kubelet[2666]: E0527 03:23:41.338494 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.338545 kubelet[2666]: W0527 03:23:41.338517 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.338545 kubelet[2666]: E0527 03:23:41.338529 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.338777 kubelet[2666]: E0527 03:23:41.338759 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.338777 kubelet[2666]: W0527 03:23:41.338772 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.338853 kubelet[2666]: E0527 03:23:41.338783 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.339156 kubelet[2666]: E0527 03:23:41.339028 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.339156 kubelet[2666]: W0527 03:23:41.339042 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.339156 kubelet[2666]: E0527 03:23:41.339053 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.339410 kubelet[2666]: E0527 03:23:41.339358 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.339410 kubelet[2666]: W0527 03:23:41.339376 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.339410 kubelet[2666]: E0527 03:23:41.339387 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.339692 kubelet[2666]: E0527 03:23:41.339553 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.339692 kubelet[2666]: W0527 03:23:41.339560 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.339692 kubelet[2666]: E0527 03:23:41.339567 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.339974 kubelet[2666]: E0527 03:23:41.339713 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.339974 kubelet[2666]: W0527 03:23:41.339720 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.339974 kubelet[2666]: E0527 03:23:41.339728 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.343356 kubelet[2666]: E0527 03:23:41.343339 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.343356 kubelet[2666]: W0527 03:23:41.343352 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.343479 kubelet[2666]: E0527 03:23:41.343363 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.343661 kubelet[2666]: E0527 03:23:41.343631 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.343661 kubelet[2666]: W0527 03:23:41.343648 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.343661 kubelet[2666]: E0527 03:23:41.343661 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.343873 kubelet[2666]: E0527 03:23:41.343855 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.343873 kubelet[2666]: W0527 03:23:41.343864 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.343873 kubelet[2666]: E0527 03:23:41.343871 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.344056 kubelet[2666]: E0527 03:23:41.344043 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.344056 kubelet[2666]: W0527 03:23:41.344051 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.344056 kubelet[2666]: E0527 03:23:41.344059 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.344225 kubelet[2666]: E0527 03:23:41.344213 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.344225 kubelet[2666]: W0527 03:23:41.344222 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.344278 kubelet[2666]: E0527 03:23:41.344229 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.344397 kubelet[2666]: E0527 03:23:41.344385 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.344423 kubelet[2666]: W0527 03:23:41.344397 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.344423 kubelet[2666]: E0527 03:23:41.344407 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.344570 kubelet[2666]: E0527 03:23:41.344558 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.344570 kubelet[2666]: W0527 03:23:41.344566 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.344630 kubelet[2666]: E0527 03:23:41.344573 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.344792 kubelet[2666]: E0527 03:23:41.344774 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.344792 kubelet[2666]: W0527 03:23:41.344787 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.344855 kubelet[2666]: E0527 03:23:41.344797 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.344992 kubelet[2666]: E0527 03:23:41.344973 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.344992 kubelet[2666]: W0527 03:23:41.344985 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.345036 kubelet[2666]: E0527 03:23:41.344993 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.345196 kubelet[2666]: E0527 03:23:41.345184 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.345254 kubelet[2666]: W0527 03:23:41.345194 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.345254 kubelet[2666]: E0527 03:23:41.345221 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.345409 kubelet[2666]: E0527 03:23:41.345395 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.345409 kubelet[2666]: W0527 03:23:41.345406 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.345476 kubelet[2666]: E0527 03:23:41.345414 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.345604 kubelet[2666]: E0527 03:23:41.345592 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.345604 kubelet[2666]: W0527 03:23:41.345601 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.345649 kubelet[2666]: E0527 03:23:41.345609 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.345811 kubelet[2666]: E0527 03:23:41.345800 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.345811 kubelet[2666]: W0527 03:23:41.345809 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.345850 kubelet[2666]: E0527 03:23:41.345816 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.346005 kubelet[2666]: E0527 03:23:41.345994 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.346005 kubelet[2666]: W0527 03:23:41.346003 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.346053 kubelet[2666]: E0527 03:23:41.346010 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.346227 kubelet[2666]: E0527 03:23:41.346191 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.346227 kubelet[2666]: W0527 03:23:41.346219 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.346267 kubelet[2666]: E0527 03:23:41.346229 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:41.420976 kubelet[2666]: E0527 03:23:41.420793 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.420976 kubelet[2666]: W0527 03:23:41.420823 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.420976 kubelet[2666]: E0527 03:23:41.420844 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.430843 kubelet[2666]: E0527 03:23:41.430814 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:41.430843 kubelet[2666]: W0527 03:23:41.430833 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:41.430843 kubelet[2666]: E0527 03:23:41.430856 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:41.452003 containerd[1579]: time="2025-05-27T03:23:41.451953528Z" level=info msg="connecting to shim 5b7b3d1c4f1502ca643ab66827f4f79e95fe740fbeb117cda39aa9b201cdd47f" address="unix:///run/containerd/s/9d4606eec907a891eed9338a46d4cd74672e0df8250a465911707dd8138e9bdd" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:41.481365 systemd[1]: Started cri-containerd-5b7b3d1c4f1502ca643ab66827f4f79e95fe740fbeb117cda39aa9b201cdd47f.scope - libcontainer container 5b7b3d1c4f1502ca643ab66827f4f79e95fe740fbeb117cda39aa9b201cdd47f. 
May 27 03:23:41.672963 containerd[1579]: time="2025-05-27T03:23:41.672783785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cbcd979fd-krclz,Uid:96e36dda-6e23-48e0-a76a-a443be4e3d4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b7b3d1c4f1502ca643ab66827f4f79e95fe740fbeb117cda39aa9b201cdd47f\"" May 27 03:23:41.674815 containerd[1579]: time="2025-05-27T03:23:41.674780695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 03:23:41.693814 containerd[1579]: time="2025-05-27T03:23:41.693754467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rspc2,Uid:0169c816-f329-4b7c-9f93-f19abbb3aa43,Namespace:calico-system,Attempt:0,}" May 27 03:23:41.928238 containerd[1579]: time="2025-05-27T03:23:41.927797961Z" level=info msg="connecting to shim 20a05b7ce5d0423cf9039821e52a8954aad0f12da98242305e2fe2780fb53596" address="unix:///run/containerd/s/68697300d6e2f38c593bc65073b26c999d9cd2cfab83bc166420769d7854038b" namespace=k8s.io protocol=ttrpc version=3 May 27 03:23:41.963509 systemd[1]: Started cri-containerd-20a05b7ce5d0423cf9039821e52a8954aad0f12da98242305e2fe2780fb53596.scope - libcontainer container 20a05b7ce5d0423cf9039821e52a8954aad0f12da98242305e2fe2780fb53596. 
May 27 03:23:42.042462 containerd[1579]: time="2025-05-27T03:23:42.042278413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rspc2,Uid:0169c816-f329-4b7c-9f93-f19abbb3aa43,Namespace:calico-system,Attempt:0,} returns sandbox id \"20a05b7ce5d0423cf9039821e52a8954aad0f12da98242305e2fe2780fb53596\"" May 27 03:23:42.922277 kubelet[2666]: E0527 03:23:42.922157 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m8zk9" podUID="57dcdeb3-a1bc-43ee-a490-a9f0e2678204" May 27 03:23:43.522955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2997463921.mount: Deactivated successfully. May 27 03:23:44.068571 containerd[1579]: time="2025-05-27T03:23:44.068507442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:44.069431 containerd[1579]: time="2025-05-27T03:23:44.069388003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 03:23:44.070765 containerd[1579]: time="2025-05-27T03:23:44.070720616Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:44.072916 containerd[1579]: time="2025-05-27T03:23:44.072866071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:44.073495 containerd[1579]: time="2025-05-27T03:23:44.073447738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.398631525s" May 27 03:23:44.073495 containerd[1579]: time="2025-05-27T03:23:44.073492673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:23:44.074931 containerd[1579]: time="2025-05-27T03:23:44.074901068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:23:44.087488 containerd[1579]: time="2025-05-27T03:23:44.087437159Z" level=info msg="CreateContainer within sandbox \"5b7b3d1c4f1502ca643ab66827f4f79e95fe740fbeb117cda39aa9b201cdd47f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:23:44.096900 containerd[1579]: time="2025-05-27T03:23:44.096836113Z" level=info msg="Container 2cce269c7a0070ebf7012f5f8cb4961f4a8579faa08b7703c1d87ed86373bb7e: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:44.105632 containerd[1579]: time="2025-05-27T03:23:44.105563560Z" level=info msg="CreateContainer within sandbox \"5b7b3d1c4f1502ca643ab66827f4f79e95fe740fbeb117cda39aa9b201cdd47f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2cce269c7a0070ebf7012f5f8cb4961f4a8579faa08b7703c1d87ed86373bb7e\"" May 27 03:23:44.106270 containerd[1579]: time="2025-05-27T03:23:44.106224617Z" level=info msg="StartContainer for \"2cce269c7a0070ebf7012f5f8cb4961f4a8579faa08b7703c1d87ed86373bb7e\"" May 27 03:23:44.107386 containerd[1579]: time="2025-05-27T03:23:44.107360178Z" level=info msg="connecting to shim 2cce269c7a0070ebf7012f5f8cb4961f4a8579faa08b7703c1d87ed86373bb7e" address="unix:///run/containerd/s/9d4606eec907a891eed9338a46d4cd74672e0df8250a465911707dd8138e9bdd" protocol=ttrpc version=3 May 27 
03:23:44.130403 systemd[1]: Started cri-containerd-2cce269c7a0070ebf7012f5f8cb4961f4a8579faa08b7703c1d87ed86373bb7e.scope - libcontainer container 2cce269c7a0070ebf7012f5f8cb4961f4a8579faa08b7703c1d87ed86373bb7e. May 27 03:23:44.443352 containerd[1579]: time="2025-05-27T03:23:44.443122548Z" level=info msg="StartContainer for \"2cce269c7a0070ebf7012f5f8cb4961f4a8579faa08b7703c1d87ed86373bb7e\" returns successfully" May 27 03:23:44.922753 kubelet[2666]: E0527 03:23:44.922662 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m8zk9" podUID="57dcdeb3-a1bc-43ee-a490-a9f0e2678204" May 27 03:23:45.013300 kubelet[2666]: I0527 03:23:45.013228 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cbcd979fd-krclz" podStartSLOduration=2.613135037 podStartE2EDuration="5.013197512s" podCreationTimestamp="2025-05-27 03:23:40 +0000 UTC" firstStartedPulling="2025-05-27 03:23:41.67424248 +0000 UTC m=+20.846791366" lastFinishedPulling="2025-05-27 03:23:44.074304965 +0000 UTC m=+23.246853841" observedRunningTime="2025-05-27 03:23:45.012466314 +0000 UTC m=+24.185015200" watchObservedRunningTime="2025-05-27 03:23:45.013197512 +0000 UTC m=+24.185746388" May 27 03:23:45.056195 kubelet[2666]: E0527 03:23:45.056128 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.056195 kubelet[2666]: W0527 03:23:45.056166 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.056195 kubelet[2666]: E0527 03:23:45.056192 2666 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:45.056638 kubelet[2666]: E0527 03:23:45.056592 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.056638 kubelet[2666]: W0527 03:23:45.056621 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.056638 kubelet[2666]: E0527 03:23:45.056647 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:45.056882 kubelet[2666]: E0527 03:23:45.056842 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.056882 kubelet[2666]: W0527 03:23:45.056852 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.056882 kubelet[2666]: E0527 03:23:45.056862 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:45.057158 kubelet[2666]: E0527 03:23:45.057122 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.057158 kubelet[2666]: W0527 03:23:45.057138 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.057158 kubelet[2666]: E0527 03:23:45.057150 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:45.057442 kubelet[2666]: E0527 03:23:45.057414 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.057442 kubelet[2666]: W0527 03:23:45.057428 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.057442 kubelet[2666]: E0527 03:23:45.057437 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:45.057681 kubelet[2666]: E0527 03:23:45.057661 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.057681 kubelet[2666]: W0527 03:23:45.057675 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.057760 kubelet[2666]: E0527 03:23:45.057684 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:45.057904 kubelet[2666]: E0527 03:23:45.057885 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.057904 kubelet[2666]: W0527 03:23:45.057899 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.057992 kubelet[2666]: E0527 03:23:45.057911 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:45.058130 kubelet[2666]: E0527 03:23:45.058111 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.058130 kubelet[2666]: W0527 03:23:45.058125 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.058216 kubelet[2666]: E0527 03:23:45.058134 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:45.058389 kubelet[2666]: E0527 03:23:45.058369 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.058389 kubelet[2666]: W0527 03:23:45.058383 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.058465 kubelet[2666]: E0527 03:23:45.058392 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:45.058604 kubelet[2666]: E0527 03:23:45.058586 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.058604 kubelet[2666]: W0527 03:23:45.058600 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.058678 kubelet[2666]: E0527 03:23:45.058611 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:45.058817 kubelet[2666]: E0527 03:23:45.058798 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:45.058817 kubelet[2666]: W0527 03:23:45.058810 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:45.058886 kubelet[2666]: E0527 03:23:45.058820 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:45.992986 kubelet[2666]: I0527 03:23:45.992942 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:23:46.067828 kubelet[2666]: E0527 03:23:46.067778 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.067828 kubelet[2666]: W0527 03:23:46.067812 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.067828 kubelet[2666]: E0527 03:23:46.067839 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.068063 kubelet[2666]: E0527 03:23:46.068054 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.068096 kubelet[2666]: W0527 03:23:46.068064 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.068096 kubelet[2666]: E0527 03:23:46.068074 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:23:46.082371 kubelet[2666]: E0527 03:23:46.082353 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:23:46.082371 kubelet[2666]: W0527 03:23:46.082365 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:23:46.082442 kubelet[2666]: E0527 03:23:46.082373 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:23:46.094318 containerd[1579]: time="2025-05-27T03:23:46.093526392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:46.094739 containerd[1579]: time="2025-05-27T03:23:46.094484587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 03:23:46.096448 containerd[1579]: time="2025-05-27T03:23:46.096386080Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:46.098838 containerd[1579]: time="2025-05-27T03:23:46.098760596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:46.099577 containerd[1579]: time="2025-05-27T03:23:46.099530206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 2.024584711s" May 27 03:23:46.099625 containerd[1579]: time="2025-05-27T03:23:46.099570271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:23:46.105415 containerd[1579]: time="2025-05-27T03:23:46.105275632Z" level=info msg="CreateContainer within sandbox \"20a05b7ce5d0423cf9039821e52a8954aad0f12da98242305e2fe2780fb53596\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:23:46.118801 containerd[1579]: time="2025-05-27T03:23:46.118757108Z" level=info msg="Container 747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:46.129232 containerd[1579]: time="2025-05-27T03:23:46.129179179Z" level=info msg="CreateContainer within sandbox \"20a05b7ce5d0423cf9039821e52a8954aad0f12da98242305e2fe2780fb53596\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb\"" May 27 03:23:46.129887 containerd[1579]: time="2025-05-27T03:23:46.129712242Z" level=info msg="StartContainer for \"747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb\"" May 27 03:23:46.131379 containerd[1579]: time="2025-05-27T03:23:46.131348205Z" level=info msg="connecting to shim 747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb" address="unix:///run/containerd/s/68697300d6e2f38c593bc65073b26c999d9cd2cfab83bc166420769d7854038b" protocol=ttrpc version=3 May 27 03:23:46.157451 systemd[1]: Started cri-containerd-747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb.scope - libcontainer container 
747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb. May 27 03:23:46.220875 containerd[1579]: time="2025-05-27T03:23:46.220747307Z" level=info msg="StartContainer for \"747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb\" returns successfully" May 27 03:23:46.226621 systemd[1]: cri-containerd-747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb.scope: Deactivated successfully. May 27 03:23:46.229796 containerd[1579]: time="2025-05-27T03:23:46.229754981Z" level=info msg="received exit event container_id:\"747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb\" id:\"747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb\" pid:3363 exited_at:{seconds:1748316226 nanos:229341462}" May 27 03:23:46.229905 containerd[1579]: time="2025-05-27T03:23:46.229836576Z" level=info msg="TaskExit event in podsandbox handler container_id:\"747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb\" id:\"747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb\" pid:3363 exited_at:{seconds:1748316226 nanos:229341462}" May 27 03:23:46.263764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-747097fbda9f4f6b876ff62dcc4fa6e1fd5a7f869c9908ad96e6a85ff4c325cb-rootfs.mount: Deactivated successfully. 
May 27 03:23:46.922098 kubelet[2666]: E0527 03:23:46.922012 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m8zk9" podUID="57dcdeb3-a1bc-43ee-a490-a9f0e2678204" May 27 03:23:48.003093 containerd[1579]: time="2025-05-27T03:23:48.002846985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:23:48.922577 kubelet[2666]: E0527 03:23:48.922503 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m8zk9" podUID="57dcdeb3-a1bc-43ee-a490-a9f0e2678204" May 27 03:23:50.922340 kubelet[2666]: E0527 03:23:50.922254 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m8zk9" podUID="57dcdeb3-a1bc-43ee-a490-a9f0e2678204" May 27 03:23:52.105759 containerd[1579]: time="2025-05-27T03:23:52.105696628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:52.106928 containerd[1579]: time="2025-05-27T03:23:52.106777272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:23:52.108341 containerd[1579]: time="2025-05-27T03:23:52.108300688Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:52.111251 containerd[1579]: 
time="2025-05-27T03:23:52.111166550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:23:52.111822 containerd[1579]: time="2025-05-27T03:23:52.111777370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.108881142s" May 27 03:23:52.111822 containerd[1579]: time="2025-05-27T03:23:52.111814760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:23:52.117774 containerd[1579]: time="2025-05-27T03:23:52.117717346Z" level=info msg="CreateContainer within sandbox \"20a05b7ce5d0423cf9039821e52a8954aad0f12da98242305e2fe2780fb53596\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:23:52.127179 containerd[1579]: time="2025-05-27T03:23:52.127123604Z" level=info msg="Container a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5: CDI devices from CRI Config.CDIDevices: []" May 27 03:23:52.140050 containerd[1579]: time="2025-05-27T03:23:52.139984944Z" level=info msg="CreateContainer within sandbox \"20a05b7ce5d0423cf9039821e52a8954aad0f12da98242305e2fe2780fb53596\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5\"" May 27 03:23:52.140714 containerd[1579]: time="2025-05-27T03:23:52.140668229Z" level=info msg="StartContainer for \"a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5\"" May 27 03:23:52.142470 containerd[1579]: time="2025-05-27T03:23:52.142444992Z" 
level=info msg="connecting to shim a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5" address="unix:///run/containerd/s/68697300d6e2f38c593bc65073b26c999d9cd2cfab83bc166420769d7854038b" protocol=ttrpc version=3 May 27 03:23:52.167408 systemd[1]: Started cri-containerd-a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5.scope - libcontainer container a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5. May 27 03:23:52.220564 containerd[1579]: time="2025-05-27T03:23:52.220448760Z" level=info msg="StartContainer for \"a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5\" returns successfully" May 27 03:23:52.922410 kubelet[2666]: E0527 03:23:52.922335 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m8zk9" podUID="57dcdeb3-a1bc-43ee-a490-a9f0e2678204" May 27 03:23:54.155279 containerd[1579]: time="2025-05-27T03:23:54.155226892Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:23:54.158040 systemd[1]: cri-containerd-a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5.scope: Deactivated successfully. May 27 03:23:54.158484 systemd[1]: cri-containerd-a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5.scope: Consumed 646ms CPU time, 182.1M memory peak, 2.7M read from disk, 170.9M written to disk. 
May 27 03:23:54.160048 containerd[1579]: time="2025-05-27T03:23:54.159993557Z" level=info msg="received exit event container_id:\"a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5\" id:\"a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5\" pid:3424 exited_at:{seconds:1748316234 nanos:159769856}" May 27 03:23:54.160195 containerd[1579]: time="2025-05-27T03:23:54.160154599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5\" id:\"a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5\" pid:3424 exited_at:{seconds:1748316234 nanos:159769856}" May 27 03:23:54.184674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a476f14aaf38378fa962e3a822f56ba760b5f76c79325608b43e5b36f99dc0d5-rootfs.mount: Deactivated successfully. May 27 03:23:54.219232 kubelet[2666]: I0527 03:23:54.219113 2666 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:23:54.484370 systemd[1]: Created slice kubepods-burstable-pod0bd63e18_7617_48c6_b7cb_0f923685a1d3.slice - libcontainer container kubepods-burstable-pod0bd63e18_7617_48c6_b7cb_0f923685a1d3.slice. May 27 03:23:54.503007 systemd[1]: Created slice kubepods-burstable-pod06887eb6_6182_4a42_b24d_205b481aae1e.slice - libcontainer container kubepods-burstable-pod06887eb6_6182_4a42_b24d_205b481aae1e.slice. May 27 03:23:54.510280 systemd[1]: Created slice kubepods-besteffort-pod1a3d3daf_895d_4366_a330_59c562670d8f.slice - libcontainer container kubepods-besteffort-pod1a3d3daf_895d_4366_a330_59c562670d8f.slice. May 27 03:23:54.519692 systemd[1]: Created slice kubepods-besteffort-podecba4ce8_cd3e_4b9d_9c7a_97bf8056f64f.slice - libcontainer container kubepods-besteffort-podecba4ce8_cd3e_4b9d_9c7a_97bf8056f64f.slice. 
May 27 03:23:54.528696 systemd[1]: Created slice kubepods-besteffort-podcc9060dd_bed3_42dd_955b_36f0d660ea40.slice - libcontainer container kubepods-besteffort-podcc9060dd_bed3_42dd_955b_36f0d660ea40.slice. May 27 03:23:54.535788 systemd[1]: Created slice kubepods-besteffort-pod32672c55_4bd7_4610_9392_e91638e32b95.slice - libcontainer container kubepods-besteffort-pod32672c55_4bd7_4610_9392_e91638e32b95.slice. May 27 03:23:54.536659 kubelet[2666]: I0527 03:23:54.536537 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4svg6\" (UniqueName: \"kubernetes.io/projected/ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f-kube-api-access-4svg6\") pod \"calico-apiserver-8568ddb668-jkxdz\" (UID: \"ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f\") " pod="calico-apiserver/calico-apiserver-8568ddb668-jkxdz" May 27 03:23:54.536659 kubelet[2666]: I0527 03:23:54.536586 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p58gc\" (UniqueName: \"kubernetes.io/projected/32672c55-4bd7-4610-9392-e91638e32b95-kube-api-access-p58gc\") pod \"calico-apiserver-8568ddb668-m4k58\" (UID: \"32672c55-4bd7-4610-9392-e91638e32b95\") " pod="calico-apiserver/calico-apiserver-8568ddb668-m4k58" May 27 03:23:54.536659 kubelet[2666]: I0527 03:23:54.536613 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a3d3daf-895d-4366-a330-59c562670d8f-tigera-ca-bundle\") pod \"calico-kube-controllers-7465d6c55f-6nbj6\" (UID: \"1a3d3daf-895d-4366-a330-59c562670d8f\") " pod="calico-system/calico-kube-controllers-7465d6c55f-6nbj6" May 27 03:23:54.536659 kubelet[2666]: I0527 03:23:54.536628 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb8wn\" (UniqueName: 
\"kubernetes.io/projected/1a3d3daf-895d-4366-a330-59c562670d8f-kube-api-access-lb8wn\") pod \"calico-kube-controllers-7465d6c55f-6nbj6\" (UID: \"1a3d3daf-895d-4366-a330-59c562670d8f\") " pod="calico-system/calico-kube-controllers-7465d6c55f-6nbj6" May 27 03:23:54.536659 kubelet[2666]: I0527 03:23:54.536647 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjbb\" (UniqueName: \"kubernetes.io/projected/06887eb6-6182-4a42-b24d-205b481aae1e-kube-api-access-chjbb\") pod \"coredns-674b8bbfcf-f6m4x\" (UID: \"06887eb6-6182-4a42-b24d-205b481aae1e\") " pod="kube-system/coredns-674b8bbfcf-f6m4x" May 27 03:23:54.536951 kubelet[2666]: I0527 03:23:54.536665 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6bt\" (UniqueName: \"kubernetes.io/projected/0bd63e18-7617-48c6-b7cb-0f923685a1d3-kube-api-access-jk6bt\") pod \"coredns-674b8bbfcf-rmmk9\" (UID: \"0bd63e18-7617-48c6-b7cb-0f923685a1d3\") " pod="kube-system/coredns-674b8bbfcf-rmmk9" May 27 03:23:54.536951 kubelet[2666]: I0527 03:23:54.536681 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f-calico-apiserver-certs\") pod \"calico-apiserver-8568ddb668-jkxdz\" (UID: \"ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f\") " pod="calico-apiserver/calico-apiserver-8568ddb668-jkxdz" May 27 03:23:54.536951 kubelet[2666]: I0527 03:23:54.536696 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv7t4\" (UniqueName: \"kubernetes.io/projected/cc9060dd-bed3-42dd-955b-36f0d660ea40-kube-api-access-lv7t4\") pod \"goldmane-78d55f7ddc-kxppn\" (UID: \"cc9060dd-bed3-42dd-955b-36f0d660ea40\") " pod="calico-system/goldmane-78d55f7ddc-kxppn" May 27 03:23:54.536951 kubelet[2666]: I0527 03:23:54.536710 2666 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06887eb6-6182-4a42-b24d-205b481aae1e-config-volume\") pod \"coredns-674b8bbfcf-f6m4x\" (UID: \"06887eb6-6182-4a42-b24d-205b481aae1e\") " pod="kube-system/coredns-674b8bbfcf-f6m4x" May 27 03:23:54.536951 kubelet[2666]: I0527 03:23:54.536729 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/32672c55-4bd7-4610-9392-e91638e32b95-calico-apiserver-certs\") pod \"calico-apiserver-8568ddb668-m4k58\" (UID: \"32672c55-4bd7-4610-9392-e91638e32b95\") " pod="calico-apiserver/calico-apiserver-8568ddb668-m4k58" May 27 03:23:54.537074 kubelet[2666]: I0527 03:23:54.536747 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc9060dd-bed3-42dd-955b-36f0d660ea40-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-kxppn\" (UID: \"cc9060dd-bed3-42dd-955b-36f0d660ea40\") " pod="calico-system/goldmane-78d55f7ddc-kxppn" May 27 03:23:54.537074 kubelet[2666]: I0527 03:23:54.536766 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-whisker-ca-bundle\") pod \"whisker-6d87c48bf7-rtnlj\" (UID: \"6bb61fee-af34-4eb5-88db-e6dd1f66a33a\") " pod="calico-system/whisker-6d87c48bf7-rtnlj" May 27 03:23:54.537074 kubelet[2666]: I0527 03:23:54.536797 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gwvf\" (UniqueName: \"kubernetes.io/projected/f95accb9-2129-4c05-aa43-61cf3ea066cd-kube-api-access-6gwvf\") pod \"calico-apiserver-5c55d66cb-lmvsk\" (UID: \"f95accb9-2129-4c05-aa43-61cf3ea066cd\") " 
pod="calico-apiserver/calico-apiserver-5c55d66cb-lmvsk" May 27 03:23:54.537074 kubelet[2666]: I0527 03:23:54.536817 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc9060dd-bed3-42dd-955b-36f0d660ea40-config\") pod \"goldmane-78d55f7ddc-kxppn\" (UID: \"cc9060dd-bed3-42dd-955b-36f0d660ea40\") " pod="calico-system/goldmane-78d55f7ddc-kxppn" May 27 03:23:54.537074 kubelet[2666]: I0527 03:23:54.536837 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cc9060dd-bed3-42dd-955b-36f0d660ea40-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-kxppn\" (UID: \"cc9060dd-bed3-42dd-955b-36f0d660ea40\") " pod="calico-system/goldmane-78d55f7ddc-kxppn" May 27 03:23:54.537222 kubelet[2666]: I0527 03:23:54.536858 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f95accb9-2129-4c05-aa43-61cf3ea066cd-calico-apiserver-certs\") pod \"calico-apiserver-5c55d66cb-lmvsk\" (UID: \"f95accb9-2129-4c05-aa43-61cf3ea066cd\") " pod="calico-apiserver/calico-apiserver-5c55d66cb-lmvsk" May 27 03:23:54.537222 kubelet[2666]: I0527 03:23:54.536875 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bd63e18-7617-48c6-b7cb-0f923685a1d3-config-volume\") pod \"coredns-674b8bbfcf-rmmk9\" (UID: \"0bd63e18-7617-48c6-b7cb-0f923685a1d3\") " pod="kube-system/coredns-674b8bbfcf-rmmk9" May 27 03:23:54.537222 kubelet[2666]: I0527 03:23:54.536908 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-whisker-backend-key-pair\") pod 
\"whisker-6d87c48bf7-rtnlj\" (UID: \"6bb61fee-af34-4eb5-88db-e6dd1f66a33a\") " pod="calico-system/whisker-6d87c48bf7-rtnlj" May 27 03:23:54.537222 kubelet[2666]: I0527 03:23:54.536926 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stjb8\" (UniqueName: \"kubernetes.io/projected/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-kube-api-access-stjb8\") pod \"whisker-6d87c48bf7-rtnlj\" (UID: \"6bb61fee-af34-4eb5-88db-e6dd1f66a33a\") " pod="calico-system/whisker-6d87c48bf7-rtnlj" May 27 03:23:54.543706 systemd[1]: Created slice kubepods-besteffort-pod6bb61fee_af34_4eb5_88db_e6dd1f66a33a.slice - libcontainer container kubepods-besteffort-pod6bb61fee_af34_4eb5_88db_e6dd1f66a33a.slice. May 27 03:23:54.548526 systemd[1]: Created slice kubepods-besteffort-podf95accb9_2129_4c05_aa43_61cf3ea066cd.slice - libcontainer container kubepods-besteffort-podf95accb9_2129_4c05_aa43_61cf3ea066cd.slice. May 27 03:23:54.789491 containerd[1579]: time="2025-05-27T03:23:54.789412899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmmk9,Uid:0bd63e18-7617-48c6-b7cb-0f923685a1d3,Namespace:kube-system,Attempt:0,}" May 27 03:23:54.807523 containerd[1579]: time="2025-05-27T03:23:54.807463268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f6m4x,Uid:06887eb6-6182-4a42-b24d-205b481aae1e,Namespace:kube-system,Attempt:0,}" May 27 03:23:54.816812 containerd[1579]: time="2025-05-27T03:23:54.816732612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7465d6c55f-6nbj6,Uid:1a3d3daf-895d-4366-a330-59c562670d8f,Namespace:calico-system,Attempt:0,}" May 27 03:23:54.825454 containerd[1579]: time="2025-05-27T03:23:54.825308642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8568ddb668-jkxdz,Uid:ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f,Namespace:calico-apiserver,Attempt:0,}" May 27 03:23:54.832618 containerd[1579]: 
time="2025-05-27T03:23:54.832423163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-kxppn,Uid:cc9060dd-bed3-42dd-955b-36f0d660ea40,Namespace:calico-system,Attempt:0,}" May 27 03:23:54.841452 containerd[1579]: time="2025-05-27T03:23:54.841387062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8568ddb668-m4k58,Uid:32672c55-4bd7-4610-9392-e91638e32b95,Namespace:calico-apiserver,Attempt:0,}" May 27 03:23:54.848749 containerd[1579]: time="2025-05-27T03:23:54.848663268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d87c48bf7-rtnlj,Uid:6bb61fee-af34-4eb5-88db-e6dd1f66a33a,Namespace:calico-system,Attempt:0,}" May 27 03:23:54.853227 containerd[1579]: time="2025-05-27T03:23:54.853105302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55d66cb-lmvsk,Uid:f95accb9-2129-4c05-aa43-61cf3ea066cd,Namespace:calico-apiserver,Attempt:0,}" May 27 03:23:54.932638 systemd[1]: Created slice kubepods-besteffort-pod57dcdeb3_a1bc_43ee_a490_a9f0e2678204.slice - libcontainer container kubepods-besteffort-pod57dcdeb3_a1bc_43ee_a490_a9f0e2678204.slice. 
May 27 03:23:54.938000 containerd[1579]: time="2025-05-27T03:23:54.937879575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m8zk9,Uid:57dcdeb3-a1bc-43ee-a490-a9f0e2678204,Namespace:calico-system,Attempt:0,}" May 27 03:23:54.967373 containerd[1579]: time="2025-05-27T03:23:54.967307544Z" level=error msg="Failed to destroy network for sandbox \"712efc8f31fc56745e4f3cc3391d0f088a8204c96da137ba9d2df8edffd92097\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:54.981979 containerd[1579]: time="2025-05-27T03:23:54.981821021Z" level=error msg="Failed to destroy network for sandbox \"aa71c3b6ba343a7e540624af6bfb91596ebbe5c264bfd54a84e7e3b0241381cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.042347 containerd[1579]: time="2025-05-27T03:23:54.985375125Z" level=error msg="Failed to destroy network for sandbox \"036904c41e10e1e4bef323936f3d45e0f0ef1827d69a013ce04c81a6dabb2455\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.051107 containerd[1579]: time="2025-05-27T03:23:55.043747708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f6m4x,Uid:06887eb6-6182-4a42-b24d-205b481aae1e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"712efc8f31fc56745e4f3cc3391d0f088a8204c96da137ba9d2df8edffd92097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 
03:23:55.051479 containerd[1579]: time="2025-05-27T03:23:55.043747437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7465d6c55f-6nbj6,Uid:1a3d3daf-895d-4366-a330-59c562670d8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa71c3b6ba343a7e540624af6bfb91596ebbe5c264bfd54a84e7e3b0241381cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.051651 containerd[1579]: time="2025-05-27T03:23:54.999404111Z" level=error msg="Failed to destroy network for sandbox \"1a5454da69c99fff172ce9df52ac7c7e33a32d7185569ca66060de35aab59ac8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.051858 containerd[1579]: time="2025-05-27T03:23:55.013676369Z" level=error msg="Failed to destroy network for sandbox \"f4e90a500abf26446ee8214e1ba00c467377d1036119f5ba0f1154b21737c146\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.052003 containerd[1579]: time="2025-05-27T03:23:55.027593590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:23:55.052092 containerd[1579]: time="2025-05-27T03:23:55.047450060Z" level=error msg="Failed to destroy network for sandbox \"a0962167d12ed5b0b58c7b2f17e6a2e17881ff69c54e4952d8d22001918b7ffd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.052298 containerd[1579]: time="2025-05-27T03:23:54.997745181Z" level=error 
msg="Failed to destroy network for sandbox \"d964a4396cdae7bfe28fe7b2a0cb6abb204c4068d47ddea80a56298545313ee5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.061236 containerd[1579]: time="2025-05-27T03:23:55.060963451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8568ddb668-m4k58,Uid:32672c55-4bd7-4610-9392-e91638e32b95,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"036904c41e10e1e4bef323936f3d45e0f0ef1827d69a013ce04c81a6dabb2455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.066246 containerd[1579]: time="2025-05-27T03:23:55.066192474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8568ddb668-jkxdz,Uid:ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d964a4396cdae7bfe28fe7b2a0cb6abb204c4068d47ddea80a56298545313ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.069630 containerd[1579]: time="2025-05-27T03:23:55.069601434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmmk9,Uid:0bd63e18-7617-48c6-b7cb-0f923685a1d3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5454da69c99fff172ce9df52ac7c7e33a32d7185569ca66060de35aab59ac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.071438 containerd[1579]: time="2025-05-27T03:23:55.071385269Z" level=error msg="Failed to destroy network for sandbox \"737104175e0b0dd819ad447e9785856715c2c829e66410a087e25e5a97fbcee9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.074195 containerd[1579]: time="2025-05-27T03:23:55.074140850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d87c48bf7-rtnlj,Uid:6bb61fee-af34-4eb5-88db-e6dd1f66a33a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0962167d12ed5b0b58c7b2f17e6a2e17881ff69c54e4952d8d22001918b7ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.075574 containerd[1579]: time="2025-05-27T03:23:55.075447107Z" level=error msg="Failed to destroy network for sandbox \"92cb6601f280ffe0ff55a002878983e23a6a46b12f01f3cadc0eb4fb9aa8483d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.076274 containerd[1579]: time="2025-05-27T03:23:55.076149147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-kxppn,Uid:cc9060dd-bed3-42dd-955b-36f0d660ea40,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e90a500abf26446ee8214e1ba00c467377d1036119f5ba0f1154b21737c146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 27 03:23:55.079444 kubelet[2666]: E0527 03:23:55.079387 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e90a500abf26446ee8214e1ba00c467377d1036119f5ba0f1154b21737c146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.079627 kubelet[2666]: E0527 03:23:55.079440 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d964a4396cdae7bfe28fe7b2a0cb6abb204c4068d47ddea80a56298545313ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.079627 kubelet[2666]: E0527 03:23:55.079472 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e90a500abf26446ee8214e1ba00c467377d1036119f5ba0f1154b21737c146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-kxppn" May 27 03:23:55.079627 kubelet[2666]: E0527 03:23:55.079495 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e90a500abf26446ee8214e1ba00c467377d1036119f5ba0f1154b21737c146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-kxppn" May 27 03:23:55.079627 kubelet[2666]: E0527 03:23:55.079503 2666 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d964a4396cdae7bfe28fe7b2a0cb6abb204c4068d47ddea80a56298545313ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8568ddb668-jkxdz" May 27 03:23:55.079934 kubelet[2666]: E0527 03:23:55.079531 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d964a4396cdae7bfe28fe7b2a0cb6abb204c4068d47ddea80a56298545313ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8568ddb668-jkxdz" May 27 03:23:55.079934 kubelet[2666]: E0527 03:23:55.079565 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-kxppn_calico-system(cc9060dd-bed3-42dd-955b-36f0d660ea40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-kxppn_calico-system(cc9060dd-bed3-42dd-955b-36f0d660ea40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4e90a500abf26446ee8214e1ba00c467377d1036119f5ba0f1154b21737c146\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-kxppn" podUID="cc9060dd-bed3-42dd-955b-36f0d660ea40" May 27 03:23:55.079934 kubelet[2666]: E0527 03:23:55.079598 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8568ddb668-jkxdz_calico-apiserver(ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-8568ddb668-jkxdz_calico-apiserver(ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d964a4396cdae7bfe28fe7b2a0cb6abb204c4068d47ddea80a56298545313ee5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8568ddb668-jkxdz" podUID="ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f" May 27 03:23:55.080133 kubelet[2666]: E0527 03:23:55.079654 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa71c3b6ba343a7e540624af6bfb91596ebbe5c264bfd54a84e7e3b0241381cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.080133 kubelet[2666]: E0527 03:23:55.079677 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa71c3b6ba343a7e540624af6bfb91596ebbe5c264bfd54a84e7e3b0241381cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7465d6c55f-6nbj6" May 27 03:23:55.080133 kubelet[2666]: E0527 03:23:55.079694 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa71c3b6ba343a7e540624af6bfb91596ebbe5c264bfd54a84e7e3b0241381cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7465d6c55f-6nbj6" May 27 03:23:55.080299 kubelet[2666]: E0527 03:23:55.079731 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7465d6c55f-6nbj6_calico-system(1a3d3daf-895d-4366-a330-59c562670d8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7465d6c55f-6nbj6_calico-system(1a3d3daf-895d-4366-a330-59c562670d8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa71c3b6ba343a7e540624af6bfb91596ebbe5c264bfd54a84e7e3b0241381cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7465d6c55f-6nbj6" podUID="1a3d3daf-895d-4366-a330-59c562670d8f" May 27 03:23:55.080299 kubelet[2666]: E0527 03:23:55.079853 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5454da69c99fff172ce9df52ac7c7e33a32d7185569ca66060de35aab59ac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.080299 kubelet[2666]: E0527 03:23:55.079880 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5454da69c99fff172ce9df52ac7c7e33a32d7185569ca66060de35aab59ac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rmmk9" May 27 03:23:55.080442 kubelet[2666]: E0527 03:23:55.079894 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"1a5454da69c99fff172ce9df52ac7c7e33a32d7185569ca66060de35aab59ac8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rmmk9" May 27 03:23:55.080442 kubelet[2666]: E0527 03:23:55.079922 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rmmk9_kube-system(0bd63e18-7617-48c6-b7cb-0f923685a1d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rmmk9_kube-system(0bd63e18-7617-48c6-b7cb-0f923685a1d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a5454da69c99fff172ce9df52ac7c7e33a32d7185569ca66060de35aab59ac8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rmmk9" podUID="0bd63e18-7617-48c6-b7cb-0f923685a1d3" May 27 03:23:55.080442 kubelet[2666]: E0527 03:23:55.079947 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0962167d12ed5b0b58c7b2f17e6a2e17881ff69c54e4952d8d22001918b7ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.080542 kubelet[2666]: E0527 03:23:55.079963 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0962167d12ed5b0b58c7b2f17e6a2e17881ff69c54e4952d8d22001918b7ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/whisker-6d87c48bf7-rtnlj" May 27 03:23:55.080542 kubelet[2666]: E0527 03:23:55.079974 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0962167d12ed5b0b58c7b2f17e6a2e17881ff69c54e4952d8d22001918b7ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d87c48bf7-rtnlj" May 27 03:23:55.080542 kubelet[2666]: E0527 03:23:55.079996 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d87c48bf7-rtnlj_calico-system(6bb61fee-af34-4eb5-88db-e6dd1f66a33a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d87c48bf7-rtnlj_calico-system(6bb61fee-af34-4eb5-88db-e6dd1f66a33a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0962167d12ed5b0b58c7b2f17e6a2e17881ff69c54e4952d8d22001918b7ffd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d87c48bf7-rtnlj" podUID="6bb61fee-af34-4eb5-88db-e6dd1f66a33a" May 27 03:23:55.080621 kubelet[2666]: E0527 03:23:55.079383 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"712efc8f31fc56745e4f3cc3391d0f088a8204c96da137ba9d2df8edffd92097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.080621 kubelet[2666]: E0527 03:23:55.080021 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"712efc8f31fc56745e4f3cc3391d0f088a8204c96da137ba9d2df8edffd92097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f6m4x" May 27 03:23:55.080621 kubelet[2666]: E0527 03:23:55.080033 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"712efc8f31fc56745e4f3cc3391d0f088a8204c96da137ba9d2df8edffd92097\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f6m4x" May 27 03:23:55.080711 kubelet[2666]: E0527 03:23:55.080057 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-f6m4x_kube-system(06887eb6-6182-4a42-b24d-205b481aae1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-f6m4x_kube-system(06887eb6-6182-4a42-b24d-205b481aae1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"712efc8f31fc56745e4f3cc3391d0f088a8204c96da137ba9d2df8edffd92097\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f6m4x" podUID="06887eb6-6182-4a42-b24d-205b481aae1e" May 27 03:23:55.081269 kubelet[2666]: E0527 03:23:55.079769 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"036904c41e10e1e4bef323936f3d45e0f0ef1827d69a013ce04c81a6dabb2455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 27 03:23:55.081339 kubelet[2666]: E0527 03:23:55.081275 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"036904c41e10e1e4bef323936f3d45e0f0ef1827d69a013ce04c81a6dabb2455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8568ddb668-m4k58" May 27 03:23:55.081339 kubelet[2666]: E0527 03:23:55.081301 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"036904c41e10e1e4bef323936f3d45e0f0ef1827d69a013ce04c81a6dabb2455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8568ddb668-m4k58" May 27 03:23:55.081414 kubelet[2666]: E0527 03:23:55.081348 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8568ddb668-m4k58_calico-apiserver(32672c55-4bd7-4610-9392-e91638e32b95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8568ddb668-m4k58_calico-apiserver(32672c55-4bd7-4610-9392-e91638e32b95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"036904c41e10e1e4bef323936f3d45e0f0ef1827d69a013ce04c81a6dabb2455\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8568ddb668-m4k58" podUID="32672c55-4bd7-4610-9392-e91638e32b95" May 27 03:23:55.085512 containerd[1579]: time="2025-05-27T03:23:55.085412907Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55d66cb-lmvsk,Uid:f95accb9-2129-4c05-aa43-61cf3ea066cd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"737104175e0b0dd819ad447e9785856715c2c829e66410a087e25e5a97fbcee9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.085784 kubelet[2666]: E0527 03:23:55.085723 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737104175e0b0dd819ad447e9785856715c2c829e66410a087e25e5a97fbcee9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.085878 kubelet[2666]: E0527 03:23:55.085810 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737104175e0b0dd819ad447e9785856715c2c829e66410a087e25e5a97fbcee9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c55d66cb-lmvsk" May 27 03:23:55.085878 kubelet[2666]: E0527 03:23:55.085836 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737104175e0b0dd819ad447e9785856715c2c829e66410a087e25e5a97fbcee9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c55d66cb-lmvsk" May 27 03:23:55.086066 kubelet[2666]: E0527 
03:23:55.085967 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c55d66cb-lmvsk_calico-apiserver(f95accb9-2129-4c05-aa43-61cf3ea066cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c55d66cb-lmvsk_calico-apiserver(f95accb9-2129-4c05-aa43-61cf3ea066cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"737104175e0b0dd819ad447e9785856715c2c829e66410a087e25e5a97fbcee9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c55d66cb-lmvsk" podUID="f95accb9-2129-4c05-aa43-61cf3ea066cd" May 27 03:23:55.086927 containerd[1579]: time="2025-05-27T03:23:55.086822839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m8zk9,Uid:57dcdeb3-a1bc-43ee-a490-a9f0e2678204,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92cb6601f280ffe0ff55a002878983e23a6a46b12f01f3cadc0eb4fb9aa8483d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.087227 kubelet[2666]: E0527 03:23:55.087179 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92cb6601f280ffe0ff55a002878983e23a6a46b12f01f3cadc0eb4fb9aa8483d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:23:55.087278 kubelet[2666]: E0527 03:23:55.087244 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"92cb6601f280ffe0ff55a002878983e23a6a46b12f01f3cadc0eb4fb9aa8483d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m8zk9" May 27 03:23:55.087278 kubelet[2666]: E0527 03:23:55.087265 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92cb6601f280ffe0ff55a002878983e23a6a46b12f01f3cadc0eb4fb9aa8483d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m8zk9" May 27 03:23:55.087350 kubelet[2666]: E0527 03:23:55.087312 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m8zk9_calico-system(57dcdeb3-a1bc-43ee-a490-a9f0e2678204)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m8zk9_calico-system(57dcdeb3-a1bc-43ee-a490-a9f0e2678204)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92cb6601f280ffe0ff55a002878983e23a6a46b12f01f3cadc0eb4fb9aa8483d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m8zk9" podUID="57dcdeb3-a1bc-43ee-a490-a9f0e2678204" May 27 03:24:01.571167 kubelet[2666]: I0527 03:24:01.571112 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:02.538137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173103590.mount: Deactivated successfully. 
May 27 03:24:03.501074 containerd[1579]: time="2025-05-27T03:24:03.500896957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:03.516554 containerd[1579]: time="2025-05-27T03:24:03.516465210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:24:03.552800 containerd[1579]: time="2025-05-27T03:24:03.552730461Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:03.580333 containerd[1579]: time="2025-05-27T03:24:03.580286683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:03.581111 containerd[1579]: time="2025-05-27T03:24:03.581056158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 8.52894851s" May 27 03:24:03.581111 containerd[1579]: time="2025-05-27T03:24:03.581107855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 03:24:03.697997 containerd[1579]: time="2025-05-27T03:24:03.697910938Z" level=info msg="CreateContainer within sandbox \"20a05b7ce5d0423cf9039821e52a8954aad0f12da98242305e2fe2780fb53596\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 03:24:03.784943 containerd[1579]: time="2025-05-27T03:24:03.784714150Z" level=info msg="Container 
58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:03.917473 containerd[1579]: time="2025-05-27T03:24:03.917257209Z" level=info msg="CreateContainer within sandbox \"20a05b7ce5d0423cf9039821e52a8954aad0f12da98242305e2fe2780fb53596\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a\"" May 27 03:24:03.918576 containerd[1579]: time="2025-05-27T03:24:03.918514140Z" level=info msg="StartContainer for \"58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a\"" May 27 03:24:03.921244 containerd[1579]: time="2025-05-27T03:24:03.921146014Z" level=info msg="connecting to shim 58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a" address="unix:///run/containerd/s/68697300d6e2f38c593bc65073b26c999d9cd2cfab83bc166420769d7854038b" protocol=ttrpc version=3 May 27 03:24:03.952446 systemd[1]: Started cri-containerd-58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a.scope - libcontainer container 58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a. 
May 27 03:24:04.015724 containerd[1579]: time="2025-05-27T03:24:04.015661879Z" level=info msg="StartContainer for \"58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a\" returns successfully" May 27 03:24:04.065013 kubelet[2666]: I0527 03:24:04.064843 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rspc2" podStartSLOduration=2.5266643699999998 podStartE2EDuration="24.064825579s" podCreationTimestamp="2025-05-27 03:23:40 +0000 UTC" firstStartedPulling="2025-05-27 03:23:42.044349261 +0000 UTC m=+21.216898137" lastFinishedPulling="2025-05-27 03:24:03.58251047 +0000 UTC m=+42.755059346" observedRunningTime="2025-05-27 03:24:04.063631607 +0000 UTC m=+43.236180493" watchObservedRunningTime="2025-05-27 03:24:04.064825579 +0000 UTC m=+43.237374455" May 27 03:24:04.114633 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 03:24:04.115628 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 27 03:24:04.154297 containerd[1579]: time="2025-05-27T03:24:04.154228387Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a\" id:\"59fa3c131eb27e95121c4b7546e3420c039f13245307f32b10759ffb90831558\" pid:3825 exit_status:1 exited_at:{seconds:1748316244 nanos:153728287}" May 27 03:24:04.234792 kubelet[2666]: I0527 03:24:04.234739 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6bb61fee-af34-4eb5-88db-e6dd1f66a33a" (UID: "6bb61fee-af34-4eb5-88db-e6dd1f66a33a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 03:24:04.234986 kubelet[2666]: I0527 03:24:04.234849 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-whisker-ca-bundle\") pod \"6bb61fee-af34-4eb5-88db-e6dd1f66a33a\" (UID: \"6bb61fee-af34-4eb5-88db-e6dd1f66a33a\") "
May 27 03:24:04.234986 kubelet[2666]: I0527 03:24:04.234909 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-whisker-backend-key-pair\") pod \"6bb61fee-af34-4eb5-88db-e6dd1f66a33a\" (UID: \"6bb61fee-af34-4eb5-88db-e6dd1f66a33a\") "
May 27 03:24:04.234986 kubelet[2666]: I0527 03:24:04.234935 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stjb8\" (UniqueName: \"kubernetes.io/projected/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-kube-api-access-stjb8\") pod \"6bb61fee-af34-4eb5-88db-e6dd1f66a33a\" (UID: \"6bb61fee-af34-4eb5-88db-e6dd1f66a33a\") "
May 27 03:24:04.235394 kubelet[2666]: I0527 03:24:04.235372 2666 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
May 27 03:24:04.240362 kubelet[2666]: I0527 03:24:04.240309 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6bb61fee-af34-4eb5-88db-e6dd1f66a33a" (UID: "6bb61fee-af34-4eb5-88db-e6dd1f66a33a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 03:24:04.241013 kubelet[2666]: I0527 03:24:04.240964 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-kube-api-access-stjb8" (OuterVolumeSpecName: "kube-api-access-stjb8") pod "6bb61fee-af34-4eb5-88db-e6dd1f66a33a" (UID: "6bb61fee-af34-4eb5-88db-e6dd1f66a33a"). InnerVolumeSpecName "kube-api-access-stjb8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 03:24:04.335725 kubelet[2666]: I0527 03:24:04.335576 2666 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
May 27 03:24:04.335725 kubelet[2666]: I0527 03:24:04.335617 2666 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-stjb8\" (UniqueName: \"kubernetes.io/projected/6bb61fee-af34-4eb5-88db-e6dd1f66a33a-kube-api-access-stjb8\") on node \"localhost\" DevicePath \"\""
May 27 03:24:04.590457 systemd[1]: var-lib-kubelet-pods-6bb61fee\x2daf34\x2d4eb5\x2d88db\x2de6dd1f66a33a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dstjb8.mount: Deactivated successfully.
May 27 03:24:04.590576 systemd[1]: var-lib-kubelet-pods-6bb61fee\x2daf34\x2d4eb5\x2d88db\x2de6dd1f66a33a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
May 27 03:24:04.664122 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:47736.service - OpenSSH per-connection server daemon (10.0.0.1:47736).
May 27 03:24:04.766071 sshd[3867]: Accepted publickey for core from 10.0.0.1 port 47736 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:24:04.767673 sshd-session[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:04.778419 systemd-logind[1554]: New session 8 of user core.
May 27 03:24:04.797392 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 03:24:04.937500 systemd[1]: Removed slice kubepods-besteffort-pod6bb61fee_af34_4eb5_88db_e6dd1f66a33a.slice - libcontainer container kubepods-besteffort-pod6bb61fee_af34_4eb5_88db_e6dd1f66a33a.slice.
May 27 03:24:04.961520 sshd[3869]: Connection closed by 10.0.0.1 port 47736
May 27 03:24:04.961832 sshd-session[3867]: pam_unix(sshd:session): session closed for user core
May 27 03:24:04.966051 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:47736.service: Deactivated successfully.
May 27 03:24:04.968628 systemd[1]: session-8.scope: Deactivated successfully.
May 27 03:24:04.971351 systemd-logind[1554]: Session 8 logged out. Waiting for processes to exit.
May 27 03:24:04.972753 systemd-logind[1554]: Removed session 8.
May 27 03:24:05.145244 containerd[1579]: time="2025-05-27T03:24:05.144919509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a\" id:\"7131d50254fff581e257d33c168a270e39cd3dce0683ec2dda87bb18ba1120c1\" pid:3895 exit_status:1 exited_at:{seconds:1748316245 nanos:143160085}"
May 27 03:24:05.341832 kubelet[2666]: I0527 03:24:05.341648 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb9167b-8371-4b66-b231-77c5171049b6-whisker-ca-bundle\") pod \"whisker-57667c847c-4twcl\" (UID: \"2eb9167b-8371-4b66-b231-77c5171049b6\") " pod="calico-system/whisker-57667c847c-4twcl"
May 27 03:24:05.341832 kubelet[2666]: I0527 03:24:05.341706 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2eb9167b-8371-4b66-b231-77c5171049b6-whisker-backend-key-pair\") pod \"whisker-57667c847c-4twcl\" (UID: \"2eb9167b-8371-4b66-b231-77c5171049b6\") " pod="calico-system/whisker-57667c847c-4twcl"
May 27 03:24:05.341832 kubelet[2666]: I0527 03:24:05.341733 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpttd\" (UniqueName: \"kubernetes.io/projected/2eb9167b-8371-4b66-b231-77c5171049b6-kube-api-access-gpttd\") pod \"whisker-57667c847c-4twcl\" (UID: \"2eb9167b-8371-4b66-b231-77c5171049b6\") " pod="calico-system/whisker-57667c847c-4twcl"
May 27 03:24:05.346443 systemd[1]: Created slice kubepods-besteffort-pod2eb9167b_8371_4b66_b231_77c5171049b6.slice - libcontainer container kubepods-besteffort-pod2eb9167b_8371_4b66_b231_77c5171049b6.slice.
May 27 03:24:05.651120 containerd[1579]: time="2025-05-27T03:24:05.650683240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57667c847c-4twcl,Uid:2eb9167b-8371-4b66-b231-77c5171049b6,Namespace:calico-system,Attempt:0,}"
May 27 03:24:05.922394 containerd[1579]: time="2025-05-27T03:24:05.922167587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8568ddb668-m4k58,Uid:32672c55-4bd7-4610-9392-e91638e32b95,Namespace:calico-apiserver,Attempt:0,}"
May 27 03:24:05.922394 containerd[1579]: time="2025-05-27T03:24:05.922167607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m8zk9,Uid:57dcdeb3-a1bc-43ee-a490-a9f0e2678204,Namespace:calico-system,Attempt:0,}"
May 27 03:24:05.922905 containerd[1579]: time="2025-05-27T03:24:05.922849097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-kxppn,Uid:cc9060dd-bed3-42dd-955b-36f0d660ea40,Namespace:calico-system,Attempt:0,}"
May 27 03:24:06.169195 systemd-networkd[1484]: vxlan.calico: Link UP
May 27 03:24:06.169230 systemd-networkd[1484]: vxlan.calico: Gained carrier
May 27 03:24:06.462958 systemd-networkd[1484]: cali766cafadde3: Link UP
May 27 03:24:06.463517 systemd-networkd[1484]: cali766cafadde3: Gained carrier
May 27 03:24:06.488063 containerd[1579]: 2025-05-27 03:24:06.221 [INFO][4066] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--57667c847c--4twcl-eth0 whisker-57667c847c- calico-system 2eb9167b-8371-4b66-b231-77c5171049b6 942 0 2025-05-27 03:24:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57667c847c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-57667c847c-4twcl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali766cafadde3 [] [] }} ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Namespace="calico-system" Pod="whisker-57667c847c-4twcl" WorkloadEndpoint="localhost-k8s-whisker--57667c847c--4twcl-"
May 27 03:24:06.488063 containerd[1579]: 2025-05-27 03:24:06.221 [INFO][4066] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Namespace="calico-system" Pod="whisker-57667c847c-4twcl" WorkloadEndpoint="localhost-k8s-whisker--57667c847c--4twcl-eth0"
May 27 03:24:06.488063 containerd[1579]: 2025-05-27 03:24:06.391 [INFO][4134] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" HandleID="k8s-pod-network.dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Workload="localhost-k8s-whisker--57667c847c--4twcl-eth0"
May 27 03:24:06.488787 containerd[1579]: 2025-05-27 03:24:06.392 [INFO][4134] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" HandleID="k8s-pod-network.dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Workload="localhost-k8s-whisker--57667c847c--4twcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e0550), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-57667c847c-4twcl", "timestamp":"2025-05-27 03:24:06.391096597 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 27 03:24:06.488787 containerd[1579]: 2025-05-27 03:24:06.392 [INFO][4134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 27 03:24:06.488787 containerd[1579]: 2025-05-27 03:24:06.392 [INFO][4134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 27 03:24:06.488787 containerd[1579]: 2025-05-27 03:24:06.392 [INFO][4134] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 27 03:24:06.488787 containerd[1579]: 2025-05-27 03:24:06.408 [INFO][4134] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" host="localhost"
May 27 03:24:06.488787 containerd[1579]: 2025-05-27 03:24:06.421 [INFO][4134] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
May 27 03:24:06.488787 containerd[1579]: 2025-05-27 03:24:06.431 [INFO][4134] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
May 27 03:24:06.488787 containerd[1579]: 2025-05-27 03:24:06.434 [INFO][4134] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 27 03:24:06.488787 containerd[1579]: 2025-05-27 03:24:06.437 [INFO][4134] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 27 03:24:06.488787 containerd[1579]: 2025-05-27 03:24:06.437 [INFO][4134] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" host="localhost"
May 27 03:24:06.489094 containerd[1579]: 2025-05-27 03:24:06.439 [INFO][4134] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1
May 27 03:24:06.489094 containerd[1579]: 2025-05-27 03:24:06.448 [INFO][4134] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" host="localhost"
May 27 03:24:06.489094 containerd[1579]: 2025-05-27 03:24:06.454 [INFO][4134] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" host="localhost"
May 27 03:24:06.489094 containerd[1579]: 2025-05-27 03:24:06.454 [INFO][4134] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" host="localhost"
May 27 03:24:06.489094 containerd[1579]: 2025-05-27 03:24:06.454 [INFO][4134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 27 03:24:06.489094 containerd[1579]: 2025-05-27 03:24:06.454 [INFO][4134] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" HandleID="k8s-pod-network.dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Workload="localhost-k8s-whisker--57667c847c--4twcl-eth0"
May 27 03:24:06.489279 containerd[1579]: 2025-05-27 03:24:06.458 [INFO][4066] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Namespace="calico-system" Pod="whisker-57667c847c-4twcl" WorkloadEndpoint="localhost-k8s-whisker--57667c847c--4twcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57667c847c--4twcl-eth0", GenerateName:"whisker-57667c847c-", Namespace:"calico-system", SelfLink:"", UID:"2eb9167b-8371-4b66-b231-77c5171049b6", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57667c847c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-57667c847c-4twcl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali766cafadde3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 27 03:24:06.489279 containerd[1579]: 2025-05-27 03:24:06.458 [INFO][4066] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Namespace="calico-system" Pod="whisker-57667c847c-4twcl" WorkloadEndpoint="localhost-k8s-whisker--57667c847c--4twcl-eth0"
May 27 03:24:06.489382 containerd[1579]: 2025-05-27 03:24:06.458 [INFO][4066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali766cafadde3 ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Namespace="calico-system" Pod="whisker-57667c847c-4twcl" WorkloadEndpoint="localhost-k8s-whisker--57667c847c--4twcl-eth0"
May 27 03:24:06.489382 containerd[1579]: 2025-05-27 03:24:06.464 [INFO][4066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Namespace="calico-system" Pod="whisker-57667c847c-4twcl" WorkloadEndpoint="localhost-k8s-whisker--57667c847c--4twcl-eth0"
May 27 03:24:06.489444 containerd[1579]: 2025-05-27 03:24:06.465 [INFO][4066] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Namespace="calico-system" Pod="whisker-57667c847c-4twcl" WorkloadEndpoint="localhost-k8s-whisker--57667c847c--4twcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57667c847c--4twcl-eth0", GenerateName:"whisker-57667c847c-", Namespace:"calico-system", SelfLink:"", UID:"2eb9167b-8371-4b66-b231-77c5171049b6", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57667c847c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1", Pod:"whisker-57667c847c-4twcl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali766cafadde3", MAC:"22:5e:20:49:dd:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 27 03:24:06.489512 containerd[1579]: 2025-05-27 03:24:06.482 [INFO][4066] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" Namespace="calico-system" Pod="whisker-57667c847c-4twcl" WorkloadEndpoint="localhost-k8s-whisker--57667c847c--4twcl-eth0"
May 27 03:24:06.581740 systemd-networkd[1484]: cali38a2d139e45: Link UP
May 27 03:24:06.584187 systemd-networkd[1484]: cali38a2d139e45: Gained carrier
May 27 03:24:06.694230 containerd[1579]: 2025-05-27 03:24:06.250 [INFO][4078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--m8zk9-eth0 csi-node-driver- calico-system 57dcdeb3-a1bc-43ee-a490-a9f0e2678204 710 0 2025-05-27 03:23:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-m8zk9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali38a2d139e45 [] [] }} ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Namespace="calico-system" Pod="csi-node-driver-m8zk9" WorkloadEndpoint="localhost-k8s-csi--node--driver--m8zk9-"
May 27 03:24:06.694230 containerd[1579]: 2025-05-27 03:24:06.251 [INFO][4078] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Namespace="calico-system" Pod="csi-node-driver-m8zk9" WorkloadEndpoint="localhost-k8s-csi--node--driver--m8zk9-eth0"
May 27 03:24:06.694230 containerd[1579]: 2025-05-27 03:24:06.391 [INFO][4142] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" HandleID="k8s-pod-network.5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Workload="localhost-k8s-csi--node--driver--m8zk9-eth0"
May 27 03:24:06.694585 containerd[1579]: 2025-05-27 03:24:06.392 [INFO][4142] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" HandleID="k8s-pod-network.5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Workload="localhost-k8s-csi--node--driver--m8zk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000be790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-m8zk9", "timestamp":"2025-05-27 03:24:06.391478083 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 27 03:24:06.694585 containerd[1579]: 2025-05-27 03:24:06.392 [INFO][4142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 27 03:24:06.694585 containerd[1579]: 2025-05-27 03:24:06.454 [INFO][4142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 27 03:24:06.694585 containerd[1579]: 2025-05-27 03:24:06.454 [INFO][4142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 27 03:24:06.694585 containerd[1579]: 2025-05-27 03:24:06.509 [INFO][4142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" host="localhost"
May 27 03:24:06.694585 containerd[1579]: 2025-05-27 03:24:06.522 [INFO][4142] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
May 27 03:24:06.694585 containerd[1579]: 2025-05-27 03:24:06.532 [INFO][4142] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
May 27 03:24:06.694585 containerd[1579]: 2025-05-27 03:24:06.539 [INFO][4142] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 27 03:24:06.694585 containerd[1579]: 2025-05-27 03:24:06.542 [INFO][4142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 27 03:24:06.694585 containerd[1579]: 2025-05-27 03:24:06.542 [INFO][4142] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" host="localhost"
May 27 03:24:06.695224 containerd[1579]: 2025-05-27 03:24:06.544 [INFO][4142] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1
May 27 03:24:06.695224 containerd[1579]: 2025-05-27 03:24:06.555 [INFO][4142] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" host="localhost"
May 27 03:24:06.695224 containerd[1579]: 2025-05-27 03:24:06.566 [INFO][4142] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" host="localhost"
May 27 03:24:06.695224 containerd[1579]: 2025-05-27 03:24:06.567 [INFO][4142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" host="localhost"
May 27 03:24:06.695224 containerd[1579]: 2025-05-27 03:24:06.567 [INFO][4142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 27 03:24:06.695224 containerd[1579]: 2025-05-27 03:24:06.567 [INFO][4142] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" HandleID="k8s-pod-network.5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Workload="localhost-k8s-csi--node--driver--m8zk9-eth0"
May 27 03:24:06.695407 containerd[1579]: 2025-05-27 03:24:06.575 [INFO][4078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Namespace="calico-system" Pod="csi-node-driver-m8zk9" WorkloadEndpoint="localhost-k8s-csi--node--driver--m8zk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m8zk9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57dcdeb3-a1bc-43ee-a490-a9f0e2678204", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-m8zk9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38a2d139e45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 27 03:24:06.695478 containerd[1579]: 2025-05-27 03:24:06.575 [INFO][4078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Namespace="calico-system" Pod="csi-node-driver-m8zk9" WorkloadEndpoint="localhost-k8s-csi--node--driver--m8zk9-eth0"
May 27 03:24:06.695478 containerd[1579]: 2025-05-27 03:24:06.575 [INFO][4078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38a2d139e45 ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Namespace="calico-system" Pod="csi-node-driver-m8zk9" WorkloadEndpoint="localhost-k8s-csi--node--driver--m8zk9-eth0"
May 27 03:24:06.695478 containerd[1579]: 2025-05-27 03:24:06.583 [INFO][4078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Namespace="calico-system" Pod="csi-node-driver-m8zk9" WorkloadEndpoint="localhost-k8s-csi--node--driver--m8zk9-eth0"
May 27 03:24:06.696501 containerd[1579]: 2025-05-27 03:24:06.585 [INFO][4078] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Namespace="calico-system" Pod="csi-node-driver-m8zk9" WorkloadEndpoint="localhost-k8s-csi--node--driver--m8zk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m8zk9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"57dcdeb3-a1bc-43ee-a490-a9f0e2678204", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1", Pod:"csi-node-driver-m8zk9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38a2d139e45", MAC:"be:32:09:59:ea:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 27 03:24:06.696579 containerd[1579]: 2025-05-27 03:24:06.690 [INFO][4078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" Namespace="calico-system" Pod="csi-node-driver-m8zk9" WorkloadEndpoint="localhost-k8s-csi--node--driver--m8zk9-eth0"
May 27 03:24:06.782962 systemd-networkd[1484]: cali00772f80b6b: Link UP
May 27 03:24:06.783598 systemd-networkd[1484]: cali00772f80b6b: Gained carrier
May 27 03:24:06.799501 containerd[1579]: time="2025-05-27T03:24:06.799434911Z" level=info msg="connecting to shim dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1" address="unix:///run/containerd/s/3a1502b5a5115fe53fd74ede1e217ffbe4edcec6bf3f44998209121641f56266" namespace=k8s.io protocol=ttrpc version=3
May 27 03:24:06.840451 systemd[1]: Started cri-containerd-dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1.scope - libcontainer container dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1.
May 27 03:24:06.862242 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 27 03:24:06.868879 containerd[1579]: time="2025-05-27T03:24:06.868811702Z" level=info msg="connecting to shim 5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1" address="unix:///run/containerd/s/89d30ebd2651e5c046269c89ea348b1b95f947565abe47dba0824660faf25941" namespace=k8s.io protocol=ttrpc version=3
May 27 03:24:06.872288 containerd[1579]: 2025-05-27 03:24:06.221 [INFO][4053] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0 calico-apiserver-8568ddb668- calico-apiserver 32672c55-4bd7-4610-9392-e91638e32b95 828 0 2025-05-27 03:23:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8568ddb668 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8568ddb668-m4k58 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali00772f80b6b [] [] }} ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-m4k58" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--m4k58-"
May 27 03:24:06.872288 containerd[1579]: 2025-05-27 03:24:06.221 [INFO][4053] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-m4k58" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0"
May 27 03:24:06.872288 containerd[1579]: 2025-05-27 03:24:06.390 [INFO][4140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" HandleID="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Workload="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0"
May 27 03:24:06.872771 containerd[1579]: 2025-05-27 03:24:06.393 [INFO][4140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" HandleID="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Workload="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8568ddb668-m4k58", "timestamp":"2025-05-27 03:24:06.390636833 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 27 03:24:06.872771 containerd[1579]: 2025-05-27 03:24:06.393 [INFO][4140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 27 03:24:06.872771 containerd[1579]: 2025-05-27 03:24:06.567 [INFO][4140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 27 03:24:06.872771 containerd[1579]: 2025-05-27 03:24:06.568 [INFO][4140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 27 03:24:06.872771 containerd[1579]: 2025-05-27 03:24:06.690 [INFO][4140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" host="localhost"
May 27 03:24:06.872771 containerd[1579]: 2025-05-27 03:24:06.713 [INFO][4140] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
May 27 03:24:06.872771 containerd[1579]: 2025-05-27 03:24:06.723 [INFO][4140] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
May 27 03:24:06.872771 containerd[1579]: 2025-05-27 03:24:06.727 [INFO][4140] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 27 03:24:06.872771 containerd[1579]: 2025-05-27 03:24:06.731 [INFO][4140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 27 03:24:06.872771 containerd[1579]: 2025-05-27 03:24:06.731 [INFO][4140] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" host="localhost"
May 27 03:24:06.874033 containerd[1579]: 2025-05-27 03:24:06.733 [INFO][4140] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a
May 27 03:24:06.874033 containerd[1579]: 2025-05-27 03:24:06.755 [INFO][4140] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" host="localhost"
May 27 03:24:06.874033 containerd[1579]: 2025-05-27 03:24:06.775 [INFO][4140] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" host="localhost"
May 27 03:24:06.874033 containerd[1579]: 2025-05-27 03:24:06.775 [INFO][4140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" host="localhost"
May 27 03:24:06.874033 containerd[1579]: 2025-05-27 03:24:06.775 [INFO][4140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 27 03:24:06.874033 containerd[1579]: 2025-05-27 03:24:06.775 [INFO][4140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" HandleID="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Workload="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0"
May 27 03:24:06.874519 containerd[1579]: 2025-05-27 03:24:06.778 [INFO][4053] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-m4k58" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0", GenerateName:"calico-apiserver-8568ddb668-", Namespace:"calico-apiserver", SelfLink:"", UID:"32672c55-4bd7-4610-9392-e91638e32b95", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8568ddb668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8568ddb668-m4k58", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00772f80b6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 27 03:24:06.874600 containerd[1579]: 2025-05-27 03:24:06.778 [INFO][4053] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-m4k58" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0"
May 27 03:24:06.874600 containerd[1579]: 2025-05-27 03:24:06.778 [INFO][4053] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00772f80b6b ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-m4k58" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0"
May 27 03:24:06.874600 containerd[1579]: 2025-05-27 03:24:06.784 [INFO][4053] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-m4k58" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0"
May 27 03:24:06.874689 containerd[1579]: 2025-05-27 03:24:06.784 [INFO][4053] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-m4k58" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0", GenerateName:"calico-apiserver-8568ddb668-", Namespace:"calico-apiserver", SelfLink:"", UID:"32672c55-4bd7-4610-9392-e91638e32b95", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8568ddb668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a", Pod:"calico-apiserver-8568ddb668-m4k58", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00772f80b6b", MAC:"12:47:98:ec:89:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 27 03:24:06.874761 containerd[1579]: 2025-05-27 03:24:06.859 [INFO][4053] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore
ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-m4k58" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0" May 27 03:24:06.925233 containerd[1579]: time="2025-05-27T03:24:06.924446285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8568ddb668-jkxdz,Uid:ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f,Namespace:calico-apiserver,Attempt:0,}" May 27 03:24:06.927231 containerd[1579]: time="2025-05-27T03:24:06.925978032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmmk9,Uid:0bd63e18-7617-48c6-b7cb-0f923685a1d3,Namespace:kube-system,Attempt:0,}" May 27 03:24:06.927623 containerd[1579]: time="2025-05-27T03:24:06.926778495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57667c847c-4twcl,Uid:2eb9167b-8371-4b66-b231-77c5171049b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcdc30d301cf2a844aecf8ec554ec9d80cd39f8e809774abcd1f149ed63e62f1\"" May 27 03:24:06.928830 kubelet[2666]: I0527 03:24:06.928800 2666 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb61fee-af34-4eb5-88db-e6dd1f66a33a" path="/var/lib/kubelet/pods/6bb61fee-af34-4eb5-88db-e6dd1f66a33a/volumes" May 27 03:24:06.930575 containerd[1579]: time="2025-05-27T03:24:06.930438008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:24:06.951708 systemd[1]: Started cri-containerd-5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1.scope - libcontainer container 5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1. 
May 27 03:24:06.951819 systemd-networkd[1484]: calif1702fc535b: Link UP May 27 03:24:06.953398 systemd-networkd[1484]: calif1702fc535b: Gained carrier May 27 03:24:06.972838 containerd[1579]: time="2025-05-27T03:24:06.972614770Z" level=info msg="connecting to shim b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" address="unix:///run/containerd/s/dec01fc805bb4cd3fa0e1702862c2b67e7bfcd1f07ac28e44dc8af7d282d4127" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:06.987759 containerd[1579]: 2025-05-27 03:24:06.290 [INFO][4113] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0 goldmane-78d55f7ddc- calico-system cc9060dd-bed3-42dd-955b-36f0d660ea40 829 0 2025-05-27 03:23:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-kxppn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif1702fc535b [] [] }} ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kxppn" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kxppn-" May 27 03:24:06.987759 containerd[1579]: 2025-05-27 03:24:06.291 [INFO][4113] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kxppn" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0" May 27 03:24:06.987759 containerd[1579]: 2025-05-27 03:24:06.392 [INFO][4157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" 
HandleID="k8s-pod-network.154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Workload="localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0" May 27 03:24:06.988575 containerd[1579]: 2025-05-27 03:24:06.393 [INFO][4157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" HandleID="k8s-pod-network.154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Workload="localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004feb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-kxppn", "timestamp":"2025-05-27 03:24:06.392016384 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:06.988575 containerd[1579]: 2025-05-27 03:24:06.393 [INFO][4157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:06.988575 containerd[1579]: 2025-05-27 03:24:06.775 [INFO][4157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:06.988575 containerd[1579]: 2025-05-27 03:24:06.775 [INFO][4157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:24:06.988575 containerd[1579]: 2025-05-27 03:24:06.857 [INFO][4157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" host="localhost" May 27 03:24:06.988575 containerd[1579]: 2025-05-27 03:24:06.872 [INFO][4157] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:24:06.988575 containerd[1579]: 2025-05-27 03:24:06.885 [INFO][4157] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:24:06.988575 containerd[1579]: 2025-05-27 03:24:06.891 [INFO][4157] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:24:06.988575 containerd[1579]: 2025-05-27 03:24:06.905 [INFO][4157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:24:06.988575 containerd[1579]: 2025-05-27 03:24:06.905 [INFO][4157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" host="localhost" May 27 03:24:06.988823 containerd[1579]: 2025-05-27 03:24:06.912 [INFO][4157] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283 May 27 03:24:06.988823 containerd[1579]: 2025-05-27 03:24:06.920 [INFO][4157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" host="localhost" May 27 03:24:06.988823 containerd[1579]: 2025-05-27 03:24:06.937 [INFO][4157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" host="localhost" May 27 03:24:06.988823 containerd[1579]: 2025-05-27 03:24:06.937 [INFO][4157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" host="localhost" May 27 03:24:06.988823 containerd[1579]: 2025-05-27 03:24:06.937 [INFO][4157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:06.988823 containerd[1579]: 2025-05-27 03:24:06.937 [INFO][4157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" HandleID="k8s-pod-network.154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Workload="localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0" May 27 03:24:06.988959 containerd[1579]: 2025-05-27 03:24:06.943 [INFO][4113] cni-plugin/k8s.go 418: Populated endpoint ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kxppn" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"cc9060dd-bed3-42dd-955b-36f0d660ea40", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-kxppn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1702fc535b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:06.988959 containerd[1579]: 2025-05-27 03:24:06.944 [INFO][4113] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kxppn" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0" May 27 03:24:06.989038 containerd[1579]: 2025-05-27 03:24:06.944 [INFO][4113] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1702fc535b ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kxppn" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0" May 27 03:24:06.989038 containerd[1579]: 2025-05-27 03:24:06.954 [INFO][4113] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kxppn" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0" May 27 03:24:06.989080 containerd[1579]: 2025-05-27 03:24:06.955 [INFO][4113] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kxppn" 
WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"cc9060dd-bed3-42dd-955b-36f0d660ea40", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283", Pod:"goldmane-78d55f7ddc-kxppn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1702fc535b", MAC:"72:62:89:d1:5c:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:06.989127 containerd[1579]: 2025-05-27 03:24:06.973 [INFO][4113] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" Namespace="calico-system" Pod="goldmane-78d55f7ddc-kxppn" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--kxppn-eth0" May 27 03:24:07.001753 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No 
such device or address May 27 03:24:07.020575 systemd[1]: Started cri-containerd-b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a.scope - libcontainer container b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a. May 27 03:24:07.053317 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:24:07.056152 containerd[1579]: time="2025-05-27T03:24:07.056097240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m8zk9,Uid:57dcdeb3-a1bc-43ee-a490-a9f0e2678204,Namespace:calico-system,Attempt:0,} returns sandbox id \"5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1\"" May 27 03:24:07.065893 containerd[1579]: time="2025-05-27T03:24:07.065814265Z" level=info msg="connecting to shim 154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283" address="unix:///run/containerd/s/93bd0e8f727e22d0137e7a38841250899e4d76926ae047aedacec6ae7d418fdc" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:07.106644 systemd[1]: Started cri-containerd-154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283.scope - libcontainer container 154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283. 
May 27 03:24:07.140524 containerd[1579]: time="2025-05-27T03:24:07.140457103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8568ddb668-m4k58,Uid:32672c55-4bd7-4610-9392-e91638e32b95,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a\"" May 27 03:24:07.142595 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:24:07.200743 systemd-networkd[1484]: cali06b3ba643ae: Link UP May 27 03:24:07.201640 containerd[1579]: time="2025-05-27T03:24:07.200890309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-kxppn,Uid:cc9060dd-bed3-42dd-955b-36f0d660ea40,Namespace:calico-system,Attempt:0,} returns sandbox id \"154cad189a3ee7477a063f115bd807a791a54716958fc925f69a232e11c50283\"" May 27 03:24:07.201047 systemd-networkd[1484]: cali06b3ba643ae: Gained carrier May 27 03:24:07.241280 containerd[1579]: 2025-05-27 03:24:07.044 [INFO][4323] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0 coredns-674b8bbfcf- kube-system 0bd63e18-7617-48c6-b7cb-0f923685a1d3 820 0 2025-05-27 03:23:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-rmmk9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali06b3ba643ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmmk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmmk9-" May 27 03:24:07.241280 containerd[1579]: 2025-05-27 03:24:07.045 [INFO][4323] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmmk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0" May 27 03:24:07.241280 containerd[1579]: 2025-05-27 03:24:07.096 [INFO][4412] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" HandleID="k8s-pod-network.93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Workload="localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0" May 27 03:24:07.241562 containerd[1579]: 2025-05-27 03:24:07.097 [INFO][4412] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" HandleID="k8s-pod-network.93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Workload="localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000124e20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-rmmk9", "timestamp":"2025-05-27 03:24:07.096820868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:07.241562 containerd[1579]: 2025-05-27 03:24:07.097 [INFO][4412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:07.241562 containerd[1579]: 2025-05-27 03:24:07.097 [INFO][4412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:07.241562 containerd[1579]: 2025-05-27 03:24:07.097 [INFO][4412] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:24:07.241562 containerd[1579]: 2025-05-27 03:24:07.106 [INFO][4412] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" host="localhost" May 27 03:24:07.241562 containerd[1579]: 2025-05-27 03:24:07.135 [INFO][4412] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:24:07.241562 containerd[1579]: 2025-05-27 03:24:07.149 [INFO][4412] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:24:07.241562 containerd[1579]: 2025-05-27 03:24:07.153 [INFO][4412] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:24:07.241562 containerd[1579]: 2025-05-27 03:24:07.158 [INFO][4412] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:24:07.241562 containerd[1579]: 2025-05-27 03:24:07.158 [INFO][4412] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" host="localhost" May 27 03:24:07.241864 containerd[1579]: 2025-05-27 03:24:07.162 [INFO][4412] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794 May 27 03:24:07.241864 containerd[1579]: 2025-05-27 03:24:07.171 [INFO][4412] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" host="localhost" May 27 03:24:07.241864 containerd[1579]: 2025-05-27 03:24:07.183 [INFO][4412] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" host="localhost" May 27 03:24:07.241864 containerd[1579]: 2025-05-27 03:24:07.183 [INFO][4412] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" host="localhost" May 27 03:24:07.241864 containerd[1579]: 2025-05-27 03:24:07.183 [INFO][4412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:07.241864 containerd[1579]: 2025-05-27 03:24:07.183 [INFO][4412] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" HandleID="k8s-pod-network.93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Workload="localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0" May 27 03:24:07.242046 containerd[1579]: 2025-05-27 03:24:07.195 [INFO][4323] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmmk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0bd63e18-7617-48c6-b7cb-0f923685a1d3", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-rmmk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06b3ba643ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:07.242145 containerd[1579]: 2025-05-27 03:24:07.195 [INFO][4323] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmmk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0" May 27 03:24:07.242145 containerd[1579]: 2025-05-27 03:24:07.195 [INFO][4323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06b3ba643ae ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmmk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0" May 27 03:24:07.242145 containerd[1579]: 2025-05-27 03:24:07.201 [INFO][4323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmmk9" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0" May 27 03:24:07.242268 containerd[1579]: 2025-05-27 03:24:07.202 [INFO][4323] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmmk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0bd63e18-7617-48c6-b7cb-0f923685a1d3", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794", Pod:"coredns-674b8bbfcf-rmmk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06b3ba643ae", MAC:"56:e0:6c:01:c3:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:07.242268 containerd[1579]: 2025-05-27 03:24:07.235 [INFO][4323] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmmk9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmmk9-eth0" May 27 03:24:07.274105 containerd[1579]: time="2025-05-27T03:24:07.273011532Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:07.348500 containerd[1579]: time="2025-05-27T03:24:07.348347240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:24:07.357115 containerd[1579]: time="2025-05-27T03:24:07.356927702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:07.357407 kubelet[2666]: E0527 03:24:07.357323 2666 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed 
to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:07.357619 kubelet[2666]: E0527 03:24:07.357420 2666 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:07.358550 systemd-networkd[1484]: cali8975392843e: Link UP May 27 03:24:07.359446 containerd[1579]: time="2025-05-27T03:24:07.359303202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:24:07.360148 systemd-networkd[1484]: cali8975392843e: Gained carrier May 27 03:24:07.366511 kubelet[2666]: E0527 03:24:07.366102 2666 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ec74fc749bc44b2b952a8b8671ba8094,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gpttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57667c847c-4twcl_calico-system(2eb9167b-8371-4b66-b231-77c5171049b6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 
03:24:07.072 [INFO][4326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0 calico-apiserver-8568ddb668- calico-apiserver ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f 827 0 2025-05-27 03:23:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8568ddb668 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8568ddb668-jkxdz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8975392843e [] [] }} ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-jkxdz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.073 [INFO][4326] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-jkxdz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.133 [INFO][4430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" HandleID="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Workload="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.134 [INFO][4430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" HandleID="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" 
Workload="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005902c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8568ddb668-jkxdz", "timestamp":"2025-05-27 03:24:07.133742806 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.134 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.184 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.184 [INFO][4430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.219 [INFO][4430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" host="localhost" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.242 [INFO][4430] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.255 [INFO][4430] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.259 [INFO][4430] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.263 [INFO][4430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.263 [INFO][4430] ipam/ipam.go 1220: Attempting to assign 1 addresses from 
block block=192.168.88.128/26 handle="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" host="localhost" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.267 [INFO][4430] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.317 [INFO][4430] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" host="localhost" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.351 [INFO][4430] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" host="localhost" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.351 [INFO][4430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" host="localhost" May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.351 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:07.406616 containerd[1579]: 2025-05-27 03:24:07.351 [INFO][4430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" HandleID="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Workload="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" May 27 03:24:07.407441 containerd[1579]: 2025-05-27 03:24:07.354 [INFO][4326] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-jkxdz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0", GenerateName:"calico-apiserver-8568ddb668-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8568ddb668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8568ddb668-jkxdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8975392843e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:07.407441 containerd[1579]: 2025-05-27 03:24:07.354 [INFO][4326] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-jkxdz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" May 27 03:24:07.407441 containerd[1579]: 2025-05-27 03:24:07.354 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8975392843e ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-jkxdz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" May 27 03:24:07.407441 containerd[1579]: 2025-05-27 03:24:07.360 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-jkxdz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" May 27 03:24:07.407441 containerd[1579]: 2025-05-27 03:24:07.365 [INFO][4326] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-jkxdz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0", GenerateName:"calico-apiserver-8568ddb668-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8568ddb668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e", Pod:"calico-apiserver-8568ddb668-jkxdz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8975392843e", MAC:"da:50:82:c2:80:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:07.407441 containerd[1579]: 2025-05-27 03:24:07.403 [INFO][4326] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Namespace="calico-apiserver" Pod="calico-apiserver-8568ddb668-jkxdz" WorkloadEndpoint="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" May 27 03:24:07.452092 containerd[1579]: time="2025-05-27T03:24:07.451806557Z" level=info msg="connecting to shim 93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794" address="unix:///run/containerd/s/95e8e5e14cfb77009576de11381677c88cac33fab9d06a29ced7bf0cc0904b3d" namespace=k8s.io protocol=ttrpc 
version=3 May 27 03:24:07.492567 systemd[1]: Started cri-containerd-93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794.scope - libcontainer container 93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794. May 27 03:24:07.507814 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:24:07.548465 systemd-networkd[1484]: cali766cafadde3: Gained IPv6LL May 27 03:24:07.610487 containerd[1579]: time="2025-05-27T03:24:07.610340187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmmk9,Uid:0bd63e18-7617-48c6-b7cb-0f923685a1d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794\"" May 27 03:24:07.690357 containerd[1579]: time="2025-05-27T03:24:07.690306019Z" level=info msg="CreateContainer within sandbox \"93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:24:07.805374 systemd-networkd[1484]: vxlan.calico: Gained IPv6LL May 27 03:24:07.922628 containerd[1579]: time="2025-05-27T03:24:07.922462402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7465d6c55f-6nbj6,Uid:1a3d3daf-895d-4366-a330-59c562670d8f,Namespace:calico-system,Attempt:0,}" May 27 03:24:08.061453 systemd-networkd[1484]: calif1702fc535b: Gained IPv6LL May 27 03:24:08.129964 containerd[1579]: time="2025-05-27T03:24:08.129911132Z" level=info msg="connecting to shim 7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" address="unix:///run/containerd/s/81f9b61901b8bcbd3114b0a6a2aa3ca0e5f9be5b2906b764fb096fdff0bd6a67" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:08.169475 systemd[1]: Started cri-containerd-7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e.scope - libcontainer container 7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e. 
May 27 03:24:08.185330 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:24:08.279778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2357061209.mount: Deactivated successfully. May 27 03:24:08.282223 containerd[1579]: time="2025-05-27T03:24:08.280293734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8568ddb668-jkxdz,Uid:ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e\"" May 27 03:24:08.282223 containerd[1579]: time="2025-05-27T03:24:08.280531039Z" level=info msg="Container 408df4005024d8c931fe8c08263457554598bf6abfbf96cae67c079802f0125d: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:08.304820 containerd[1579]: time="2025-05-27T03:24:08.304742418Z" level=info msg="CreateContainer within sandbox \"93dad366b2943633a83ae5fa7441960395ce5dfef782f070f6914f53d9617794\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"408df4005024d8c931fe8c08263457554598bf6abfbf96cae67c079802f0125d\"" May 27 03:24:08.305786 containerd[1579]: time="2025-05-27T03:24:08.305669810Z" level=info msg="StartContainer for \"408df4005024d8c931fe8c08263457554598bf6abfbf96cae67c079802f0125d\"" May 27 03:24:08.307867 containerd[1579]: time="2025-05-27T03:24:08.307824305Z" level=info msg="connecting to shim 408df4005024d8c931fe8c08263457554598bf6abfbf96cae67c079802f0125d" address="unix:///run/containerd/s/95e8e5e14cfb77009576de11381677c88cac33fab9d06a29ced7bf0cc0904b3d" protocol=ttrpc version=3 May 27 03:24:08.317329 systemd-networkd[1484]: cali38a2d139e45: Gained IPv6LL May 27 03:24:08.343535 systemd[1]: Started cri-containerd-408df4005024d8c931fe8c08263457554598bf6abfbf96cae67c079802f0125d.scope - libcontainer container 408df4005024d8c931fe8c08263457554598bf6abfbf96cae67c079802f0125d. 
May 27 03:24:08.439410 systemd-networkd[1484]: calie311ed1f8ab: Link UP May 27 03:24:08.440468 systemd-networkd[1484]: calie311ed1f8ab: Gained carrier May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.313 [INFO][4578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0 calico-kube-controllers-7465d6c55f- calico-system 1a3d3daf-895d-4366-a330-59c562670d8f 826 0 2025-05-27 03:23:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7465d6c55f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7465d6c55f-6nbj6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie311ed1f8ab [] [] }} ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Namespace="calico-system" Pod="calico-kube-controllers-7465d6c55f-6nbj6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.314 [INFO][4578] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Namespace="calico-system" Pod="calico-kube-controllers-7465d6c55f-6nbj6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.366 [INFO][4598] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" HandleID="k8s-pod-network.db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Workload="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0" May 27 03:24:08.543229 containerd[1579]: 
2025-05-27 03:24:08.366 [INFO][4598] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" HandleID="k8s-pod-network.db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Workload="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7465d6c55f-6nbj6", "timestamp":"2025-05-27 03:24:08.366132793 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.366 [INFO][4598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.366 [INFO][4598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.366 [INFO][4598] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.375 [INFO][4598] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" host="localhost" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.383 [INFO][4598] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.390 [INFO][4598] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.395 [INFO][4598] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.399 [INFO][4598] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.400 [INFO][4598] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" host="localhost" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.405 [INFO][4598] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0 May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.416 [INFO][4598] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" host="localhost" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.432 [INFO][4598] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" host="localhost" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.432 [INFO][4598] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" host="localhost" May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.432 [INFO][4598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:08.543229 containerd[1579]: 2025-05-27 03:24:08.432 [INFO][4598] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" HandleID="k8s-pod-network.db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Workload="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0" May 27 03:24:08.543850 containerd[1579]: 2025-05-27 03:24:08.435 [INFO][4578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Namespace="calico-system" Pod="calico-kube-controllers-7465d6c55f-6nbj6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0", GenerateName:"calico-kube-controllers-7465d6c55f-", Namespace:"calico-system", SelfLink:"", UID:"1a3d3daf-895d-4366-a330-59c562670d8f", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7465d6c55f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7465d6c55f-6nbj6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie311ed1f8ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:08.543850 containerd[1579]: 2025-05-27 03:24:08.436 [INFO][4578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Namespace="calico-system" Pod="calico-kube-controllers-7465d6c55f-6nbj6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0" May 27 03:24:08.543850 containerd[1579]: 2025-05-27 03:24:08.436 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie311ed1f8ab ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Namespace="calico-system" Pod="calico-kube-controllers-7465d6c55f-6nbj6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0" May 27 03:24:08.543850 containerd[1579]: 2025-05-27 03:24:08.441 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Namespace="calico-system" Pod="calico-kube-controllers-7465d6c55f-6nbj6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0" May 27 03:24:08.543850 containerd[1579]: 
2025-05-27 03:24:08.441 [INFO][4578] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Namespace="calico-system" Pod="calico-kube-controllers-7465d6c55f-6nbj6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0", GenerateName:"calico-kube-controllers-7465d6c55f-", Namespace:"calico-system", SelfLink:"", UID:"1a3d3daf-895d-4366-a330-59c562670d8f", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7465d6c55f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0", Pod:"calico-kube-controllers-7465d6c55f-6nbj6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie311ed1f8ab", MAC:"82:ef:6d:57:61:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:08.543850 containerd[1579]: 
2025-05-27 03:24:08.539 [INFO][4578] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" Namespace="calico-system" Pod="calico-kube-controllers-7465d6c55f-6nbj6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7465d6c55f--6nbj6-eth0" May 27 03:24:08.572403 systemd-networkd[1484]: cali8975392843e: Gained IPv6LL May 27 03:24:08.604086 containerd[1579]: time="2025-05-27T03:24:08.604030411Z" level=info msg="StartContainer for \"408df4005024d8c931fe8c08263457554598bf6abfbf96cae67c079802f0125d\" returns successfully" May 27 03:24:08.637390 systemd-networkd[1484]: cali00772f80b6b: Gained IPv6LL May 27 03:24:08.794124 containerd[1579]: time="2025-05-27T03:24:08.794053842Z" level=info msg="connecting to shim db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0" address="unix:///run/containerd/s/e17d0e5c74d5e91fcf81bac425cc5700fe38bbbb162a01637102f23f47c1ce19" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:08.833463 systemd[1]: Started cri-containerd-db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0.scope - libcontainer container db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0. 
May 27 03:24:08.848787 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:24:08.923056 containerd[1579]: time="2025-05-27T03:24:08.922981702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f6m4x,Uid:06887eb6-6182-4a42-b24d-205b481aae1e,Namespace:kube-system,Attempt:0,}" May 27 03:24:09.003624 containerd[1579]: time="2025-05-27T03:24:09.003576192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7465d6c55f-6nbj6,Uid:1a3d3daf-895d-4366-a330-59c562670d8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0\"" May 27 03:24:09.020384 systemd-networkd[1484]: cali06b3ba643ae: Gained IPv6LL May 27 03:24:09.114853 kubelet[2666]: I0527 03:24:09.114709 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rmmk9" podStartSLOduration=41.114691453 podStartE2EDuration="41.114691453s" podCreationTimestamp="2025-05-27 03:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:24:09.114403863 +0000 UTC m=+48.286952739" watchObservedRunningTime="2025-05-27 03:24:09.114691453 +0000 UTC m=+48.287240329" May 27 03:24:09.136457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726211522.mount: Deactivated successfully. 
May 27 03:24:09.214544 systemd-networkd[1484]: calic0286daf9f1: Link UP May 27 03:24:09.215419 systemd-networkd[1484]: calic0286daf9f1: Gained carrier May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.069 [INFO][4686] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0 coredns-674b8bbfcf- kube-system 06887eb6-6182-4a42-b24d-205b481aae1e 823 0 2025-05-27 03:23:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-f6m4x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic0286daf9f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6m4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6m4x-" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.070 [INFO][4686] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6m4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.143 [INFO][4701] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" HandleID="k8s-pod-network.b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Workload="localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.144 [INFO][4701] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" 
HandleID="k8s-pod-network.b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Workload="localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003780d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-f6m4x", "timestamp":"2025-05-27 03:24:09.143893848 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.144 [INFO][4701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.144 [INFO][4701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.144 [INFO][4701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.154 [INFO][4701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" host="localhost" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.162 [INFO][4701] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.169 [INFO][4701] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.171 [INFO][4701] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.187 [INFO][4701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.187 
[INFO][4701] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" host="localhost" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.189 [INFO][4701] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76 May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.197 [INFO][4701] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" host="localhost" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.207 [INFO][4701] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" host="localhost" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.207 [INFO][4701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" host="localhost" May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.207 [INFO][4701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:09.288626 containerd[1579]: 2025-05-27 03:24:09.207 [INFO][4701] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" HandleID="k8s-pod-network.b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Workload="localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0" May 27 03:24:09.289301 containerd[1579]: 2025-05-27 03:24:09.211 [INFO][4686] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6m4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"06887eb6-6182-4a42-b24d-205b481aae1e", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-f6m4x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0286daf9f1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:09.289301 containerd[1579]: 2025-05-27 03:24:09.212 [INFO][4686] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6m4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0" May 27 03:24:09.289301 containerd[1579]: 2025-05-27 03:24:09.212 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0286daf9f1 ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6m4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0" May 27 03:24:09.289301 containerd[1579]: 2025-05-27 03:24:09.214 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6m4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0" May 27 03:24:09.289301 containerd[1579]: 2025-05-27 03:24:09.215 [INFO][4686] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6m4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"06887eb6-6182-4a42-b24d-205b481aae1e", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76", Pod:"coredns-674b8bbfcf-f6m4x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0286daf9f1", MAC:"52:2b:1a:c4:bc:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:09.289301 containerd[1579]: 2025-05-27 03:24:09.284 [INFO][4686] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" Namespace="kube-system" Pod="coredns-674b8bbfcf-f6m4x" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--f6m4x-eth0" May 27 03:24:09.345323 containerd[1579]: time="2025-05-27T03:24:09.345253215Z" level=info msg="connecting to shim b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76" address="unix:///run/containerd/s/4ed66da3c7e6f7aa5b0f6e669d0384f1bfe8d8d795f0b7bda40eeadb78042d37" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:09.378431 systemd[1]: Started cri-containerd-b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76.scope - libcontainer container b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76. May 27 03:24:09.394359 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:24:09.436188 containerd[1579]: time="2025-05-27T03:24:09.436135430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f6m4x,Uid:06887eb6-6182-4a42-b24d-205b481aae1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76\"" May 27 03:24:09.444685 containerd[1579]: time="2025-05-27T03:24:09.444645427Z" level=info msg="CreateContainer within sandbox \"b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:24:09.481909 containerd[1579]: time="2025-05-27T03:24:09.481838875Z" level=info msg="Container 331a2be80d52475d29d42c9dcd4935f2fc254d06a3f3f7a1e029d4828aef5e66: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:09.608136 containerd[1579]: time="2025-05-27T03:24:09.608079292Z" level=info msg="CreateContainer within sandbox \"b2a7e525f4e65b861e0982c55f1202f9e97418053fc0c1da822c522b92f12e76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"331a2be80d52475d29d42c9dcd4935f2fc254d06a3f3f7a1e029d4828aef5e66\"" May 27 03:24:09.609268 containerd[1579]: time="2025-05-27T03:24:09.608702532Z" level=info msg="StartContainer for \"331a2be80d52475d29d42c9dcd4935f2fc254d06a3f3f7a1e029d4828aef5e66\"" May 27 03:24:09.609908 containerd[1579]: time="2025-05-27T03:24:09.609879701Z" level=info msg="connecting to shim 331a2be80d52475d29d42c9dcd4935f2fc254d06a3f3f7a1e029d4828aef5e66" address="unix:///run/containerd/s/4ed66da3c7e6f7aa5b0f6e669d0384f1bfe8d8d795f0b7bda40eeadb78042d37" protocol=ttrpc version=3 May 27 03:24:09.633478 systemd[1]: Started cri-containerd-331a2be80d52475d29d42c9dcd4935f2fc254d06a3f3f7a1e029d4828aef5e66.scope - libcontainer container 331a2be80d52475d29d42c9dcd4935f2fc254d06a3f3f7a1e029d4828aef5e66. May 27 03:24:09.672904 containerd[1579]: time="2025-05-27T03:24:09.672840857Z" level=info msg="StartContainer for \"331a2be80d52475d29d42c9dcd4935f2fc254d06a3f3f7a1e029d4828aef5e66\" returns successfully" May 27 03:24:09.783465 containerd[1579]: time="2025-05-27T03:24:09.783387460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:09.784924 containerd[1579]: time="2025-05-27T03:24:09.784860726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:24:09.786571 containerd[1579]: time="2025-05-27T03:24:09.786500494Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:09.789907 containerd[1579]: time="2025-05-27T03:24:09.789868186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:09.790631 containerd[1579]: 
time="2025-05-27T03:24:09.790585994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.431142389s" May 27 03:24:09.790631 containerd[1579]: time="2025-05-27T03:24:09.790626841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:24:09.791514 containerd[1579]: time="2025-05-27T03:24:09.791484722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:24:09.797607 containerd[1579]: time="2025-05-27T03:24:09.797566590Z" level=info msg="CreateContainer within sandbox \"5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:24:09.822253 containerd[1579]: time="2025-05-27T03:24:09.822181562Z" level=info msg="Container 6266c9e109ee4bb86dffbd3a073bbd86759b0ad2a79ddc67adcc7524157fcd5a: CDI devices from CRI Config.CDIDevices: []" May 27 03:24:09.843886 containerd[1579]: time="2025-05-27T03:24:09.843824465Z" level=info msg="CreateContainer within sandbox \"5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6266c9e109ee4bb86dffbd3a073bbd86759b0ad2a79ddc67adcc7524157fcd5a\"" May 27 03:24:09.844563 containerd[1579]: time="2025-05-27T03:24:09.844519500Z" level=info msg="StartContainer for \"6266c9e109ee4bb86dffbd3a073bbd86759b0ad2a79ddc67adcc7524157fcd5a\"" May 27 03:24:09.845942 containerd[1579]: time="2025-05-27T03:24:09.845908257Z" level=info msg="connecting to shim 6266c9e109ee4bb86dffbd3a073bbd86759b0ad2a79ddc67adcc7524157fcd5a" 
address="unix:///run/containerd/s/89d30ebd2651e5c046269c89ea348b1b95f947565abe47dba0824660faf25941" protocol=ttrpc version=3 May 27 03:24:09.880500 systemd[1]: Started cri-containerd-6266c9e109ee4bb86dffbd3a073bbd86759b0ad2a79ddc67adcc7524157fcd5a.scope - libcontainer container 6266c9e109ee4bb86dffbd3a073bbd86759b0ad2a79ddc67adcc7524157fcd5a. May 27 03:24:09.922373 containerd[1579]: time="2025-05-27T03:24:09.922234003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55d66cb-lmvsk,Uid:f95accb9-2129-4c05-aa43-61cf3ea066cd,Namespace:calico-apiserver,Attempt:0,}" May 27 03:24:09.937071 containerd[1579]: time="2025-05-27T03:24:09.937012729Z" level=info msg="StartContainer for \"6266c9e109ee4bb86dffbd3a073bbd86759b0ad2a79ddc67adcc7524157fcd5a\" returns successfully" May 27 03:24:09.978956 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:47742.service - OpenSSH per-connection server daemon (10.0.0.1:47742). May 27 03:24:10.043137 sshd[4860]: Accepted publickey for core from 10.0.0.1 port 47742 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:10.045619 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:10.051817 systemd-logind[1554]: New session 9 of user core. May 27 03:24:10.057382 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 27 03:24:10.057809 systemd-networkd[1484]: cali229571fe06a: Link UP May 27 03:24:10.061090 systemd-networkd[1484]: cali229571fe06a: Gained carrier May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:09.970 [INFO][4837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0 calico-apiserver-5c55d66cb- calico-apiserver f95accb9-2129-4c05-aa43-61cf3ea066cd 831 0 2025-05-27 03:23:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c55d66cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c55d66cb-lmvsk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali229571fe06a [] [] }} ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Namespace="calico-apiserver" Pod="calico-apiserver-5c55d66cb-lmvsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:09.970 [INFO][4837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Namespace="calico-apiserver" Pod="calico-apiserver-5c55d66cb-lmvsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.001 [INFO][4855] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" HandleID="k8s-pod-network.8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Workload="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.002 [INFO][4855] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" HandleID="k8s-pod-network.8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Workload="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c55d66cb-lmvsk", "timestamp":"2025-05-27 03:24:10.001883479 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.002 [INFO][4855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.002 [INFO][4855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.002 [INFO][4855] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.011 [INFO][4855] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" host="localhost" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.021 [INFO][4855] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.030 [INFO][4855] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.032 [INFO][4855] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.034 [INFO][4855] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.88.128/26 host="localhost" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.034 [INFO][4855] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" host="localhost" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.036 [INFO][4855] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15 May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.040 [INFO][4855] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" host="localhost" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.050 [INFO][4855] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" host="localhost" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.050 [INFO][4855] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" host="localhost" May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.050 [INFO][4855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:24:10.121869 containerd[1579]: 2025-05-27 03:24:10.050 [INFO][4855] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" HandleID="k8s-pod-network.8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Workload="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0" May 27 03:24:10.243091 containerd[1579]: 2025-05-27 03:24:10.054 [INFO][4837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Namespace="calico-apiserver" Pod="calico-apiserver-5c55d66cb-lmvsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0", GenerateName:"calico-apiserver-5c55d66cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"f95accb9-2129-4c05-aa43-61cf3ea066cd", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c55d66cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c55d66cb-lmvsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali229571fe06a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:10.243091 containerd[1579]: 2025-05-27 03:24:10.054 [INFO][4837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Namespace="calico-apiserver" Pod="calico-apiserver-5c55d66cb-lmvsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0" May 27 03:24:10.243091 containerd[1579]: 2025-05-27 03:24:10.054 [INFO][4837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali229571fe06a ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Namespace="calico-apiserver" Pod="calico-apiserver-5c55d66cb-lmvsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0" May 27 03:24:10.243091 containerd[1579]: 2025-05-27 03:24:10.063 [INFO][4837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Namespace="calico-apiserver" Pod="calico-apiserver-5c55d66cb-lmvsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0" May 27 03:24:10.243091 containerd[1579]: 2025-05-27 03:24:10.064 [INFO][4837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Namespace="calico-apiserver" Pod="calico-apiserver-5c55d66cb-lmvsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0", GenerateName:"calico-apiserver-5c55d66cb-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"f95accb9-2129-4c05-aa43-61cf3ea066cd", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c55d66cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15", Pod:"calico-apiserver-5c55d66cb-lmvsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali229571fe06a", MAC:"1a:67:dc:d3:3e:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:24:10.243091 containerd[1579]: 2025-05-27 03:24:10.118 [INFO][4837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" Namespace="calico-apiserver" Pod="calico-apiserver-5c55d66cb-lmvsk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c55d66cb--lmvsk-eth0" May 27 03:24:10.285472 kubelet[2666]: I0527 03:24:10.285413 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f6m4x" podStartSLOduration=42.285399903 podStartE2EDuration="42.285399903s" podCreationTimestamp="2025-05-27 03:23:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:24:10.214822759 +0000 UTC m=+49.387371665" watchObservedRunningTime="2025-05-27 03:24:10.285399903 +0000 UTC m=+49.457948769" May 27 03:24:10.361745 sshd[4866]: Connection closed by 10.0.0.1 port 47742 May 27 03:24:10.361908 sshd-session[4860]: pam_unix(sshd:session): session closed for user core May 27 03:24:10.365598 containerd[1579]: time="2025-05-27T03:24:10.365503671Z" level=info msg="connecting to shim 8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15" address="unix:///run/containerd/s/66afb73c4437ef35248c9142f2c73556d6ae096f6d45bc65134c5016fed7f4e9" namespace=k8s.io protocol=ttrpc version=3 May 27 03:24:10.369919 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:47742.service: Deactivated successfully. May 27 03:24:10.380180 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:24:10.383694 systemd-logind[1554]: Session 9 logged out. Waiting for processes to exit. May 27 03:24:10.385799 systemd-logind[1554]: Removed session 9. May 27 03:24:10.401394 systemd[1]: Started cri-containerd-8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15.scope - libcontainer container 8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15. 
May 27 03:24:10.414627 systemd-resolved[1419]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:24:10.428370 systemd-networkd[1484]: calie311ed1f8ab: Gained IPv6LL May 27 03:24:10.519556 containerd[1579]: time="2025-05-27T03:24:10.519348654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c55d66cb-lmvsk,Uid:f95accb9-2129-4c05-aa43-61cf3ea066cd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15\"" May 27 03:24:10.556533 systemd-networkd[1484]: calic0286daf9f1: Gained IPv6LL May 27 03:24:11.900470 systemd-networkd[1484]: cali229571fe06a: Gained IPv6LL May 27 03:24:12.846797 containerd[1579]: time="2025-05-27T03:24:12.846684892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:12.851101 containerd[1579]: time="2025-05-27T03:24:12.851058172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 03:24:12.853479 containerd[1579]: time="2025-05-27T03:24:12.853421027Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:12.857620 containerd[1579]: time="2025-05-27T03:24:12.857569093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:24:12.858249 containerd[1579]: time="2025-05-27T03:24:12.858196831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.066676663s"
May 27 03:24:12.858309 containerd[1579]: time="2025-05-27T03:24:12.858255251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 27 03:24:12.859282 containerd[1579]: time="2025-05-27T03:24:12.859247073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:24:12.869258 containerd[1579]: time="2025-05-27T03:24:12.869192371Z" level=info msg="CreateContainer within sandbox \"b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 27 03:24:12.884361 containerd[1579]: time="2025-05-27T03:24:12.884299046Z" level=info msg="Container de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453: CDI devices from CRI Config.CDIDevices: []"
May 27 03:24:12.896765 containerd[1579]: time="2025-05-27T03:24:12.896713530Z" level=info msg="CreateContainer within sandbox \"b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\""
May 27 03:24:12.897451 containerd[1579]: time="2025-05-27T03:24:12.897408023Z" level=info msg="StartContainer for \"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\""
May 27 03:24:12.898854 containerd[1579]: time="2025-05-27T03:24:12.898818680Z" level=info msg="connecting to shim de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453" address="unix:///run/containerd/s/dec01fc805bb4cd3fa0e1702862c2b67e7bfcd1f07ac28e44dc8af7d282d4127" protocol=ttrpc version=3
May 27 03:24:12.925393 systemd[1]: Started cri-containerd-de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453.scope - libcontainer container de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453.
May 27 03:24:13.086354 containerd[1579]: time="2025-05-27T03:24:13.086290357Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:24:13.157684 containerd[1579]: time="2025-05-27T03:24:13.157541953Z" level=info msg="StartContainer for \"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\" returns successfully"
May 27 03:24:13.193025 containerd[1579]: time="2025-05-27T03:24:13.192951556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:24:13.193025 containerd[1579]: time="2025-05-27T03:24:13.193009524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:24:13.196434 kubelet[2666]: E0527 03:24:13.196329 2666 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:24:13.196434 kubelet[2666]: E0527 03:24:13.196393 2666 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:24:13.197085 containerd[1579]: time="2025-05-27T03:24:13.196848259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:24:13.201960 kubelet[2666]: E0527 03:24:13.201855 2666 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lv7t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-kxppn_calico-system(cc9060dd-bed3-42dd-955b-36f0d660ea40): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:24:13.203101 kubelet[2666]: E0527 03:24:13.203041 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kxppn" podUID="cc9060dd-bed3-42dd-955b-36f0d660ea40"
May 27 03:24:13.426373 containerd[1579]: time="2025-05-27T03:24:13.426212796Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:24:13.429722 containerd[1579]: time="2025-05-27T03:24:13.429575858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:24:13.429857 containerd[1579]: time="2025-05-27T03:24:13.429674193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:24:13.430507 kubelet[2666]: E0527 03:24:13.430420 2666 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:24:13.430507 kubelet[2666]: E0527 03:24:13.430486 2666 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:24:13.431151 kubelet[2666]: E0527 03:24:13.430748 2666 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57667c847c-4twcl_calico-system(2eb9167b-8371-4b66-b231-77c5171049b6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:24:13.431628 containerd[1579]: time="2025-05-27T03:24:13.431438244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\""
May 27 03:24:13.432802 kubelet[2666]: E0527 03:24:13.432746 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57667c847c-4twcl" podUID="2eb9167b-8371-4b66-b231-77c5171049b6"
May 27 03:24:13.896366 containerd[1579]: time="2025-05-27T03:24:13.896273047Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:13.943231 containerd[1579]: time="2025-05-27T03:24:13.942913450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77"
May 27 03:24:13.945697 containerd[1579]: time="2025-05-27T03:24:13.945646048Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 514.105243ms"
May 27 03:24:13.945697 containerd[1579]: time="2025-05-27T03:24:13.945683929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 27 03:24:13.946928 containerd[1579]: time="2025-05-27T03:24:13.946703533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\""
May 27 03:24:14.139792 containerd[1579]: time="2025-05-27T03:24:14.139732176Z" level=info msg="CreateContainer within sandbox \"7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 27 03:24:14.162735 kubelet[2666]: E0527 03:24:14.162581 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57667c847c-4twcl" podUID="2eb9167b-8371-4b66-b231-77c5171049b6"
May 27 03:24:14.162946 kubelet[2666]: E0527 03:24:14.162773 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kxppn" podUID="cc9060dd-bed3-42dd-955b-36f0d660ea40"
May 27 03:24:14.579163 containerd[1579]: time="2025-05-27T03:24:14.579083083Z" level=info msg="Container 5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14: CDI devices from CRI Config.CDIDevices: []"
May 27 03:24:14.875996 kubelet[2666]: I0527 03:24:14.875687 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8568ddb668-m4k58" podStartSLOduration=32.160949696 podStartE2EDuration="37.875671526s" podCreationTimestamp="2025-05-27 03:23:37 +0000 UTC" firstStartedPulling="2025-05-27 03:24:07.144333793 +0000 UTC m=+46.316882669" lastFinishedPulling="2025-05-27 03:24:12.859055623 +0000 UTC m=+52.031604499" observedRunningTime="2025-05-27 03:24:14.573800248 +0000 UTC m=+53.746349144" watchObservedRunningTime="2025-05-27 03:24:14.875671526 +0000 UTC m=+54.048220402"
May 27 03:24:15.105085 containerd[1579]: time="2025-05-27T03:24:15.104972462Z" level=info msg="CreateContainer within sandbox \"7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\""
May 27 03:24:15.105824 containerd[1579]: time="2025-05-27T03:24:15.105760130Z" level=info msg="StartContainer for \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\""
May 27 03:24:15.107183 containerd[1579]: time="2025-05-27T03:24:15.107153094Z" level=info msg="connecting to shim 5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14" address="unix:///run/containerd/s/81f9b61901b8bcbd3114b0a6a2aa3ca0e5f9be5b2906b764fb096fdff0bd6a67" protocol=ttrpc version=3
May 27 03:24:15.137518 systemd[1]: Started cri-containerd-5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14.scope - libcontainer container 5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14.
May 27 03:24:15.385767 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:56012.service - OpenSSH per-connection server daemon (10.0.0.1:56012).
May 27 03:24:15.430036 containerd[1579]: time="2025-05-27T03:24:15.429966698Z" level=info msg="StartContainer for \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" returns successfully"
May 27 03:24:15.446262 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 56012 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:24:15.600118 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:15.607808 systemd-logind[1554]: New session 10 of user core.
May 27 03:24:15.614494 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 03:24:15.994872 sshd[5032]: Connection closed by 10.0.0.1 port 56012
May 27 03:24:15.995330 sshd-session[5030]: pam_unix(sshd:session): session closed for user core
May 27 03:24:16.000136 systemd-logind[1554]: Session 10 logged out. Waiting for processes to exit.
May 27 03:24:16.000466 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:56012.service: Deactivated successfully.
May 27 03:24:16.003230 systemd[1]: session-10.scope: Deactivated successfully.
May 27 03:24:16.006263 systemd-logind[1554]: Removed session 10.
May 27 03:24:17.340707 kubelet[2666]: I0527 03:24:17.340634 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 03:24:18.248580 containerd[1579]: time="2025-05-27T03:24:18.248493870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:18.249677 containerd[1579]: time="2025-05-27T03:24:18.249633749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512"
May 27 03:24:18.252402 containerd[1579]: time="2025-05-27T03:24:18.252354575Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:18.256909 containerd[1579]: time="2025-05-27T03:24:18.256820876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:18.257846 containerd[1579]: time="2025-05-27T03:24:18.257774736Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 4.311028011s"
May 27 03:24:18.257846 containerd[1579]: time="2025-05-27T03:24:18.257833085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\""
May 27 03:24:18.262717 containerd[1579]: time="2025-05-27T03:24:18.262240866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\""
May 27 03:24:18.287219 containerd[1579]: time="2025-05-27T03:24:18.287148436Z" level=info msg="CreateContainer within sandbox \"db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 27 03:24:18.297892 containerd[1579]: time="2025-05-27T03:24:18.297834218Z" level=info msg="Container 358497ef0bf33b34bd3cd6a1a064803e99ee6e27fe94b703f7afaefa5900e7b0: CDI devices from CRI Config.CDIDevices: []"
May 27 03:24:18.309036 containerd[1579]: time="2025-05-27T03:24:18.308981676Z" level=info msg="CreateContainer within sandbox \"db10a639df276c3f2f599208e03cffcea3364f988abd24e0506f78f6a3a544b0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"358497ef0bf33b34bd3cd6a1a064803e99ee6e27fe94b703f7afaefa5900e7b0\""
May 27 03:24:18.309510 containerd[1579]: time="2025-05-27T03:24:18.309479551Z" level=info msg="StartContainer for \"358497ef0bf33b34bd3cd6a1a064803e99ee6e27fe94b703f7afaefa5900e7b0\""
May 27 03:24:18.311194 containerd[1579]: time="2025-05-27T03:24:18.311150406Z" level=info msg="connecting to shim 358497ef0bf33b34bd3cd6a1a064803e99ee6e27fe94b703f7afaefa5900e7b0" address="unix:///run/containerd/s/e17d0e5c74d5e91fcf81bac425cc5700fe38bbbb162a01637102f23f47c1ce19" protocol=ttrpc version=3
May 27 03:24:18.365405 systemd[1]: Started cri-containerd-358497ef0bf33b34bd3cd6a1a064803e99ee6e27fe94b703f7afaefa5900e7b0.scope - libcontainer container 358497ef0bf33b34bd3cd6a1a064803e99ee6e27fe94b703f7afaefa5900e7b0.
May 27 03:24:18.746387 containerd[1579]: time="2025-05-27T03:24:18.746332070Z" level=info msg="StartContainer for \"358497ef0bf33b34bd3cd6a1a064803e99ee6e27fe94b703f7afaefa5900e7b0\" returns successfully"
May 27 03:24:19.235252 containerd[1579]: time="2025-05-27T03:24:19.235181086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"358497ef0bf33b34bd3cd6a1a064803e99ee6e27fe94b703f7afaefa5900e7b0\" id:\"fef6bfa13228da792a6d14a2b510efb3a6a8bdcc9d7c2e2aaa64bbae562e06b9\" pid:5120 exited_at:{seconds:1748316259 nanos:234956214}"
May 27 03:24:19.283236 kubelet[2666]: I0527 03:24:19.281921 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7465d6c55f-6nbj6" podStartSLOduration=29.025065012 podStartE2EDuration="38.281886586s" podCreationTimestamp="2025-05-27 03:23:41 +0000 UTC" firstStartedPulling="2025-05-27 03:24:09.005174093 +0000 UTC m=+48.177722969" lastFinishedPulling="2025-05-27 03:24:18.261995647 +0000 UTC m=+57.434544543" observedRunningTime="2025-05-27 03:24:19.281719392 +0000 UTC m=+58.454268268" watchObservedRunningTime="2025-05-27 03:24:19.281886586 +0000 UTC m=+58.454435462"
May 27 03:24:19.284225 kubelet[2666]: I0527 03:24:19.283774 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8568ddb668-jkxdz" podStartSLOduration=36.621733181 podStartE2EDuration="42.283751375s" podCreationTimestamp="2025-05-27 03:23:37 +0000 UTC" firstStartedPulling="2025-05-27 03:24:08.284607553 +0000 UTC m=+47.457156429" lastFinishedPulling="2025-05-27 03:24:13.946625737 +0000 UTC m=+53.119174623" observedRunningTime="2025-05-27 03:24:16.340343137 +0000 UTC m=+55.512892013" watchObservedRunningTime="2025-05-27 03:24:19.283751375 +0000 UTC m=+58.456300251"
May 27 03:24:20.096928 containerd[1579]: time="2025-05-27T03:24:20.096854308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:20.097783 containerd[1579]: time="2025-05-27T03:24:20.097758044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639"
May 27 03:24:20.099249 containerd[1579]: time="2025-05-27T03:24:20.099195301Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:20.103922 containerd[1579]: time="2025-05-27T03:24:20.103877627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:20.104488 containerd[1579]: time="2025-05-27T03:24:20.104457905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 1.842176753s"
May 27 03:24:20.104524 containerd[1579]: time="2025-05-27T03:24:20.104486709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\""
May 27 03:24:20.105382 containerd[1579]: time="2025-05-27T03:24:20.105321225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\""
May 27 03:24:20.118944 containerd[1579]: time="2025-05-27T03:24:20.118897670Z" level=info msg="CreateContainer within sandbox \"5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 27 03:24:20.130308 containerd[1579]: time="2025-05-27T03:24:20.130269567Z" level=info msg="Container 1649fdee0c2c53911833630185c21016ba099c354fd6bae97da71469a18b691b: CDI devices from CRI Config.CDIDevices: []"
May 27 03:24:20.142198 containerd[1579]: time="2025-05-27T03:24:20.142142767Z" level=info msg="CreateContainer within sandbox \"5eea8c7042e7d6c65aa4571b3d5f996558dd195ab2ead83fee28d854d74fabf1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1649fdee0c2c53911833630185c21016ba099c354fd6bae97da71469a18b691b\""
May 27 03:24:20.144383 containerd[1579]: time="2025-05-27T03:24:20.144345750Z" level=info msg="StartContainer for \"1649fdee0c2c53911833630185c21016ba099c354fd6bae97da71469a18b691b\""
May 27 03:24:20.145741 containerd[1579]: time="2025-05-27T03:24:20.145711663Z" level=info msg="connecting to shim 1649fdee0c2c53911833630185c21016ba099c354fd6bae97da71469a18b691b" address="unix:///run/containerd/s/89d30ebd2651e5c046269c89ea348b1b95f947565abe47dba0824660faf25941" protocol=ttrpc version=3
May 27 03:24:20.175332 systemd[1]: Started cri-containerd-1649fdee0c2c53911833630185c21016ba099c354fd6bae97da71469a18b691b.scope - libcontainer container 1649fdee0c2c53911833630185c21016ba099c354fd6bae97da71469a18b691b.
May 27 03:24:20.224809 containerd[1579]: time="2025-05-27T03:24:20.224746141Z" level=info msg="StartContainer for \"1649fdee0c2c53911833630185c21016ba099c354fd6bae97da71469a18b691b\" returns successfully"
May 27 03:24:20.816989 containerd[1579]: time="2025-05-27T03:24:20.816898079Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 03:24:20.819481 containerd[1579]: time="2025-05-27T03:24:20.819417787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77"
May 27 03:24:20.820980 containerd[1579]: time="2025-05-27T03:24:20.820945854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 715.595013ms"
May 27 03:24:20.820980 containerd[1579]: time="2025-05-27T03:24:20.820973055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 27 03:24:20.828619 containerd[1579]: time="2025-05-27T03:24:20.828574287Z" level=info msg="CreateContainer within sandbox \"8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 27 03:24:20.901420 containerd[1579]: time="2025-05-27T03:24:20.901355483Z" level=info msg="Container aebdd7929dcd6801f359dd18e90926cf48edb5f480527a11cca2f3ee35974f43: CDI devices from CRI Config.CDIDevices: []"
May 27 03:24:20.923908 containerd[1579]: time="2025-05-27T03:24:20.923858918Z" level=info msg="CreateContainer within sandbox \"8c02c8067ed992ea0f639ef60eab9781ba9c0a9f7d28a03dd2f3e9248412da15\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aebdd7929dcd6801f359dd18e90926cf48edb5f480527a11cca2f3ee35974f43\""
May 27 03:24:20.924332 containerd[1579]: time="2025-05-27T03:24:20.924252015Z" level=info msg="StartContainer for \"aebdd7929dcd6801f359dd18e90926cf48edb5f480527a11cca2f3ee35974f43\""
May 27 03:24:20.925706 containerd[1579]: time="2025-05-27T03:24:20.925667280Z" level=info msg="connecting to shim aebdd7929dcd6801f359dd18e90926cf48edb5f480527a11cca2f3ee35974f43" address="unix:///run/containerd/s/66afb73c4437ef35248c9142f2c73556d6ae096f6d45bc65134c5016fed7f4e9" protocol=ttrpc version=3
May 27 03:24:20.951409 systemd[1]: Started cri-containerd-aebdd7929dcd6801f359dd18e90926cf48edb5f480527a11cca2f3ee35974f43.scope - libcontainer container aebdd7929dcd6801f359dd18e90926cf48edb5f480527a11cca2f3ee35974f43.
May 27 03:24:20.991844 kubelet[2666]: I0527 03:24:20.991617 2666 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 27 03:24:20.998309 kubelet[2666]: I0527 03:24:20.998269 2666 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 27 03:24:21.009460 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:56014.service - OpenSSH per-connection server daemon (10.0.0.1:56014).
May 27 03:24:21.112503 containerd[1579]: time="2025-05-27T03:24:21.112461389Z" level=info msg="StartContainer for \"aebdd7929dcd6801f359dd18e90926cf48edb5f480527a11cca2f3ee35974f43\" returns successfully"
May 27 03:24:21.135440 sshd[5205]: Accepted publickey for core from 10.0.0.1 port 56014 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:24:21.137711 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:21.144445 systemd-logind[1554]: New session 11 of user core.
May 27 03:24:21.156427 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 03:24:21.246999 kubelet[2666]: I0527 03:24:21.246718 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c55d66cb-lmvsk" podStartSLOduration=32.945721947 podStartE2EDuration="43.246694723s" podCreationTimestamp="2025-05-27 03:23:38 +0000 UTC" firstStartedPulling="2025-05-27 03:24:10.520659645 +0000 UTC m=+49.693208522" lastFinishedPulling="2025-05-27 03:24:20.821632422 +0000 UTC m=+59.994181298" observedRunningTime="2025-05-27 03:24:21.230724057 +0000 UTC m=+60.403272953" watchObservedRunningTime="2025-05-27 03:24:21.246694723 +0000 UTC m=+60.419243599"
May 27 03:24:21.329758 sshd[5217]: Connection closed by 10.0.0.1 port 56014
May 27 03:24:21.330324 sshd-session[5205]: pam_unix(sshd:session): session closed for user core
May 27 03:24:21.340365 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:56014.service: Deactivated successfully.
May 27 03:24:21.342676 systemd[1]: session-11.scope: Deactivated successfully.
May 27 03:24:21.343683 systemd-logind[1554]: Session 11 logged out. Waiting for processes to exit.
May 27 03:24:21.347461 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:56028.service - OpenSSH per-connection server daemon (10.0.0.1:56028).
May 27 03:24:21.348850 systemd-logind[1554]: Removed session 11.
May 27 03:24:21.395347 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 56028 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:24:21.397302 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:21.403398 systemd-logind[1554]: New session 12 of user core.
May 27 03:24:21.413380 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 03:24:21.769719 sshd[5235]: Connection closed by 10.0.0.1 port 56028
May 27 03:24:21.770136 sshd-session[5233]: pam_unix(sshd:session): session closed for user core
May 27 03:24:21.779876 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:56028.service: Deactivated successfully.
May 27 03:24:21.781771 systemd[1]: session-12.scope: Deactivated successfully.
May 27 03:24:21.782512 systemd-logind[1554]: Session 12 logged out. Waiting for processes to exit.
May 27 03:24:21.785488 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:56040.service - OpenSSH per-connection server daemon (10.0.0.1:56040).
May 27 03:24:21.786288 systemd-logind[1554]: Removed session 12.
May 27 03:24:21.834044 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 56040 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:24:21.835516 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:24:21.840294 systemd-logind[1554]: New session 13 of user core.
May 27 03:24:21.847348 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 03:24:21.981415 sshd[5249]: Connection closed by 10.0.0.1 port 56040
May 27 03:24:21.981746 sshd-session[5247]: pam_unix(sshd:session): session closed for user core
May 27 03:24:21.985392 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:56040.service: Deactivated successfully.
May 27 03:24:21.987291 systemd[1]: session-13.scope: Deactivated successfully.
May 27 03:24:21.988086 systemd-logind[1554]: Session 13 logged out. Waiting for processes to exit.
May 27 03:24:21.989439 systemd-logind[1554]: Removed session 13.
May 27 03:24:23.151677 kubelet[2666]: I0527 03:24:23.151615 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-m8zk9" podStartSLOduration=29.109428165 podStartE2EDuration="42.151600179s" podCreationTimestamp="2025-05-27 03:23:41 +0000 UTC" firstStartedPulling="2025-05-27 03:24:07.063000643 +0000 UTC m=+46.235549519" lastFinishedPulling="2025-05-27 03:24:20.105172657 +0000 UTC m=+59.277721533" observedRunningTime="2025-05-27 03:24:21.24828712 +0000 UTC m=+60.420836016" watchObservedRunningTime="2025-05-27 03:24:23.151600179 +0000 UTC m=+62.324149055"
May 27 03:24:23.176802 kubelet[2666]: I0527 03:24:23.176724 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 03:24:23.211100 containerd[1579]: time="2025-05-27T03:24:23.211010894Z" level=info msg="StopContainer for \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" with timeout 30 (s)"
May 27 03:24:23.215929 containerd[1579]: time="2025-05-27T03:24:23.215875511Z" level=info msg="Stop container \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" with signal terminated"
May 27 03:24:23.236040 systemd[1]: cri-containerd-5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14.scope: Deactivated successfully.
May 27 03:24:23.238467 containerd[1579]: time="2025-05-27T03:24:23.238421853Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" id:\"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" pid:5005 exit_status:1 exited_at:{seconds:1748316263 nanos:237862844}" May 27 03:24:23.238605 containerd[1579]: time="2025-05-27T03:24:23.238509838Z" level=info msg="received exit event container_id:\"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" id:\"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" pid:5005 exit_status:1 exited_at:{seconds:1748316263 nanos:237862844}" May 27 03:24:23.262349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14-rootfs.mount: Deactivated successfully. May 27 03:24:23.279818 containerd[1579]: time="2025-05-27T03:24:23.279765963Z" level=info msg="StopContainer for \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" returns successfully" May 27 03:24:23.280455 containerd[1579]: time="2025-05-27T03:24:23.280430319Z" level=info msg="StopPodSandbox for \"7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e\"" May 27 03:24:23.296057 containerd[1579]: time="2025-05-27T03:24:23.296009900Z" level=info msg="Container to stop \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:24:23.303780 systemd[1]: cri-containerd-7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e.scope: Deactivated successfully. 
May 27 03:24:23.305847 containerd[1579]: time="2025-05-27T03:24:23.305808914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e\" id:\"7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e\" pid:4564 exit_status:137 exited_at:{seconds:1748316263 nanos:305101176}" May 27 03:24:23.333478 containerd[1579]: time="2025-05-27T03:24:23.333357531Z" level=info msg="shim disconnected" id=7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e namespace=k8s.io May 27 03:24:23.335740 containerd[1579]: time="2025-05-27T03:24:23.334963784Z" level=warning msg="cleaning up after shim disconnected" id=7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e namespace=k8s.io May 27 03:24:23.335740 containerd[1579]: time="2025-05-27T03:24:23.334984433Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 03:24:23.335803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e-rootfs.mount: Deactivated successfully. May 27 03:24:23.335925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e-shm.mount: Deactivated successfully. 
May 27 03:24:23.359102 containerd[1579]: time="2025-05-27T03:24:23.359048032Z" level=info msg="received exit event sandbox_id:\"7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e\" exit_status:137 exited_at:{seconds:1748316263 nanos:305101176}" May 27 03:24:23.432706 systemd-networkd[1484]: cali8975392843e: Link DOWN May 27 03:24:23.432718 systemd-networkd[1484]: cali8975392843e: Lost carrier May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.430 [INFO][5329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.430 [INFO][5329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" iface="eth0" netns="/var/run/netns/cni-a718fc05-a896-f039-0195-cb185059c4b3" May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.431 [INFO][5329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" iface="eth0" netns="/var/run/netns/cni-a718fc05-a896-f039-0195-cb185059c4b3" May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.437 [INFO][5329] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" after=6.873356ms iface="eth0" netns="/var/run/netns/cni-a718fc05-a896-f039-0195-cb185059c4b3" May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.437 [INFO][5329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.437 [INFO][5329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.462 [INFO][5346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" HandleID="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Workload="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.462 [INFO][5346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.462 [INFO][5346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.504 [INFO][5346] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" HandleID="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Workload="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.504 [INFO][5346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" HandleID="k8s-pod-network.7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" Workload="localhost-k8s-calico--apiserver--8568ddb668--jkxdz-eth0" May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.505 [INFO][5346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:24:23.511476 containerd[1579]: 2025-05-27 03:24:23.508 [INFO][5329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e" May 27 03:24:23.515134 systemd[1]: run-netns-cni\x2da718fc05\x2da896\x2df039\x2d0195\x2dcb185059c4b3.mount: Deactivated successfully. 
May 27 03:24:23.524196 containerd[1579]: time="2025-05-27T03:24:23.524139304Z" level=info msg="TearDown network for sandbox \"7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e\" successfully" May 27 03:24:23.524196 containerd[1579]: time="2025-05-27T03:24:23.524183437Z" level=info msg="StopPodSandbox for \"7c55a857e6e9d69042baacfaa8d7d12bc61e5332d45d6982eb1450df9dd5286e\" returns successfully" May 27 03:24:23.565448 kubelet[2666]: I0527 03:24:23.565389 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f-calico-apiserver-certs\") pod \"ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f\" (UID: \"ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f\") " May 27 03:24:23.565448 kubelet[2666]: I0527 03:24:23.565450 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4svg6\" (UniqueName: \"kubernetes.io/projected/ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f-kube-api-access-4svg6\") pod \"ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f\" (UID: \"ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f\") " May 27 03:24:23.570367 kubelet[2666]: I0527 03:24:23.570310 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f" (UID: "ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:24:23.571587 systemd[1]: var-lib-kubelet-pods-ecba4ce8\x2dcd3e\x2d4b9d\x2d9c7a\x2d97bf8056f64f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4svg6.mount: Deactivated successfully. 
May 27 03:24:23.571727 systemd[1]: var-lib-kubelet-pods-ecba4ce8\x2dcd3e\x2d4b9d\x2d9c7a\x2d97bf8056f64f-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 27 03:24:23.571872 kubelet[2666]: I0527 03:24:23.571807 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f-kube-api-access-4svg6" (OuterVolumeSpecName: "kube-api-access-4svg6") pod "ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f" (UID: "ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f"). InnerVolumeSpecName "kube-api-access-4svg6". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:24:23.666132 kubelet[2666]: I0527 03:24:23.666056 2666 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 27 03:24:23.666132 kubelet[2666]: I0527 03:24:23.666096 2666 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4svg6\" (UniqueName: \"kubernetes.io/projected/ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f-kube-api-access-4svg6\") on node \"localhost\" DevicePath \"\"" May 27 03:24:24.216457 kubelet[2666]: I0527 03:24:24.216422 2666 scope.go:117] "RemoveContainer" containerID="5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14" May 27 03:24:24.219953 containerd[1579]: time="2025-05-27T03:24:24.219880550Z" level=info msg="RemoveContainer for \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\"" May 27 03:24:24.221660 systemd[1]: Removed slice kubepods-besteffort-podecba4ce8_cd3e_4b9d_9c7a_97bf8056f64f.slice - libcontainer container kubepods-besteffort-podecba4ce8_cd3e_4b9d_9c7a_97bf8056f64f.slice. 
May 27 03:24:24.258838 containerd[1579]: time="2025-05-27T03:24:24.258775561Z" level=info msg="RemoveContainer for \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" returns successfully" May 27 03:24:24.265001 kubelet[2666]: I0527 03:24:24.264947 2666 scope.go:117] "RemoveContainer" containerID="5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14" May 27 03:24:24.265342 containerd[1579]: time="2025-05-27T03:24:24.265279715Z" level=error msg="ContainerStatus for \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\": not found" May 27 03:24:24.265474 kubelet[2666]: E0527 03:24:24.265448 2666 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\": not found" containerID="5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14" May 27 03:24:24.265529 kubelet[2666]: I0527 03:24:24.265482 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14"} err="failed to get container status \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cc335aa43ea3881e982d249f7d88f2086d55838948f542c12010286c9dabf14\": not found" May 27 03:24:24.924642 kubelet[2666]: I0527 03:24:24.924594 2666 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f" path="/var/lib/kubelet/pods/ecba4ce8-cd3e-4b9d-9c7a-97bf8056f64f/volumes" May 27 03:24:26.997257 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:35982.service - OpenSSH per-connection server daemon (10.0.0.1:35982). 
May 27 03:24:27.054114 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 35982 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:27.056030 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:27.060565 systemd-logind[1554]: New session 14 of user core. May 27 03:24:27.069367 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:24:27.196921 sshd[5374]: Connection closed by 10.0.0.1 port 35982 May 27 03:24:27.197297 sshd-session[5372]: pam_unix(sshd:session): session closed for user core May 27 03:24:27.202779 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:35982.service: Deactivated successfully. May 27 03:24:27.205028 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:24:27.205936 systemd-logind[1554]: Session 14 logged out. Waiting for processes to exit. May 27 03:24:27.207285 systemd-logind[1554]: Removed session 14. May 27 03:24:27.922995 containerd[1579]: time="2025-05-27T03:24:27.922932049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:24:28.192600 containerd[1579]: time="2025-05-27T03:24:28.192423304Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:28.242134 containerd[1579]: time="2025-05-27T03:24:28.242056684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:28.242305 
containerd[1579]: time="2025-05-27T03:24:28.242097333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:24:28.242433 kubelet[2666]: E0527 03:24:28.242365 2666 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:28.242811 kubelet[2666]: E0527 03:24:28.242432 2666 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:28.242811 kubelet[2666]: E0527 03:24:28.242593 2666 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ec74fc749bc44b2b952a8b8671ba8094,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gpttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57667c847c-4twcl_calico-system(2eb9167b-8371-4b66-b231-77c5171049b6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:28.244596 containerd[1579]: 
time="2025-05-27T03:24:28.244537938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:24:28.517611 containerd[1579]: time="2025-05-27T03:24:28.517544425Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:28.588889 containerd[1579]: time="2025-05-27T03:24:28.588824123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:28.589082 containerd[1579]: time="2025-05-27T03:24:28.588940817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:24:28.590904 kubelet[2666]: E0527 03:24:28.589129 2666 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:28.590904 kubelet[2666]: E0527 03:24:28.589184 2666 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:28.590904 kubelet[2666]: E0527 03:24:28.589362 2666 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57667c847c-4twcl_calico-system(2eb9167b-8371-4b66-b231-77c5171049b6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:28.591244 kubelet[2666]: E0527 03:24:28.591086 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57667c847c-4twcl" podUID="2eb9167b-8371-4b66-b231-77c5171049b6" May 27 03:24:28.923307 containerd[1579]: 
time="2025-05-27T03:24:28.923147045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:24:29.226521 containerd[1579]: time="2025-05-27T03:24:29.226342851Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:29.291803 containerd[1579]: time="2025-05-27T03:24:29.291728089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:29.292216 containerd[1579]: time="2025-05-27T03:24:29.292023898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:24:29.293436 kubelet[2666]: E0527 03:24:29.293364 2666 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:29.293862 kubelet[2666]: E0527 03:24:29.293453 2666 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed 
to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:29.294899 kubelet[2666]: E0527 03:24:29.294160 2666 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lv7t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-kxppn_calico-system(cc9060dd-bed3-42dd-955b-36f0d660ea40): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:29.296045 kubelet[2666]: E0527 03:24:29.296003 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kxppn" podUID="cc9060dd-bed3-42dd-955b-36f0d660ea40" May 27 03:24:32.216061 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:35984.service - OpenSSH per-connection server daemon (10.0.0.1:35984). May 27 03:24:32.257276 sshd[5392]: Accepted publickey for core from 10.0.0.1 port 35984 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:32.259288 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:32.264427 systemd-logind[1554]: New session 15 of user core. May 27 03:24:32.271383 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:24:32.392752 sshd[5394]: Connection closed by 10.0.0.1 port 35984 May 27 03:24:32.393102 sshd-session[5392]: pam_unix(sshd:session): session closed for user core May 27 03:24:32.397167 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:35984.service: Deactivated successfully. May 27 03:24:32.399853 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:24:32.401669 systemd-logind[1554]: Session 15 logged out. Waiting for processes to exit. May 27 03:24:32.403246 systemd-logind[1554]: Removed session 15. May 27 03:24:35.190733 containerd[1579]: time="2025-05-27T03:24:35.190673814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a\" id:\"fdb39734ff83dfe8aae0caea372ca1a86f1ce48e25304164d9bbdeae29c2fd5c\" pid:5420 exited_at:{seconds:1748316275 nanos:190326177}" May 27 03:24:37.408755 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:56566.service - OpenSSH per-connection server daemon (10.0.0.1:56566). 
May 27 03:24:37.466023 sshd[5435]: Accepted publickey for core from 10.0.0.1 port 56566 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:37.468152 sshd-session[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:37.473019 systemd-logind[1554]: New session 16 of user core. May 27 03:24:37.487443 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 03:24:37.633736 sshd[5437]: Connection closed by 10.0.0.1 port 56566 May 27 03:24:37.634169 sshd-session[5435]: pam_unix(sshd:session): session closed for user core May 27 03:24:37.638917 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:56566.service: Deactivated successfully. May 27 03:24:37.641309 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:24:37.642335 systemd-logind[1554]: Session 16 logged out. Waiting for processes to exit. May 27 03:24:37.643599 systemd-logind[1554]: Removed session 16. May 27 03:24:39.924551 kubelet[2666]: E0527 03:24:39.924470 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57667c847c-4twcl" podUID="2eb9167b-8371-4b66-b231-77c5171049b6" May 27 03:24:42.655492 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:56574.service - OpenSSH per-connection server daemon (10.0.0.1:56574). May 27 03:24:42.724549 sshd[5452]: Accepted publickey for core from 10.0.0.1 port 56574 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:42.726866 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:42.732338 systemd-logind[1554]: New session 17 of user core. May 27 03:24:42.738536 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 03:24:42.871376 sshd[5454]: Connection closed by 10.0.0.1 port 56574 May 27 03:24:42.871835 sshd-session[5452]: pam_unix(sshd:session): session closed for user core May 27 03:24:42.878445 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:56574.service: Deactivated successfully. May 27 03:24:42.880497 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:24:42.881648 systemd-logind[1554]: Session 17 logged out. Waiting for processes to exit. May 27 03:24:42.884726 systemd-logind[1554]: Removed session 17. 
May 27 03:24:42.923525 kubelet[2666]: E0527 03:24:42.923320 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kxppn" podUID="cc9060dd-bed3-42dd-955b-36f0d660ea40" May 27 03:24:47.889815 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:51074.service - OpenSSH per-connection server daemon (10.0.0.1:51074). May 27 03:24:47.939370 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 51074 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:47.941136 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:47.946004 systemd-logind[1554]: New session 18 of user core. May 27 03:24:47.953413 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:24:48.071631 sshd[5480]: Connection closed by 10.0.0.1 port 51074 May 27 03:24:48.071967 sshd-session[5478]: pam_unix(sshd:session): session closed for user core May 27 03:24:48.082036 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:51074.service: Deactivated successfully. May 27 03:24:48.084351 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:24:48.089013 systemd-logind[1554]: Session 18 logged out. Waiting for processes to exit. May 27 03:24:48.090286 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:51082.service - OpenSSH per-connection server daemon (10.0.0.1:51082). May 27 03:24:48.096069 systemd-logind[1554]: Removed session 18. 
May 27 03:24:48.148102 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 51082 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:48.149792 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:48.155144 systemd-logind[1554]: New session 19 of user core. May 27 03:24:48.164425 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 03:24:48.472758 sshd[5496]: Connection closed by 10.0.0.1 port 51082 May 27 03:24:48.473451 sshd-session[5494]: pam_unix(sshd:session): session closed for user core May 27 03:24:48.483619 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:51082.service: Deactivated successfully. May 27 03:24:48.485910 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:24:48.487359 systemd-logind[1554]: Session 19 logged out. Waiting for processes to exit. May 27 03:24:48.490355 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:51088.service - OpenSSH per-connection server daemon (10.0.0.1:51088). May 27 03:24:48.491410 systemd-logind[1554]: Removed session 19. May 27 03:24:48.557130 sshd[5508]: Accepted publickey for core from 10.0.0.1 port 51088 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:48.558822 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:48.563621 systemd-logind[1554]: New session 20 of user core. May 27 03:24:48.572372 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 27 03:24:49.236379 containerd[1579]: time="2025-05-27T03:24:49.236315585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"358497ef0bf33b34bd3cd6a1a064803e99ee6e27fe94b703f7afaefa5900e7b0\" id:\"5e801da95ca8fa85ddcace237c4feb0c71e02bf3e56cacaaec94d80b4fa9975a\" pid:5533 exited_at:{seconds:1748316289 nanos:235916204}" May 27 03:24:49.380242 sshd[5510]: Connection closed by 10.0.0.1 port 51088 May 27 03:24:49.381677 sshd-session[5508]: pam_unix(sshd:session): session closed for user core May 27 03:24:49.393020 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:51088.service: Deactivated successfully. May 27 03:24:49.396410 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:24:49.397413 systemd-logind[1554]: Session 20 logged out. Waiting for processes to exit. May 27 03:24:49.401792 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:51104.service - OpenSSH per-connection server daemon (10.0.0.1:51104). May 27 03:24:49.404056 systemd-logind[1554]: Removed session 20. May 27 03:24:49.467421 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 51104 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:49.469230 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:49.474085 systemd-logind[1554]: New session 21 of user core. May 27 03:24:49.484369 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 03:24:49.743124 sshd[5554]: Connection closed by 10.0.0.1 port 51104 May 27 03:24:49.743672 sshd-session[5552]: pam_unix(sshd:session): session closed for user core May 27 03:24:49.757542 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:51104.service: Deactivated successfully. May 27 03:24:49.759943 systemd[1]: session-21.scope: Deactivated successfully. May 27 03:24:49.761153 systemd-logind[1554]: Session 21 logged out. Waiting for processes to exit. 
May 27 03:24:49.765868 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:51106.service - OpenSSH per-connection server daemon (10.0.0.1:51106). May 27 03:24:49.766736 systemd-logind[1554]: Removed session 21. May 27 03:24:49.823444 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 51106 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:49.825426 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:49.830793 systemd-logind[1554]: New session 22 of user core. May 27 03:24:49.842406 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 03:24:49.966712 sshd[5567]: Connection closed by 10.0.0.1 port 51106 May 27 03:24:49.967081 sshd-session[5565]: pam_unix(sshd:session): session closed for user core May 27 03:24:49.972755 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:51106.service: Deactivated successfully. May 27 03:24:49.975306 systemd[1]: session-22.scope: Deactivated successfully. May 27 03:24:49.976735 systemd-logind[1554]: Session 22 logged out. Waiting for processes to exit. May 27 03:24:49.978976 systemd-logind[1554]: Removed session 22. 
May 27 03:24:50.839032 kubelet[2666]: I0527 03:24:50.838955 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:24:50.923880 containerd[1579]: time="2025-05-27T03:24:50.923800619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:24:51.220756 containerd[1579]: time="2025-05-27T03:24:51.220622675Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:51.243340 containerd[1579]: time="2025-05-27T03:24:51.243191487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:51.243340 containerd[1579]: time="2025-05-27T03:24:51.243242994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:24:51.243565 kubelet[2666]: E0527 03:24:51.243458 2666 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:51.243565 kubelet[2666]: E0527 03:24:51.243506 2666 kuberuntime_image.go:42] "Failed to pull 
image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:24:51.243697 kubelet[2666]: E0527 03:24:51.243629 2666 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ec74fc749bc44b2b952a8b8671ba8094,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gpttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod whisker-57667c847c-4twcl_calico-system(2eb9167b-8371-4b66-b231-77c5171049b6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:51.246162 containerd[1579]: time="2025-05-27T03:24:51.246135381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:24:51.512033 containerd[1579]: time="2025-05-27T03:24:51.511982647Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:51.513060 containerd[1579]: time="2025-05-27T03:24:51.513026244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:51.513172 containerd[1579]: time="2025-05-27T03:24:51.513052614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:24:51.513307 kubelet[2666]: E0527 03:24:51.513254 2666 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:51.513418 kubelet[2666]: E0527 03:24:51.513311 2666 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:24:51.513483 kubelet[2666]: E0527 03:24:51.513441 2666 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpttd,ReadOnly:true,MountPath:/var/run/sec
rets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57667c847c-4twcl_calico-system(2eb9167b-8371-4b66-b231-77c5171049b6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:51.514652 kubelet[2666]: E0527 03:24:51.514612 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57667c847c-4twcl" podUID="2eb9167b-8371-4b66-b231-77c5171049b6" May 27 03:24:53.923598 containerd[1579]: time="2025-05-27T03:24:53.923542325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:24:54.260712 containerd[1579]: time="2025-05-27T03:24:54.260662016Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:24:54.263013 containerd[1579]: time="2025-05-27T03:24:54.262975766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:24:54.263143 containerd[1579]: time="2025-05-27T03:24:54.263006384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:24:54.263298 kubelet[2666]: E0527 03:24:54.263252 2666 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:54.263681 kubelet[2666]: E0527 03:24:54.263313 2666 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:24:54.263681 kubelet[2666]: E0527 03:24:54.263507 2666 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-p
air,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lv7t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-kxppn_calico-system(cc9060dd-bed3-42dd-955b-36f0d660ea40): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:24:54.264977 kubelet[2666]: E0527 
03:24:54.264929 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kxppn" podUID="cc9060dd-bed3-42dd-955b-36f0d660ea40" May 27 03:24:54.983782 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:34302.service - OpenSSH per-connection server daemon (10.0.0.1:34302). May 27 03:24:55.026729 sshd[5585]: Accepted publickey for core from 10.0.0.1 port 34302 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:24:55.028621 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:24:55.033617 systemd-logind[1554]: New session 23 of user core. May 27 03:24:55.042338 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 03:24:55.171533 sshd[5587]: Connection closed by 10.0.0.1 port 34302 May 27 03:24:55.172100 sshd-session[5585]: pam_unix(sshd:session): session closed for user core May 27 03:24:55.177469 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:34302.service: Deactivated successfully. May 27 03:24:55.180258 systemd[1]: session-23.scope: Deactivated successfully. May 27 03:24:55.181460 systemd-logind[1554]: Session 23 logged out. Waiting for processes to exit. May 27 03:24:55.183055 systemd-logind[1554]: Removed session 23. May 27 03:25:00.185624 systemd[1]: Started sshd@23-10.0.0.115:22-10.0.0.1:34316.service - OpenSSH per-connection server daemon (10.0.0.1:34316). 
May 27 03:25:00.260909 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 34316 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:25:00.263146 sshd-session[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:25:00.270176 systemd-logind[1554]: New session 24 of user core. May 27 03:25:00.281334 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 03:25:00.405185 sshd[5605]: Connection closed by 10.0.0.1 port 34316 May 27 03:25:00.405482 sshd-session[5603]: pam_unix(sshd:session): session closed for user core May 27 03:25:00.409719 systemd[1]: sshd@23-10.0.0.115:22-10.0.0.1:34316.service: Deactivated successfully. May 27 03:25:00.412020 systemd[1]: session-24.scope: Deactivated successfully. May 27 03:25:00.412966 systemd-logind[1554]: Session 24 logged out. Waiting for processes to exit. May 27 03:25:00.414296 systemd-logind[1554]: Removed session 24. May 27 03:25:01.075690 containerd[1579]: time="2025-05-27T03:25:01.075636967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"358497ef0bf33b34bd3cd6a1a064803e99ee6e27fe94b703f7afaefa5900e7b0\" id:\"aa8c578cb36af658d48949e6f864d0e955b96d59bffac678974821c053546024\" pid:5631 exited_at:{seconds:1748316301 nanos:75453027}" May 27 03:25:04.924234 kubelet[2666]: E0527 03:25:04.924109 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-kxppn" 
podUID="cc9060dd-bed3-42dd-955b-36f0d660ea40" May 27 03:25:04.926232 kubelet[2666]: E0527 03:25:04.924778 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-57667c847c-4twcl" podUID="2eb9167b-8371-4b66-b231-77c5171049b6" May 27 03:25:05.125579 containerd[1579]: time="2025-05-27T03:25:05.125520641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58d8d6f8493a2ac2460c30779a5cbec6181a209e4b6fa2a6c8c70c7df35dc76a\" id:\"c78f73297b91875b591c923279722a78afd61679cbc4796f8d5b8475286db23b\" pid:5654 exited_at:{seconds:1748316305 nanos:125085997}" May 27 03:25:05.421952 systemd[1]: Started sshd@24-10.0.0.115:22-10.0.0.1:57910.service - OpenSSH per-connection server daemon (10.0.0.1:57910). 
May 27 03:25:05.496340 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 57910 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:25:05.497832 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:25:05.504080 systemd-logind[1554]: New session 25 of user core.
May 27 03:25:05.519455 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 03:25:05.650152 sshd[5670]: Connection closed by 10.0.0.1 port 57910
May 27 03:25:05.650559 sshd-session[5668]: pam_unix(sshd:session): session closed for user core
May 27 03:25:05.656593 systemd[1]: sshd@24-10.0.0.115:22-10.0.0.1:57910.service: Deactivated successfully.
May 27 03:25:05.657552 systemd-logind[1554]: Session 25 logged out. Waiting for processes to exit.
May 27 03:25:05.659618 systemd[1]: session-25.scope: Deactivated successfully.
May 27 03:25:05.661949 systemd-logind[1554]: Removed session 25.
May 27 03:25:07.647539 containerd[1579]: time="2025-05-27T03:25:07.647344657Z" level=info msg="StopContainer for \"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\" with timeout 30 (s)"
May 27 03:25:07.650254 containerd[1579]: time="2025-05-27T03:25:07.649536090Z" level=info msg="Stop container \"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\" with signal terminated"
May 27 03:25:07.668142 systemd[1]: cri-containerd-de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453.scope: Deactivated successfully.
May 27 03:25:07.668889 systemd[1]: cri-containerd-de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453.scope: Consumed 1.012s CPU time, 43.7M memory peak.
May 27 03:25:07.670785 containerd[1579]: time="2025-05-27T03:25:07.670734453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\" id:\"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\" pid:4964 exit_status:1 exited_at:{seconds:1748316307 nanos:670190372}"
May 27 03:25:07.670879 containerd[1579]: time="2025-05-27T03:25:07.670842517Z" level=info msg="received exit event container_id:\"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\" id:\"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\" pid:4964 exit_status:1 exited_at:{seconds:1748316307 nanos:670190372}"
May 27 03:25:07.706525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453-rootfs.mount: Deactivated successfully.
May 27 03:25:07.727590 containerd[1579]: time="2025-05-27T03:25:07.727534442Z" level=info msg="StopContainer for \"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\" returns successfully"
May 27 03:25:07.728217 containerd[1579]: time="2025-05-27T03:25:07.728177852Z" level=info msg="StopPodSandbox for \"b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a\""
May 27 03:25:07.728303 containerd[1579]: time="2025-05-27T03:25:07.728278723Z" level=info msg="Container to stop \"de988fac7514f3e436237b4a1be3a78104d5696454bce8c7b17729936038b453\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 03:25:07.737887 systemd[1]: cri-containerd-b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a.scope: Deactivated successfully.
May 27 03:25:07.740284 containerd[1579]: time="2025-05-27T03:25:07.739486282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a\" id:\"b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a\" pid:4387 exit_status:137 exited_at:{seconds:1748316307 nanos:738166400}"
May 27 03:25:07.783043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a-rootfs.mount: Deactivated successfully.
May 27 03:25:07.784692 containerd[1579]: time="2025-05-27T03:25:07.784641484Z" level=info msg="shim disconnected" id=b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a namespace=k8s.io
May 27 03:25:07.784692 containerd[1579]: time="2025-05-27T03:25:07.784690637Z" level=warning msg="cleaning up after shim disconnected" id=b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a namespace=k8s.io
May 27 03:25:07.784796 containerd[1579]: time="2025-05-27T03:25:07.784700656Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 27 03:25:07.814615 containerd[1579]: time="2025-05-27T03:25:07.813955496Z" level=info msg="received exit event sandbox_id:\"b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a\" exit_status:137 exited_at:{seconds:1748316307 nanos:738166400}"
May 27 03:25:07.818296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a-shm.mount: Deactivated successfully.
May 27 03:25:07.945127 systemd-networkd[1484]: cali00772f80b6b: Link DOWN
May 27 03:25:07.945138 systemd-networkd[1484]: cali00772f80b6b: Lost carrier
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:07.941 [INFO][5758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a"
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:07.943 [INFO][5758] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" iface="eth0" netns="/var/run/netns/cni-e48ecbe0-a7b2-c689-18c1-dfeec837027b"
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:07.944 [INFO][5758] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" iface="eth0" netns="/var/run/netns/cni-e48ecbe0-a7b2-c689-18c1-dfeec837027b"
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:07.952 [INFO][5758] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" after=8.365003ms iface="eth0" netns="/var/run/netns/cni-e48ecbe0-a7b2-c689-18c1-dfeec837027b"
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:07.952 [INFO][5758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a"
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:07.952 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a"
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:07.984 [INFO][5769] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" HandleID="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Workload="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0"
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:07.984 [INFO][5769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:07.985 [INFO][5769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:08.018 [INFO][5769] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" HandleID="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Workload="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0"
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:08.018 [INFO][5769] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" HandleID="k8s-pod-network.b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a" Workload="localhost-k8s-calico--apiserver--8568ddb668--m4k58-eth0"
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:08.019 [INFO][5769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 27 03:25:08.025923 containerd[1579]: 2025-05-27 03:25:08.022 [INFO][5758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a"
May 27 03:25:08.027697 containerd[1579]: time="2025-05-27T03:25:08.027525655Z" level=info msg="TearDown network for sandbox \"b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a\" successfully"
May 27 03:25:08.027697 containerd[1579]: time="2025-05-27T03:25:08.027557996Z" level=info msg="StopPodSandbox for \"b465d6c9d1d566a0efc1aa9b5497a3ba168b68f21390125082c5900f52b3136a\" returns successfully"
May 27 03:25:08.028948 systemd[1]: run-netns-cni\x2de48ecbe0\x2da7b2\x2dc689\x2d18c1\x2ddfeec837027b.mount: Deactivated successfully.
May 27 03:25:08.169156 kubelet[2666]: I0527 03:25:08.169081 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/32672c55-4bd7-4610-9392-e91638e32b95-calico-apiserver-certs\") pod \"32672c55-4bd7-4610-9392-e91638e32b95\" (UID: \"32672c55-4bd7-4610-9392-e91638e32b95\") "
May 27 03:25:08.169684 kubelet[2666]: I0527 03:25:08.169151 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p58gc\" (UniqueName: \"kubernetes.io/projected/32672c55-4bd7-4610-9392-e91638e32b95-kube-api-access-p58gc\") pod \"32672c55-4bd7-4610-9392-e91638e32b95\" (UID: \"32672c55-4bd7-4610-9392-e91638e32b95\") "
May 27 03:25:08.172869 kubelet[2666]: I0527 03:25:08.172823 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32672c55-4bd7-4610-9392-e91638e32b95-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "32672c55-4bd7-4610-9392-e91638e32b95" (UID: "32672c55-4bd7-4610-9392-e91638e32b95"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 03:25:08.172989 kubelet[2666]: I0527 03:25:08.172888 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32672c55-4bd7-4610-9392-e91638e32b95-kube-api-access-p58gc" (OuterVolumeSpecName: "kube-api-access-p58gc") pod "32672c55-4bd7-4610-9392-e91638e32b95" (UID: "32672c55-4bd7-4610-9392-e91638e32b95"). InnerVolumeSpecName "kube-api-access-p58gc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 03:25:08.175841 systemd[1]: var-lib-kubelet-pods-32672c55\x2d4bd7\x2d4610\x2d9392\x2de91638e32b95-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp58gc.mount: Deactivated successfully.
May 27 03:25:08.175992 systemd[1]: var-lib-kubelet-pods-32672c55\x2d4bd7\x2d4610\x2d9392\x2de91638e32b95-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.