May 27 03:15:50.838248 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:15:50.838270 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:15:50.838280 kernel: BIOS-provided physical RAM map:
May 27 03:15:50.838287 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 03:15:50.838294 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 27 03:15:50.838300 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 27 03:15:50.838308 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 27 03:15:50.838314 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 27 03:15:50.838322 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 27 03:15:50.838329 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 27 03:15:50.838335 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 27 03:15:50.838342 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 27 03:15:50.838348 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 27 03:15:50.838355 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 27 03:15:50.838365 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 27 03:15:50.838372 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 27 03:15:50.838379 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 27 03:15:50.838385 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 27 03:15:50.838392 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 27 03:15:50.838399 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 27 03:15:50.838406 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 27 03:15:50.838413 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 27 03:15:50.838419 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 03:15:50.838426 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:15:50.838433 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 27 03:15:50.838441 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:15:50.838448 kernel: NX (Execute Disable) protection: active
May 27 03:15:50.838455 kernel: APIC: Static calls initialized
May 27 03:15:50.838462 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 27 03:15:50.838469 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 27 03:15:50.838476 kernel: extended physical RAM map:
May 27 03:15:50.838483 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 27 03:15:50.838502 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 27 03:15:50.838510 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 27 03:15:50.838516 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 27 03:15:50.838523 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 27 03:15:50.838532 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 27 03:15:50.838539 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 27 03:15:50.838546 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 27 03:15:50.838553 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 27 03:15:50.838563 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 27 03:15:50.838570 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 27 03:15:50.838579 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 27 03:15:50.838587 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 27 03:15:50.838594 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 27 03:15:50.838601 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 27 03:15:50.838608 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 27 03:15:50.838615 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 27 03:15:50.838623 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 27 03:15:50.838630 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 27 03:15:50.838637 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 27 03:15:50.838646 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 27 03:15:50.838653 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 27 03:15:50.838660 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 27 03:15:50.838667 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 27 03:15:50.838675 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:15:50.838682 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 27 03:15:50.838689 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:15:50.838696 kernel: efi: EFI v2.7 by EDK II
May 27 03:15:50.838703 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 27 03:15:50.838710 kernel: random: crng init done
May 27 03:15:50.838718 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 27 03:15:50.838725 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 27 03:15:50.838734 kernel: secureboot: Secure boot disabled
May 27 03:15:50.838741 kernel: SMBIOS 2.8 present.
May 27 03:15:50.838755 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 27 03:15:50.838762 kernel: DMI: Memory slots populated: 1/1
May 27 03:15:50.838769 kernel: Hypervisor detected: KVM
May 27 03:15:50.838776 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:15:50.838783 kernel: kvm-clock: using sched offset of 3724018459 cycles
May 27 03:15:50.838791 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:15:50.838799 kernel: tsc: Detected 2794.748 MHz processor
May 27 03:15:50.838806 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:15:50.838814 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:15:50.838823 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 27 03:15:50.838830 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 27 03:15:50.838838 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:15:50.838845 kernel: Using GB pages for direct mapping
May 27 03:15:50.838852 kernel: ACPI: Early table checksum verification disabled
May 27 03:15:50.838860 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 27 03:15:50.838867 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 27 03:15:50.838875 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:15:50.838882 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:15:50.838892 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 27 03:15:50.838899 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:15:50.838906 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:15:50.838914 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:15:50.838921 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:15:50.838928 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 27 03:15:50.838936 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 27 03:15:50.838943 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
May 27 03:15:50.838953 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 27 03:15:50.838960 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 27 03:15:50.838967 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 27 03:15:50.838974 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 27 03:15:50.838982 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 27 03:15:50.838989 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 27 03:15:50.838996 kernel: No NUMA configuration found
May 27 03:15:50.839004 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 27 03:15:50.839012 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 27 03:15:50.839021 kernel: Zone ranges:
May 27 03:15:50.839031 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:15:50.839040 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 27 03:15:50.839047 kernel: Normal empty
May 27 03:15:50.839055 kernel: Device empty
May 27 03:15:50.839062 kernel: Movable zone start for each node
May 27 03:15:50.839069 kernel: Early memory node ranges
May 27 03:15:50.839076 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 27 03:15:50.839084 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 27 03:15:50.839091 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 27 03:15:50.839100 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 27 03:15:50.839107 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 27 03:15:50.839115 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 27 03:15:50.839122 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 27 03:15:50.839129 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 27 03:15:50.839136 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 27 03:15:50.839143 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:15:50.839151 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 27 03:15:50.839167 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 27 03:15:50.839174 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:15:50.839182 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 27 03:15:50.839189 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 27 03:15:50.839198 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 27 03:15:50.839206 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 27 03:15:50.839213 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 27 03:15:50.839221 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 03:15:50.839229 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:15:50.839239 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:15:50.839246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 03:15:50.839254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:15:50.839262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:15:50.839269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:15:50.839277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:15:50.839284 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:15:50.839292 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:15:50.839299 kernel: TSC deadline timer available
May 27 03:15:50.839309 kernel: CPU topo: Max. logical packages: 1
May 27 03:15:50.839316 kernel: CPU topo: Max. logical dies: 1
May 27 03:15:50.839324 kernel: CPU topo: Max. dies per package: 1
May 27 03:15:50.839331 kernel: CPU topo: Max. threads per core: 1
May 27 03:15:50.839339 kernel: CPU topo: Num. cores per package: 4
May 27 03:15:50.839346 kernel: CPU topo: Num. threads per package: 4
May 27 03:15:50.839354 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 27 03:15:50.839361 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:15:50.839369 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 03:15:50.839376 kernel: kvm-guest: setup PV sched yield
May 27 03:15:50.839386 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 27 03:15:50.839394 kernel: Booting paravirtualized kernel on KVM
May 27 03:15:50.839401 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:15:50.839409 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 27 03:15:50.839417 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 27 03:15:50.839425 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 27 03:15:50.839432 kernel: pcpu-alloc: [0] 0 1 2 3
May 27 03:15:50.839440 kernel: kvm-guest: PV spinlocks enabled
May 27 03:15:50.839448 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:15:50.839458 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:15:50.839466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:15:50.839474 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:15:50.839482 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:15:50.839556 kernel: Fallback order for Node 0: 0
May 27 03:15:50.839567 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 27 03:15:50.839577 kernel: Policy zone: DMA32
May 27 03:15:50.839587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:15:50.839598 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 03:15:50.839606 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:15:50.839614 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:15:50.839621 kernel: Dynamic Preempt: voluntary
May 27 03:15:50.839629 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:15:50.839640 kernel: rcu: RCU event tracing is enabled.
May 27 03:15:50.839648 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 03:15:50.839656 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:15:50.839664 kernel: Rude variant of Tasks RCU enabled.
May 27 03:15:50.839674 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:15:50.839681 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:15:50.839689 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 03:15:50.839697 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:15:50.839704 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:15:50.839712 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 03:15:50.839720 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 27 03:15:50.839727 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:15:50.839735 kernel: Console: colour dummy device 80x25
May 27 03:15:50.839753 kernel: printk: legacy console [ttyS0] enabled
May 27 03:15:50.839761 kernel: ACPI: Core revision 20240827
May 27 03:15:50.839769 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 03:15:50.839776 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:15:50.839784 kernel: x2apic enabled
May 27 03:15:50.839791 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:15:50.839799 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 03:15:50.839807 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 03:15:50.839815 kernel: kvm-guest: setup PV IPIs
May 27 03:15:50.839824 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 03:15:50.839832 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:15:50.839840 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 27 03:15:50.839848 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:15:50.839856 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 03:15:50.839863 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 03:15:50.839871 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:15:50.839878 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:15:50.839887 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:15:50.839900 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 27 03:15:50.839911 kernel: RETBleed: Mitigation: untrained return thunk
May 27 03:15:50.839921 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:15:50.839929 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:15:50.839937 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 03:15:50.839945 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 03:15:50.839953 kernel: x86/bugs: return thunk changed
May 27 03:15:50.839961 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 03:15:50.839971 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:15:50.839978 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:15:50.839986 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:15:50.839994 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:15:50.840003 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 27 03:15:50.840013 kernel: Freeing SMP alternatives memory: 32K
May 27 03:15:50.840023 kernel: pid_max: default: 32768 minimum: 301
May 27 03:15:50.840030 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:15:50.840038 kernel: landlock: Up and running.
May 27 03:15:50.840048 kernel: SELinux: Initializing.
May 27 03:15:50.840055 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:15:50.840063 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:15:50.840071 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 27 03:15:50.840078 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 03:15:50.840086 kernel: ... version: 0
May 27 03:15:50.840093 kernel: ... bit width: 48
May 27 03:15:50.840101 kernel: ... generic registers: 6
May 27 03:15:50.840108 kernel: ... value mask: 0000ffffffffffff
May 27 03:15:50.840118 kernel: ... max period: 00007fffffffffff
May 27 03:15:50.840125 kernel: ... fixed-purpose events: 0
May 27 03:15:50.840133 kernel: ... event mask: 000000000000003f
May 27 03:15:50.840140 kernel: signal: max sigframe size: 1776
May 27 03:15:50.840148 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:15:50.840156 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:15:50.840163 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:15:50.840171 kernel: smp: Bringing up secondary CPUs ...
May 27 03:15:50.840179 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:15:50.840188 kernel: .... node #0, CPUs: #1 #2 #3
May 27 03:15:50.840196 kernel: smp: Brought up 1 node, 4 CPUs
May 27 03:15:50.840203 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 27 03:15:50.840211 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 137196K reserved, 0K cma-reserved)
May 27 03:15:50.840219 kernel: devtmpfs: initialized
May 27 03:15:50.840226 kernel: x86/mm: Memory block size: 128MB
May 27 03:15:50.840234 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 27 03:15:50.840242 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 27 03:15:50.840250 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 27 03:15:50.840260 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 27 03:15:50.840267 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 27 03:15:50.840275 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 27 03:15:50.840283 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:15:50.840290 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 03:15:50.840298 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:15:50.840306 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:15:50.840313 kernel: audit: initializing netlink subsys (disabled)
May 27 03:15:50.840321 kernel: audit: type=2000 audit(1748315748.200:1): state=initialized audit_enabled=0 res=1
May 27 03:15:50.840330 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:15:50.840338 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:15:50.840345 kernel: cpuidle: using governor menu
May 27 03:15:50.840353 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:15:50.840361 kernel: dca service started, version 1.12.1
May 27 03:15:50.840368 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 27 03:15:50.840376 kernel: PCI: Using configuration type 1 for base access
May 27 03:15:50.840384 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:15:50.840391 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:15:50.840401 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:15:50.840409 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:15:50.840416 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:15:50.840424 kernel: ACPI: Added _OSI(Module Device)
May 27 03:15:50.840431 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:15:50.840439 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:15:50.840447 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:15:50.840454 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:15:50.840462 kernel: ACPI: Interpreter enabled
May 27 03:15:50.840471 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 03:15:50.840479 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:15:50.840499 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:15:50.840507 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:15:50.840514 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 03:15:50.840522 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:15:50.840702 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:15:50.840834 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 03:15:50.840953 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 03:15:50.840963 kernel: PCI host bridge to bus 0000:00
May 27 03:15:50.841085 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:15:50.841191 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:15:50.841299 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:15:50.841403 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 27 03:15:50.841521 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 27 03:15:50.841634 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 27 03:15:50.841755 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:15:50.841894 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 03:15:50.842021 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 03:15:50.842143 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 27 03:15:50.842260 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 27 03:15:50.842378 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 27 03:15:50.842510 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:15:50.842670 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 03:15:50.842798 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 27 03:15:50.842921 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 27 03:15:50.843037 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 27 03:15:50.843159 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 27 03:15:50.843288 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 27 03:15:50.843413 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 27 03:15:50.843544 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 27 03:15:50.843669 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 03:15:50.843795 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 27 03:15:50.843911 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 27 03:15:50.844025 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 27 03:15:50.844144 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 27 03:15:50.844272 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 03:15:50.844387 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 03:15:50.844534 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 03:15:50.844685 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 27 03:15:50.844832 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 27 03:15:50.844980 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 03:15:50.845129 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 27 03:15:50.845146 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:15:50.845160 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:15:50.845173 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:15:50.845185 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:15:50.845200 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 03:15:50.845213 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 03:15:50.845229 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 03:15:50.845242 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 03:15:50.845254 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 03:15:50.845267 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 03:15:50.845279 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 03:15:50.845290 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 03:15:50.845300 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 03:15:50.845309 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 03:15:50.845320 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 03:15:50.845331 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 03:15:50.845341 kernel: iommu: Default domain type: Translated
May 27 03:15:50.845350 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:15:50.845359 kernel: efivars: Registered efivars operations
May 27 03:15:50.845368 kernel: PCI: Using ACPI for IRQ routing
May 27 03:15:50.845377 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:15:50.845387 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 27 03:15:50.845396 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 27 03:15:50.845406 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 27 03:15:50.845416 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 27 03:15:50.845428 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 27 03:15:50.845437 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 27 03:15:50.845447 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 27 03:15:50.845456 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 27 03:15:50.845614 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 03:15:50.845733 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 03:15:50.845864 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:15:50.845881 kernel: vgaarb: loaded
May 27 03:15:50.845892 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 03:15:50.845902 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 03:15:50.845912 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:15:50.845920 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:15:50.845928 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:15:50.845936 kernel: pnp: PnP ACPI init
May 27 03:15:50.846105 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 27 03:15:50.846126 kernel: pnp: PnP ACPI: found 6 devices
May 27 03:15:50.846136 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:15:50.846144 kernel: NET: Registered PF_INET protocol family
May 27 03:15:50.846152 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:15:50.846160 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 03:15:50.846171 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:15:50.846182 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:15:50.846193 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 03:15:50.846201 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 03:15:50.846211 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:15:50.846219 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:15:50.846227 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:15:50.846235 kernel: NET: Registered PF_XDP protocol family
May 27 03:15:50.846362 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 27 03:15:50.846480 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 27 03:15:50.846607 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:15:50.846712 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:15:50.846842 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:15:50.846949 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 27 03:15:50.847053 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 27 03:15:50.847157 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 27 03:15:50.847167 kernel: PCI: CLS 0 bytes, default 64
May 27 03:15:50.847176 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 27 03:15:50.847184 kernel: Initialise system trusted keyrings
May 27 03:15:50.847192 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 03:15:50.847200 kernel: Key type asymmetric registered
May 27 03:15:50.847211 kernel: Asymmetric key parser 'x509' registered
May 27 03:15:50.847219 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:15:50.847228 kernel: io scheduler mq-deadline registered
May 27 03:15:50.847235 kernel: io scheduler kyber registered
May 27 03:15:50.847243 kernel: io scheduler bfq registered
May 27 03:15:50.847253 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:15:50.847264 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 03:15:50.847272 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 03:15:50.847280 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 27 03:15:50.847290 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:15:50.847299 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:15:50.847307 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:15:50.847315 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:15:50.847323 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:15:50.847442 kernel: rtc_cmos 00:04: RTC can wake from S4
May 27 03:15:50.847457 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 03:15:50.847595 kernel: rtc_cmos 00:04: registered as rtc0
May 27 03:15:50.847706 kernel: rtc_cmos 00:04: setting system clock to 2025-05-27T03:15:50 UTC (1748315750)
May 27 03:15:50.847825 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 27 03:15:50.847836 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 03:15:50.847844 kernel: efifb: probing for efifb
May 27 03:15:50.847852 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 27 03:15:50.847863 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 27 03:15:50.847871 kernel: efifb: scrolling: redraw
May 27 03:15:50.847880 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 27 03:15:50.847891 kernel: Console: switching to colour frame buffer device 160x50
May 27 03:15:50.847902 kernel: fb0: EFI VGA frame buffer device
May 27 03:15:50.847913 kernel: pstore: Using crash dump compression: deflate
May 27 03:15:50.847922 kernel: pstore: Registered efi_pstore as persistent store backend
May 27 03:15:50.847930 kernel: NET: Registered PF_INET6 protocol family
May 27 03:15:50.847938 kernel: Segment Routing with IPv6
May 27 03:15:50.847948 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:15:50.847957 kernel: NET: Registered PF_PACKET protocol family
May 27 03:15:50.847965 kernel: Key type dns_resolver registered
May 27 03:15:50.847973 kernel: IPI shorthand broadcast: enabled
May 27 03:15:50.847981 kernel: sched_clock: Marking stable (2896002638, 166968298)->(3081192586, -18221650)
May 27 03:15:50.847989 kernel: registered taskstats version 1
May 27 03:15:50.847997 kernel: Loading compiled-in X.509 certificates
May 27 03:15:50.848005 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:15:50.848013 kernel: Demotion targets for Node 0: null
May 27 03:15:50.848021 kernel: Key
type .fscrypt registered May 27 03:15:50.848031 kernel: Key type fscrypt-provisioning registered May 27 03:15:50.848041 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 03:15:50.848052 kernel: ima: Allocated hash algorithm: sha1 May 27 03:15:50.848063 kernel: ima: No architecture policies found May 27 03:15:50.848071 kernel: clk: Disabling unused clocks May 27 03:15:50.848079 kernel: Warning: unable to open an initial console. May 27 03:15:50.848087 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 03:15:50.848095 kernel: Write protecting the kernel read-only data: 24576k May 27 03:15:50.848106 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 03:15:50.848114 kernel: Run /init as init process May 27 03:15:50.848122 kernel: with arguments: May 27 03:15:50.848130 kernel: /init May 27 03:15:50.848138 kernel: with environment: May 27 03:15:50.848146 kernel: HOME=/ May 27 03:15:50.848154 kernel: TERM=linux May 27 03:15:50.848162 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 03:15:50.848171 systemd[1]: Successfully made /usr/ read-only. May 27 03:15:50.848185 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:15:50.848194 systemd[1]: Detected virtualization kvm. May 27 03:15:50.848203 systemd[1]: Detected architecture x86-64. May 27 03:15:50.848211 systemd[1]: Running in initrd. May 27 03:15:50.848220 systemd[1]: No hostname configured, using default hostname. May 27 03:15:50.848228 systemd[1]: Hostname set to . May 27 03:15:50.848237 systemd[1]: Initializing machine ID from VM UUID. May 27 03:15:50.848247 systemd[1]: Queued start job for default target initrd.target. 
May 27 03:15:50.848256 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:15:50.848265 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:15:50.848274 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 03:15:50.848283 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:15:50.848291 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 03:15:50.848301 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 03:15:50.848313 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 03:15:50.848322 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 03:15:50.848331 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:15:50.848339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:15:50.848348 systemd[1]: Reached target paths.target - Path Units. May 27 03:15:50.848356 systemd[1]: Reached target slices.target - Slice Units. May 27 03:15:50.848365 systemd[1]: Reached target swap.target - Swaps. May 27 03:15:50.848374 systemd[1]: Reached target timers.target - Timer Units. May 27 03:15:50.848382 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:15:50.848393 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:15:50.848402 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 03:15:50.848410 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
May 27 03:15:50.848419 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:15:50.848427 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:15:50.848436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:15:50.848444 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:15:50.848453 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 03:15:50.848464 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:15:50.848472 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 03:15:50.848481 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 03:15:50.848503 systemd[1]: Starting systemd-fsck-usr.service... May 27 03:15:50.848512 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:15:50.848521 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 03:15:50.848529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:15:50.848538 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 03:15:50.848551 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:15:50.848562 systemd[1]: Finished systemd-fsck-usr.service. May 27 03:15:50.848574 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:15:50.848605 systemd-journald[219]: Collecting audit messages is disabled. May 27 03:15:50.848628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:15:50.848637 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 27 03:15:50.848646 systemd-journald[219]: Journal started May 27 03:15:50.848670 systemd-journald[219]: Runtime Journal (/run/log/journal/2204d351daba4bc4bcaebee9d749cd55) is 6M, max 48.5M, 42.4M free. May 27 03:15:50.835532 systemd-modules-load[221]: Inserted module 'overlay' May 27 03:15:50.853511 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:15:50.862526 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 03:15:50.865116 systemd-modules-load[221]: Inserted module 'br_netfilter' May 27 03:15:50.866182 kernel: Bridge firewalling registered May 27 03:15:50.874710 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:15:50.875112 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:15:50.877697 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:15:50.878876 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:15:50.882607 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:15:50.899205 systemd-tmpfiles[244]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 03:15:50.900679 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:15:50.901346 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:15:50.905074 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:15:50.907267 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:15:50.915191 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:15:50.917354 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 27 03:15:50.949983 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 03:15:50.950289 systemd-resolved[259]: Positive Trust Anchors: May 27 03:15:50.950300 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:15:50.950332 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:15:50.952953 systemd-resolved[259]: Defaulting to hostname 'linux'. May 27 03:15:50.953986 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:15:50.956349 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 03:15:51.059522 kernel: SCSI subsystem initialized May 27 03:15:51.068509 kernel: Loading iSCSI transport class v2.0-870. May 27 03:15:51.079529 kernel: iscsi: registered transport (tcp) May 27 03:15:51.101527 kernel: iscsi: registered transport (qla4xxx) May 27 03:15:51.101592 kernel: QLogic iSCSI HBA Driver May 27 03:15:51.124546 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
May 27 03:15:51.142958 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:15:51.144878 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 03:15:51.207643 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 03:15:51.209604 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 03:15:51.273541 kernel: raid6: avx2x4 gen() 29490 MB/s May 27 03:15:51.290548 kernel: raid6: avx2x2 gen() 30026 MB/s May 27 03:15:51.307641 kernel: raid6: avx2x1 gen() 25570 MB/s May 27 03:15:51.307685 kernel: raid6: using algorithm avx2x2 gen() 30026 MB/s May 27 03:15:51.325653 kernel: raid6: .... xor() 19598 MB/s, rmw enabled May 27 03:15:51.325692 kernel: raid6: using avx2x2 recovery algorithm May 27 03:15:51.346529 kernel: xor: automatically using best checksumming function avx May 27 03:15:51.517522 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 03:15:51.525353 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 03:15:51.527540 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:15:51.560846 systemd-udevd[471]: Using default interface naming scheme 'v255'. May 27 03:15:51.566377 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:15:51.568064 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 03:15:51.595022 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation May 27 03:15:51.625894 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:15:51.629643 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:15:51.710783 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:15:51.716081 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 27 03:15:51.744551 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 27 03:15:51.747007 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 03:15:51.752393 kernel: cryptd: max_cpu_qlen set to 1000 May 27 03:15:51.752422 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 03:15:51.752437 kernel: GPT:9289727 != 19775487 May 27 03:15:51.753157 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 03:15:51.753180 kernel: GPT:9289727 != 19775487 May 27 03:15:51.754719 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 03:15:51.754748 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:15:51.769707 kernel: AES CTR mode by8 optimization enabled May 27 03:15:51.773512 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 27 03:15:51.795531 kernel: libata version 3.00 loaded. May 27 03:15:51.806333 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:15:51.807651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:15:51.812206 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:15:51.815522 kernel: ahci 0000:00:1f.2: version 3.0 May 27 03:15:51.817524 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 27 03:15:51.819678 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 27 03:15:51.819883 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 27 03:15:51.820021 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 27 03:15:51.818838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:15:51.824528 kernel: scsi host0: ahci May 27 03:15:51.824711 kernel: scsi host1: ahci May 27 03:15:51.824846 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
May 27 03:15:51.827103 kernel: scsi host2: ahci May 27 03:15:51.828529 kernel: scsi host3: ahci May 27 03:15:51.830862 kernel: scsi host4: ahci May 27 03:15:51.831034 kernel: scsi host5: ahci May 27 03:15:51.831175 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 May 27 03:15:51.832633 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 May 27 03:15:51.834533 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 May 27 03:15:51.834560 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 May 27 03:15:51.836860 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 May 27 03:15:51.836881 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 May 27 03:15:51.859273 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 03:15:51.859637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:15:51.873703 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 03:15:51.882070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 03:15:51.890010 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 03:15:51.890102 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 03:15:51.893791 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 03:15:51.916405 disk-uuid[632]: Primary Header is updated. May 27 03:15:51.916405 disk-uuid[632]: Secondary Entries is updated. May 27 03:15:51.916405 disk-uuid[632]: Secondary Header is updated. 
May 27 03:15:51.920299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:15:51.925509 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:15:52.145525 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 27 03:15:52.145589 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 27 03:15:52.146531 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 27 03:15:52.147531 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 27 03:15:52.147556 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 27 03:15:52.148526 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 27 03:15:52.149755 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 27 03:15:52.149768 kernel: ata3.00: applying bridge limits May 27 03:15:52.150918 kernel: ata3.00: configured for UDMA/100 May 27 03:15:52.151514 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 27 03:15:52.192540 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 27 03:15:52.192816 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 27 03:15:52.205518 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 27 03:15:52.609149 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 03:15:52.609801 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:15:52.612590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:15:52.612821 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:15:52.617997 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 03:15:52.646775 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 03:15:52.928335 disk-uuid[633]: The operation has completed successfully. May 27 03:15:52.929708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 03:15:52.966780 systemd[1]: disk-uuid.service: Deactivated successfully. 
May 27 03:15:52.966899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 03:15:52.991577 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 03:15:53.018590 sh[664]: Success May 27 03:15:53.038293 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 03:15:53.038351 kernel: device-mapper: uevent: version 1.0.3 May 27 03:15:53.038363 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 03:15:53.047511 kernel: device-mapper: verity: sha256 using shash "sha256-ni" May 27 03:15:53.077127 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 03:15:53.081191 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 03:15:53.094579 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 03:15:53.101522 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 03:15:53.104391 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (676) May 27 03:15:53.104412 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522 May 27 03:15:53.104429 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 03:15:53.105925 kernel: BTRFS info (device dm-0): using free-space-tree May 27 03:15:53.110212 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 03:15:53.112341 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 03:15:53.114736 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 03:15:53.117349 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
May 27 03:15:53.120579 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 03:15:53.150530 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (708) May 27 03:15:53.150585 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:15:53.153068 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:15:53.153110 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:15:53.160518 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:15:53.161500 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 03:15:53.164643 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 03:15:53.245226 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:15:53.250633 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 27 03:15:53.257716 ignition[758]: Ignition 2.21.0 May 27 03:15:53.258019 ignition[758]: Stage: fetch-offline May 27 03:15:53.258245 ignition[758]: no configs at "/usr/lib/ignition/base.d" May 27 03:15:53.258259 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:15:53.258352 ignition[758]: parsed url from cmdline: "" May 27 03:15:53.258356 ignition[758]: no config URL provided May 27 03:15:53.258364 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" May 27 03:15:53.258373 ignition[758]: no config at "/usr/lib/ignition/user.ign" May 27 03:15:53.258412 ignition[758]: op(1): [started] loading QEMU firmware config module May 27 03:15:53.258418 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 03:15:53.266602 ignition[758]: op(1): [finished] loading QEMU firmware config module May 27 03:15:53.292351 systemd-networkd[850]: lo: Link UP May 27 03:15:53.292360 systemd-networkd[850]: lo: Gained carrier May 27 03:15:53.293900 systemd-networkd[850]: Enumeration completed May 27 03:15:53.294308 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:15:53.294313 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:15:53.294571 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:15:53.294828 systemd-networkd[850]: eth0: Link UP May 27 03:15:53.294833 systemd-networkd[850]: eth0: Gained carrier May 27 03:15:53.294842 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:15:53.296940 systemd[1]: Reached target network.target - Network. 
May 27 03:15:53.321324 ignition[758]: parsing config with SHA512: 3a2e00914128243921f112108fc849abd32b21399832b552abf754a186a56d6bc0b96ad6b62cb67d44aa1fc02a0e1fc73cc7de602ec07a22e42f0eaa38ffd9b1 May 27 03:15:53.325517 unknown[758]: fetched base config from "system" May 27 03:15:53.325529 unknown[758]: fetched user config from "qemu" May 27 03:15:53.325936 ignition[758]: fetch-offline: fetch-offline passed May 27 03:15:53.325989 ignition[758]: Ignition finished successfully May 27 03:15:53.328731 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 03:15:53.329195 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:15:53.332036 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 03:15:53.333184 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 03:15:53.368825 ignition[859]: Ignition 2.21.0 May 27 03:15:53.368841 ignition[859]: Stage: kargs May 27 03:15:53.369008 ignition[859]: no configs at "/usr/lib/ignition/base.d" May 27 03:15:53.369021 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:15:53.369940 ignition[859]: kargs: kargs passed May 27 03:15:53.370003 ignition[859]: Ignition finished successfully May 27 03:15:53.377654 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 03:15:53.378782 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 03:15:53.407077 ignition[867]: Ignition 2.21.0 May 27 03:15:53.407092 ignition[867]: Stage: disks May 27 03:15:53.407216 ignition[867]: no configs at "/usr/lib/ignition/base.d" May 27 03:15:53.407227 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:15:53.407926 ignition[867]: disks: disks passed May 27 03:15:53.411016 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 27 03:15:53.407970 ignition[867]: Ignition finished successfully May 27 03:15:53.411578 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 03:15:53.413249 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 03:15:53.417137 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:15:53.417387 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:15:53.420531 systemd[1]: Reached target basic.target - Basic System. May 27 03:15:53.424335 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 03:15:53.465850 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 03:15:53.474243 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 03:15:53.477985 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 03:15:53.587537 kernel: EXT4-fs (vda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none. May 27 03:15:53.588651 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 03:15:53.589348 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 03:15:53.592766 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:15:53.595274 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 03:15:53.595624 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 03:15:53.595671 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 03:15:53.595712 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:15:53.613014 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
May 27 03:15:53.618815 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (886) May 27 03:15:53.618849 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:15:53.618864 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:15:53.618877 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:15:53.614936 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 03:15:53.623944 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:15:53.654537 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory May 27 03:15:53.658910 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory May 27 03:15:53.663906 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory May 27 03:15:53.667967 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory May 27 03:15:53.758830 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 03:15:53.760695 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 03:15:53.763429 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 03:15:53.783517 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:15:53.797649 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 27 03:15:53.814646 ignition[1001]: INFO : Ignition 2.21.0 May 27 03:15:53.814646 ignition[1001]: INFO : Stage: mount May 27 03:15:53.817138 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:15:53.817138 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 03:15:53.817138 ignition[1001]: INFO : mount: mount passed May 27 03:15:53.821537 ignition[1001]: INFO : Ignition finished successfully May 27 03:15:53.820155 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 03:15:53.822705 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 03:15:54.102195 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 03:15:54.104010 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:15:54.134299 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1013) May 27 03:15:54.134357 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:15:54.134382 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:15:54.135197 kernel: BTRFS info (device vda6): using free-space-tree May 27 03:15:54.140226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 03:15:54.186159 ignition[1030]: INFO : Ignition 2.21.0
May 27 03:15:54.186159 ignition[1030]: INFO : Stage: files
May 27 03:15:54.187958 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:15:54.187958 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:15:54.190347 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
May 27 03:15:54.191527 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 03:15:54.191527 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 03:15:54.194570 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 03:15:54.194570 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 03:15:54.194570 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 03:15:54.193857 unknown[1030]: wrote ssh authorized keys file for user: core
May 27 03:15:54.199872 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 03:15:54.199872 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 27 03:15:54.264692 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 03:15:54.410734 systemd-networkd[850]: eth0: Gained IPv6LL
May 27 03:15:54.413219 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 27 03:15:54.413219 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 27 03:15:54.418528 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 27 03:15:54.418528 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:15:54.418528 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 03:15:54.418528 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:15:54.418528 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 03:15:54.418528 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:15:54.418528 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 03:15:54.418528 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:15:54.436017 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 03:15:54.436017 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:15:54.436017 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:15:54.436017 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:15:54.436017 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 27 03:15:55.124884 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 27 03:15:55.480815 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 27 03:15:55.480815 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 27 03:15:55.484816 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:15:55.648474 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 03:15:55.648474 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 27 03:15:55.648474 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 27 03:15:55.654112 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 03:15:55.654112 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 03:15:55.654112 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 27 03:15:55.654112 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 27 03:15:55.669416 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 27 03:15:55.673313 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 27 03:15:55.675143 ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 27 03:15:55.675143 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 27 03:15:55.675143 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 27 03:15:55.675143 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:15:55.675143 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 03:15:55.675143 ignition[1030]: INFO : files: files passed
May 27 03:15:55.675143 ignition[1030]: INFO : Ignition finished successfully
May 27 03:15:55.681080 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 03:15:55.687391 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 03:15:55.703173 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 03:15:55.707329 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 03:15:55.707461 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 03:15:55.713678 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
May 27 03:15:55.716273 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:15:55.716273 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:15:55.719838 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 03:15:55.719985 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:15:55.721732 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 03:15:55.722764 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 03:15:55.791569 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 03:15:55.791716 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 03:15:55.792965 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 03:15:55.801975 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 03:15:55.804184 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 03:15:55.805270 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 03:15:55.836170 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:15:55.838403 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 03:15:55.861350 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 03:15:55.861634 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:15:55.865954 systemd[1]: Stopped target timers.target - Timer Units.
May 27 03:15:55.867194 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 03:15:55.867372 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 03:15:55.869248 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 03:15:55.869675 systemd[1]: Stopped target basic.target - Basic System.
May 27 03:15:55.870003 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 03:15:55.870351 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:15:55.873392 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 03:15:55.873889 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:15:55.874227 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 03:15:55.874577 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:15:55.875097 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 03:15:55.875425 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 03:15:55.875932 systemd[1]: Stopped target swap.target - Swaps.
May 27 03:15:55.876223 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 03:15:55.876385 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:15:55.898669 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 03:15:55.898880 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:15:55.899165 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 03:15:55.902980 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:15:55.903277 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 03:15:55.903445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 03:15:55.906951 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 03:15:55.907120 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:15:55.909665 systemd[1]: Stopped target paths.target - Path Units.
May 27 03:15:55.910038 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 03:15:55.916683 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:15:55.918173 systemd[1]: Stopped target slices.target - Slice Units.
May 27 03:15:55.920698 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 03:15:55.921735 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 03:15:55.921865 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:15:55.923537 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 03:15:55.923662 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:15:55.924099 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 03:15:55.924263 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 03:15:55.927573 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 03:15:55.927742 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 03:15:55.944012 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 03:15:55.945849 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 03:15:55.946015 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:15:55.948905 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 03:15:55.952828 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 03:15:55.953014 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:15:55.955249 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 03:15:55.955363 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:15:55.962862 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 03:15:55.973797 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 03:15:55.988206 ignition[1086]: INFO : Ignition 2.21.0
May 27 03:15:55.988206 ignition[1086]: INFO : Stage: umount
May 27 03:15:56.000174 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 03:15:56.000174 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 03:15:56.000174 ignition[1086]: INFO : umount: umount passed
May 27 03:15:56.000174 ignition[1086]: INFO : Ignition finished successfully
May 27 03:15:55.992175 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 03:15:55.992338 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 03:15:55.998792 systemd[1]: Stopped target network.target - Network.
May 27 03:15:56.000160 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 03:15:56.000216 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 03:15:56.002334 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 03:15:56.002396 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 03:15:56.003562 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 03:15:56.003616 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 03:15:56.005202 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 03:15:56.005249 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 03:15:56.005646 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 03:15:56.006147 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 03:15:56.007628 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 03:15:56.013367 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 03:15:56.013507 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 03:15:56.021218 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 03:15:56.021553 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 03:15:56.021614 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:15:56.027054 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 03:15:56.027461 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 03:15:56.027660 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 03:15:56.031054 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 03:15:56.031848 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 03:15:56.033235 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 03:15:56.033293 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:15:56.034703 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 03:15:56.038988 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 03:15:56.039096 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:15:56.041737 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 03:15:56.041786 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 03:15:56.046286 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 03:15:56.046341 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 03:15:56.047436 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:15:56.051649 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 03:15:56.075747 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 03:15:56.075886 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 03:15:56.093394 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 03:15:56.093612 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:15:56.094841 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 03:15:56.094890 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 03:15:56.097290 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 03:15:56.097324 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:15:56.099485 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 03:15:56.099557 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:15:56.104033 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 03:15:56.104095 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 03:15:56.107778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 03:15:56.107833 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:15:56.112901 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 03:15:56.113538 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 03:15:56.113597 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:15:56.118181 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 03:15:56.118242 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:15:56.132010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:15:56.132101 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:15:56.144502 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 03:15:56.144637 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 03:15:56.407711 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 03:15:56.407871 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 03:15:56.408299 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 03:15:56.411098 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 03:15:56.411160 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 03:15:56.413148 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 03:15:56.439737 systemd[1]: Switching root.
May 27 03:15:56.478329 systemd-journald[219]: Journal stopped
May 27 03:15:57.776999 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
May 27 03:15:57.777097 kernel: SELinux: policy capability network_peer_controls=1
May 27 03:15:57.777123 kernel: SELinux: policy capability open_perms=1
May 27 03:15:57.777150 kernel: SELinux: policy capability extended_socket_class=1
May 27 03:15:57.777167 kernel: SELinux: policy capability always_check_network=0
May 27 03:15:57.777184 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 03:15:57.777251 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 03:15:57.777279 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 03:15:57.777296 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 03:15:57.777321 kernel: SELinux: policy capability userspace_initial_context=0
May 27 03:15:57.777338 kernel: audit: type=1403 audit(1748315756.902:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 03:15:57.777360 systemd[1]: Successfully loaded SELinux policy in 54.557ms.
May 27 03:15:57.777382 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.350ms.
May 27 03:15:57.777402 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:15:57.777421 systemd[1]: Detected virtualization kvm.
May 27 03:15:57.777442 systemd[1]: Detected architecture x86-64.
May 27 03:15:57.777459 systemd[1]: Detected first boot.
May 27 03:15:57.777477 systemd[1]: Initializing machine ID from VM UUID.
May 27 03:15:57.777561 zram_generator::config[1131]: No configuration found.
May 27 03:15:57.777600 kernel: Guest personality initialized and is inactive
May 27 03:15:57.777619 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 27 03:15:57.777644 kernel: Initialized host personality
May 27 03:15:57.777661 kernel: NET: Registered PF_VSOCK protocol family
May 27 03:15:57.777679 systemd[1]: Populated /etc with preset unit settings.
May 27 03:15:57.777699 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 03:15:57.777725 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 03:15:57.777743 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 03:15:57.777761 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 03:15:57.777783 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 03:15:57.777802 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 03:15:57.777821 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 03:15:57.777839 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 03:15:57.777858 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 03:15:57.777876 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 03:15:57.777895 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 03:15:57.777912 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 03:15:57.777929 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:15:57.777952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:15:57.777970 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 03:15:57.777988 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 03:15:57.778006 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 03:15:57.778025 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:15:57.778043 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 03:15:57.778061 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:15:57.778085 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:15:57.778103 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 03:15:57.778121 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 03:15:57.778138 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 03:15:57.778156 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 03:15:57.778174 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:15:57.778198 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:15:57.778217 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:15:57.778235 systemd[1]: Reached target swap.target - Swaps.
May 27 03:15:57.778253 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 03:15:57.778274 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 03:15:57.778292 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 03:15:57.778310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:15:57.778328 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:15:57.778346 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:15:57.778364 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 03:15:57.778381 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 03:15:57.778399 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 03:15:57.778417 systemd[1]: Mounting media.mount - External Media Directory...
May 27 03:15:57.778438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:57.778456 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 03:15:57.778477 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 03:15:57.778514 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 03:15:57.778534 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 03:15:57.778553 systemd[1]: Reached target machines.target - Containers.
May 27 03:15:57.778571 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 03:15:57.778601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:15:57.778641 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:15:57.778660 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 03:15:57.778678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:15:57.778696 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:15:57.778714 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:15:57.778732 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 03:15:57.778749 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:15:57.778767 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 03:15:57.778789 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 03:15:57.778807 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 03:15:57.778825 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 03:15:57.778843 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 03:15:57.778862 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:15:57.778881 kernel: fuse: init (API version 7.41)
May 27 03:15:57.778898 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:15:57.778919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:15:57.778937 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:15:57.778959 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 03:15:57.778977 kernel: ACPI: bus type drm_connector registered
May 27 03:15:57.778994 kernel: loop: module loaded
May 27 03:15:57.779011 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 03:15:57.779030 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:15:57.779051 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 03:15:57.779069 systemd[1]: Stopped verity-setup.service.
May 27 03:15:57.779087 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:57.779105 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 03:15:57.779123 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 03:15:57.779140 systemd[1]: Mounted media.mount - External Media Directory.
May 27 03:15:57.779158 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 03:15:57.779203 systemd-journald[1206]: Collecting audit messages is disabled.
May 27 03:15:57.779242 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 03:15:57.779262 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 03:15:57.779284 systemd-journald[1206]: Journal started
May 27 03:15:57.779319 systemd-journald[1206]: Runtime Journal (/run/log/journal/2204d351daba4bc4bcaebee9d749cd55) is 6M, max 48.5M, 42.4M free.
May 27 03:15:57.451974 systemd[1]: Queued start job for default target multi-user.target.
May 27 03:15:57.472766 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 03:15:57.473270 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 03:15:57.783526 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:15:57.786131 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 03:15:57.787930 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:15:57.789714 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 03:15:57.790039 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 03:15:57.791965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:15:57.792301 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:15:57.793917 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:15:57.794222 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:15:57.795907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:15:57.796220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:15:57.798025 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 03:15:57.798339 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 03:15:57.800314 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:15:57.800642 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:15:57.802380 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:15:57.804207 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:15:57.806067 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 03:15:57.807976 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 03:15:57.829729 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:15:57.850855 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 03:15:57.853735 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 03:15:57.855273 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 03:15:57.855311 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:15:57.857773 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 03:15:57.862475 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 03:15:57.864034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:15:57.866337 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 03:15:57.870049 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 03:15:57.871542 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:15:57.874600 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 03:15:57.875997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:15:57.877682 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:15:57.878660 systemd-journald[1206]: Time spent on flushing to /var/log/journal/2204d351daba4bc4bcaebee9d749cd55 is 31.699ms for 1058 entries.
May 27 03:15:57.878660 systemd-journald[1206]: System Journal (/var/log/journal/2204d351daba4bc4bcaebee9d749cd55) is 8M, max 195.6M, 187.6M free.
May 27 03:15:58.020283 systemd-journald[1206]: Received client request to flush runtime journal.
May 27 03:15:58.020319 kernel: loop0: detected capacity change from 0 to 146240
May 27 03:15:58.020340 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 03:15:57.881066 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 03:15:57.907681 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 03:15:57.910693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:15:57.912381 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 03:15:57.914797 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 03:15:57.933330 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:15:57.993007 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 03:15:57.994609 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 03:15:57.999206 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 03:15:58.015668 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 03:15:58.021688 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:15:58.025117 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 03:15:58.040519 kernel: loop1: detected capacity change from 0 to 113872
May 27 03:15:58.051157 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 03:15:58.061922 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
May 27 03:15:58.061974 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
May 27 03:15:58.069619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:15:58.077606 kernel: loop2: detected capacity change from 0 to 224512
May 27 03:15:58.103833 kernel: loop3: detected capacity change from 0 to 146240
May 27 03:15:58.120530 kernel: loop4: detected capacity change from 0 to 113872
May 27 03:15:58.129613 kernel: loop5: detected capacity change from 0 to 224512
May 27 03:15:58.139298 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 03:15:58.140405 (sd-merge)[1273]: Merged extensions into '/usr'.
May 27 03:15:58.145668 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 03:15:58.145683 systemd[1]: Reloading...
May 27 03:15:58.210558 zram_generator::config[1299]: No configuration found.
May 27 03:15:58.430916 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:15:58.508126 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 03:15:58.525440 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 03:15:58.525593 systemd[1]: Reloading finished in 379 ms.
May 27 03:15:58.590852 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 03:15:58.592567 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 03:15:58.616422 systemd[1]: Starting ensure-sysext.service...
May 27 03:15:58.618529 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:15:58.667739 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
May 27 03:15:58.667758 systemd[1]: Reloading...
May 27 03:15:58.687394 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 03:15:58.687849 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 03:15:58.688142 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 03:15:58.688384 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 03:15:58.689270 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 03:15:58.689769 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 27 03:15:58.689885 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
May 27 03:15:58.694407 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:15:58.694419 systemd-tmpfiles[1337]: Skipping /boot
May 27 03:15:58.713694 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
May 27 03:15:58.714669 systemd-tmpfiles[1337]: Skipping /boot
May 27 03:15:58.760511 zram_generator::config[1370]: No configuration found.
May 27 03:15:58.861314 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:15:58.948854 systemd[1]: Reloading finished in 280 ms.
May 27 03:15:58.969950 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 03:15:58.995581 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:15:59.005107 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 03:15:59.008039 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 03:15:59.011081 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
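[Annotation: the docker.socket warning above (`ListenStream= references a path below legacy directory /var/run/`) is emitted because the shipped unit still lists the legacy socket path. A minimal sketch of a fix, as a hypothetical local drop-in (the file name is illustrative; the stock unit lives in /usr/lib/systemd/system/docker.socket):]

```ini
# /etc/systemd/system/docker.socket.d/10-run-dir.conf  (hypothetical drop-in)
[Socket]
# Clear the inherited legacy path, then bind the modern one systemd
# already rewrites it to at runtime.
ListenStream=
ListenStream=/run/docker.sock
```

Since systemd only warns and silently maps /var/run/docker.sock to /run/docker.sock, this drop-in would just make the unit match what is already happening.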
May 27 03:15:59.016701 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:15:59.020931 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:15:59.024684 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 03:15:59.031413 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:59.031617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:15:59.036156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:15:59.050979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:15:59.055771 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:15:59.057483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:15:59.057663 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:15:59.064830 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 03:15:59.066078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:59.067479 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 03:15:59.070053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:15:59.070312 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:15:59.072129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:15:59.072703 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:15:59.083567 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:15:59.083847 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:15:59.087773 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 03:15:59.097438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:59.097704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 03:15:59.098141 systemd-udevd[1407]: Using default interface naming scheme 'v255'.
May 27 03:15:59.100452 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 03:15:59.105412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 03:15:59.113090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 03:15:59.126505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 03:15:59.127814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 03:15:59.127929 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 03:15:59.129258 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 03:15:59.129770 augenrules[1440]: No rules
May 27 03:15:59.130459 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 27 03:15:59.131858 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 03:15:59.132111 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 03:15:59.133870 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 03:15:59.136105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 03:15:59.140063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 03:15:59.141950 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 03:15:59.142156 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 03:15:59.144164 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:15:59.145968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 03:15:59.146176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 03:15:59.148475 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 03:15:59.148706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 03:15:59.154743 systemd[1]: Finished ensure-sysext.service.
May 27 03:15:59.157596 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 03:15:59.160781 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 03:15:59.171623 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:15:59.172784 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 03:15:59.172849 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 03:15:59.175234 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 03:15:59.176611 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 03:15:59.263998 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 03:15:59.315599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 03:15:59.318407 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 03:15:59.330657 kernel: mousedev: PS/2 mouse device common for all mice
May 27 03:15:59.336554 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 27 03:15:59.343565 kernel: ACPI: button: Power Button [PWRF]
May 27 03:15:59.351304 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 03:15:59.372784 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 27 03:15:59.373118 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 27 03:15:59.375529 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 27 03:15:59.382352 systemd-networkd[1481]: lo: Link UP
May 27 03:15:59.382366 systemd-networkd[1481]: lo: Gained carrier
May 27 03:15:59.384447 systemd-networkd[1481]: Enumeration completed
May 27 03:15:59.384615 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:15:59.385518 systemd-networkd[1481]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:15:59.385532 systemd-networkd[1481]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:15:59.388040 systemd-networkd[1481]: eth0: Link UP
May 27 03:15:59.388197 systemd-networkd[1481]: eth0: Gained carrier
May 27 03:15:59.388222 systemd-networkd[1481]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:15:59.388743 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 03:15:59.395430 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 03:15:59.399582 systemd-networkd[1481]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 03:15:59.441200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:15:59.541823 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 03:15:59.551698 systemd-resolved[1406]: Positive Trust Anchors:
May 27 03:15:59.551721 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:15:59.551752 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:15:59.554960 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:15:59.555248 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:15:59.559793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
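[Annotation: networkd warns above that the catch-all zz-default.network matched eth0 "based on potentially unpredictable interface name". A sketch of a more robust per-interface file, matching on the NIC's MAC address instead of its name (the MAC below is a placeholder, not taken from this host):]

```ini
# /etc/systemd/network/10-eth0.network  (hypothetical; replace the MAC)
[Match]
MACAddress=52:54:00:12:34:56

[Network]
DHCP=ipv4
```

Files in /etc/systemd/network/ sort before the shipped zz-default.network, so a match here would take precedence and silence the warning for that interface.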
May 27 03:15:59.603147 systemd-resolved[1406]: Defaulting to hostname 'linux'.
May 27 03:15:59.603985 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 03:16:00.239377 systemd-resolved[1406]: Clock change detected. Flushing caches.
May 27 03:16:00.239526 systemd-timesyncd[1484]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 27 03:16:00.239794 systemd-timesyncd[1484]: Initial clock synchronization to Tue 2025-05-27 03:16:00.239296 UTC.
May 27 03:16:00.240394 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:16:00.242048 systemd[1]: Reached target network.target - Network.
May 27 03:16:00.243277 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:16:00.244735 systemd[1]: Reached target time-set.target - System Time Set.
May 27 03:16:00.298475 kernel: kvm_amd: TSC scaling supported
May 27 03:16:00.298547 kernel: kvm_amd: Nested Virtualization enabled
May 27 03:16:00.298560 kernel: kvm_amd: Nested Paging enabled
May 27 03:16:00.299159 kernel: kvm_amd: LBR virtualization supported
May 27 03:16:00.300519 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 27 03:16:00.300561 kernel: kvm_amd: Virtual GIF supported
May 27 03:16:00.309315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:16:00.311436 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:16:00.313306 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 03:16:00.315112 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 03:16:00.316714 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 27 03:16:00.318614 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 03:16:00.320107 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 03:16:00.321677 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 03:16:00.323917 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 03:16:00.323964 systemd[1]: Reached target paths.target - Path Units.
May 27 03:16:00.325091 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:16:00.327742 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 03:16:00.335060 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 03:16:00.339912 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 03:16:00.341711 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 03:16:00.343151 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 03:16:00.345128 kernel: EDAC MC: Ver: 3.0.0
May 27 03:16:00.346618 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 03:16:00.349349 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 03:16:00.351353 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 03:16:00.353227 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:16:00.354227 systemd[1]: Reached target basic.target - Basic System.
May 27 03:16:00.355259 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 03:16:00.355294 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 03:16:00.356397 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 03:16:00.358820 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 03:16:00.370498 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 03:16:00.378027 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 03:16:00.380355 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 03:16:00.381498 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 03:16:00.383070 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 27 03:16:00.385997 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 03:16:00.386200 jq[1537]: false
May 27 03:16:00.388169 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 03:16:00.393291 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 03:16:00.395854 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 03:16:00.398815 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing passwd entry cache
May 27 03:16:00.398829 oslogin_cache_refresh[1539]: Refreshing passwd entry cache
May 27 03:16:00.402109 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 03:16:00.404091 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 03:16:00.404589 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 03:16:00.406838 extend-filesystems[1538]: Found loop3
May 27 03:16:00.406838 extend-filesystems[1538]: Found loop4
May 27 03:16:00.406838 extend-filesystems[1538]: Found loop5
May 27 03:16:00.406838 extend-filesystems[1538]: Found sr0
May 27 03:16:00.406838 extend-filesystems[1538]: Found vda
May 27 03:16:00.406838 extend-filesystems[1538]: Found vda1
May 27 03:16:00.406777 systemd[1]: Starting update-engine.service - Update Engine...
May 27 03:16:00.426153 extend-filesystems[1538]: Found vda2
May 27 03:16:00.426153 extend-filesystems[1538]: Found vda3
May 27 03:16:00.426153 extend-filesystems[1538]: Found usr
May 27 03:16:00.426153 extend-filesystems[1538]: Found vda4
May 27 03:16:00.426153 extend-filesystems[1538]: Found vda6
May 27 03:16:00.426153 extend-filesystems[1538]: Found vda7
May 27 03:16:00.426153 extend-filesystems[1538]: Found vda9
May 27 03:16:00.426153 extend-filesystems[1538]: Checking size of /dev/vda9
May 27 03:16:00.408635 oslogin_cache_refresh[1539]: Failure getting users, quitting
May 27 03:16:00.435734 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting users, quitting
May 27 03:16:00.435734 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:16:00.435734 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing group entry cache
May 27 03:16:00.435734 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting groups, quitting
May 27 03:16:00.435734 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:16:00.409911 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 03:16:00.408654 oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 27 03:16:00.416681 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 03:16:00.408702 oslogin_cache_refresh[1539]: Refreshing group entry cache
May 27 03:16:00.424007 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 03:16:00.417702 oslogin_cache_refresh[1539]: Failure getting groups, quitting
May 27 03:16:00.437907 jq[1553]: true
May 27 03:16:00.429446 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 03:16:00.439273 extend-filesystems[1538]: Resized partition /dev/vda9
May 27 03:16:00.417717 oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 27 03:16:00.429878 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 27 03:16:00.431214 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 27 03:16:00.433347 systemd[1]: motdgen.service: Deactivated successfully.
May 27 03:16:00.433660 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 03:16:00.438726 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 03:16:00.439023 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 03:16:00.459776 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 03:16:00.470851 jq[1562]: true
May 27 03:16:00.471914 extend-filesystems[1574]: resize2fs 1.47.2 (1-Jan-2025)
May 27 03:16:00.478831 update_engine[1549]: I20250527 03:16:00.478749 1549 main.cc:92] Flatcar Update Engine starting
May 27 03:16:00.496627 systemd-logind[1547]: Watching system buttons on /dev/input/event2 (Power Button)
May 27 03:16:00.496653 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 27 03:16:00.501144 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 27 03:16:00.503311 systemd-logind[1547]: New seat seat0.
May 27 03:16:00.504366 tar[1561]: linux-amd64/LICENSE
May 27 03:16:00.504772 tar[1561]: linux-amd64/helm
May 27 03:16:00.506750 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 03:16:00.532343 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 03:16:00.558027 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 03:16:00.561514 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 03:16:00.588261 dbus-daemon[1535]: [system] SELinux support is enabled
May 27 03:16:00.588444 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 03:16:00.600902 update_engine[1549]: I20250527 03:16:00.592319 1549 update_check_scheduler.cc:74] Next update check in 7m18s
May 27 03:16:00.597266 systemd[1]: issuegen.service: Deactivated successfully.
May 27 03:16:00.597584 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 03:16:00.600555 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 03:16:00.601509 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 03:16:00.601668 dbus-daemon[1535]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 27 03:16:00.604612 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 03:16:00.606074 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 03:16:00.606133 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 03:16:00.607989 systemd[1]: Started update-engine.service - Update Engine.
May 27 03:16:00.617162 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 03:16:00.680588 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 03:16:00.686865 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 03:16:00.690007 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 03:16:00.691610 systemd[1]: Reached target getty.target - Login Prompts.
May 27 03:16:00.790110 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 27 03:16:00.816226 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 03:16:01.585991 containerd[1563]: time="2025-05-27T03:16:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 03:16:01.587120 containerd[1563]: time="2025-05-27T03:16:01.586909417Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 03:16:01.599480 containerd[1563]: time="2025-05-27T03:16:01.599419888Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.962µs"
May 27 03:16:01.599480 containerd[1563]: time="2025-05-27T03:16:01.599468860Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 03:16:01.599480 containerd[1563]: time="2025-05-27T03:16:01.599491473Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 03:16:01.599768 containerd[1563]: time="2025-05-27T03:16:01.599745910Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 03:16:01.599819 containerd[1563]: time="2025-05-27T03:16:01.599768131Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 03:16:01.599819 containerd[1563]: time="2025-05-27T03:16:01.599805932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:16:01.599937 containerd[1563]: time="2025-05-27T03:16:01.599901742Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 03:16:01.599937 containerd[1563]: time="2025-05-27T03:16:01.599925046Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:16:01.600305 containerd[1563]: time="2025-05-27T03:16:01.600274932Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 03:16:01.600305 containerd[1563]: time="2025-05-27T03:16:01.600294569Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:16:01.600352 containerd[1563]: time="2025-05-27T03:16:01.600341076Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 03:16:01.600372 containerd[1563]: time="2025-05-27T03:16:01.600352688Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 03:16:01.600490 containerd[1563]: time="2025-05-27T03:16:01.600466150Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 03:16:01.600738 containerd[1563]: time="2025-05-27T03:16:01.600711661Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:16:01.600764 containerd[1563]: time="2025-05-27T03:16:01.600743501Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 03:16:01.600764 containerd[1563]: time="2025-05-27T03:16:01.600753159Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 03:16:01.600815 containerd[1563]: time="2025-05-27T03:16:01.600784177Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 03:16:01.601877 containerd[1563]: time="2025-05-27T03:16:01.601610907Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 03:16:01.601877 containerd[1563]: time="2025-05-27T03:16:01.601713870Z" level=info msg="metadata content store policy set" policy=shared
May 27 03:16:01.622301 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 27 03:16:01.622301 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 1
May 27 03:16:01.622301 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 27 03:16:01.657423 extend-filesystems[1538]: Resized filesystem in /dev/vda9
May 27 03:16:01.625270 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 03:16:01.625538 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 03:16:01.856829 tar[1561]: linux-amd64/README.md May 27 03:16:01.863136 bash[1592]: Updated "/home/core/.ssh/authorized_keys" May 27 03:16:01.866427 containerd[1563]: time="2025-05-27T03:16:01.866351581Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866453331Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866477577Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866493397Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866506511Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866517892Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866532159Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866546797Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866569389Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866588876Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 03:16:01.868299 containerd[1563]: 
time="2025-05-27T03:16:01.866600327Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866622549Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866775976Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866798699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 03:16:01.868299 containerd[1563]: time="2025-05-27T03:16:01.866815991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.866826451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.866836330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.866854043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.866865524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.866877487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.866888918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.866898546Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.866909337Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.866998203Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.867023961Z" level=info msg="Start snapshots syncer" May 27 03:16:01.868549 containerd[1563]: time="2025-05-27T03:16:01.867056092Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 03:16:01.868745 containerd[1563]: time="2025-05-27T03:16:01.867351005Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController
\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 03:16:01.868745 containerd[1563]: time="2025-05-27T03:16:01.867407771Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867497339Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867622504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867643754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867655907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867668039Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867685462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867698777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks 
type=io.containerd.grpc.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867715438Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867742038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867757447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867773277Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867816668Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867836054Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:16:01.868904 containerd[1563]: time="2025-05-27T03:16:01.867848017Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:16:01.869191 containerd[1563]: time="2025-05-27T03:16:01.867862865Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:16:01.869191 containerd[1563]: time="2025-05-27T03:16:01.867873084Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:16:01.869191 containerd[1563]: time="2025-05-27T03:16:01.867884295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:16:01.869191 containerd[1563]: 
time="2025-05-27T03:16:01.867903972Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:16:01.869191 containerd[1563]: time="2025-05-27T03:16:01.867929099Z" level=info msg="runtime interface created" May 27 03:16:01.869191 containerd[1563]: time="2025-05-27T03:16:01.867936463Z" level=info msg="created NRI interface" May 27 03:16:01.869191 containerd[1563]: time="2025-05-27T03:16:01.867946251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:16:01.869191 containerd[1563]: time="2025-05-27T03:16:01.867959646Z" level=info msg="Connect containerd service" May 27 03:16:01.869191 containerd[1563]: time="2025-05-27T03:16:01.868014499Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 03:16:01.869191 containerd[1563]: time="2025-05-27T03:16:01.868924856Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:16:01.869554 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:16:01.872442 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 27 03:16:01.876205 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 03:16:02.018415 systemd-networkd[1481]: eth0: Gained IPv6LL May 27 03:16:02.022240 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 03:16:02.024288 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:16:02.027172 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 03:16:02.030199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
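The `failed to load cni during init` error above is expected at this stage: containerd found no network configuration under /etc/cni/net.d and will pick one up once it appears (typically written later by a CNI plugin's DaemonSet). For reference, a minimal sketch of what such a config looks like (file name `10-mynet.conflist`, plugin choice, and subnet are illustrative assumptions, not taken from this log):

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
```

Dropping a file like this into /etc/cni/net.d would clear the CRI plugin's "cni plugin not initialized" state on the next conf-syncer pass.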
May 27 03:16:02.033450 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:16:02.066135 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 03:16:02.072822 containerd[1563]: time="2025-05-27T03:16:02.072735474Z" level=info msg="Start subscribing containerd event" May 27 03:16:02.072909 containerd[1563]: time="2025-05-27T03:16:02.072819832Z" level=info msg="Start recovering state" May 27 03:16:02.073168 containerd[1563]: time="2025-05-27T03:16:02.072958633Z" level=info msg="Start event monitor" May 27 03:16:02.073168 containerd[1563]: time="2025-05-27T03:16:02.072984912Z" level=info msg="Start cni network conf syncer for default" May 27 03:16:02.073168 containerd[1563]: time="2025-05-27T03:16:02.072993678Z" level=info msg="Start streaming server" May 27 03:16:02.073168 containerd[1563]: time="2025-05-27T03:16:02.073012233Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 03:16:02.073168 containerd[1563]: time="2025-05-27T03:16:02.073021480Z" level=info msg="runtime interface starting up..." May 27 03:16:02.073168 containerd[1563]: time="2025-05-27T03:16:02.073034395Z" level=info msg="starting plugins..." May 27 03:16:02.073168 containerd[1563]: time="2025-05-27T03:16:02.073052228Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:16:02.073442 containerd[1563]: time="2025-05-27T03:16:02.073218450Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:16:02.073442 containerd[1563]: time="2025-05-27T03:16:02.073330320Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 03:16:02.073610 systemd[1]: Started containerd.service - containerd container runtime. May 27 03:16:02.074509 containerd[1563]: time="2025-05-27T03:16:02.074446994Z" level=info msg="containerd successfully booted in 0.585807s" May 27 03:16:02.103999 systemd[1]: coreos-metadata.service: Deactivated successfully. 
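The containerd entries above are logfmt-style key=value records, which makes them easy to query when triaging a boot log like this one. A minimal parser sketch (the regex and field handling are my own, not part of containerd):

```python
import re

# Split a logfmt-style record (as emitted by containerd above) into a dict.
# Handles bare values and double-quoted values containing spaces; quoted
# values with embedded '=' signs would need a stricter tokenizer.
_PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line: str) -> dict:
    out = {}
    for key, val in _PAIR.findall(line):
        if val.startswith('"') and val.endswith('"'):
            val = val[1:-1].replace('\\"', '"')  # strip quotes, unescape
        out[key] = val
    return out

record = parse_logfmt(
    'time="2025-05-27T03:16:02.074446994Z" level=info '
    'msg="containerd successfully booted in 0.585807s"'
)
```

Filtering on `record["level"]` or `record["id"]` then recovers, for example, the skip-loading decisions for the btrfs, devmapper, and zfs snapshotters shown earlier.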
May 27 03:16:02.104332 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 03:16:02.106916 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 03:16:03.294453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:16:03.297586 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 03:16:03.300242 systemd[1]: Startup finished in 2.976s (kernel) + 6.243s (initrd) + 5.819s (userspace) = 15.040s. May 27 03:16:03.310246 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:16:03.961398 kubelet[1665]: E0527 03:16:03.961326 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:16:03.966463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:16:03.966707 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:16:03.967222 systemd[1]: kubelet.service: Consumed 1.686s CPU time, 265.7M memory peak. May 27 03:16:04.584829 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:16:04.586145 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:36870.service - OpenSSH per-connection server daemon (10.0.0.1:36870). May 27 03:16:04.655288 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 36870 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:16:04.657384 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:16:04.665246 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 27 03:16:04.666518 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 03:16:04.675111 systemd-logind[1547]: New session 1 of user core. May 27 03:16:04.693162 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 03:16:04.696100 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 03:16:04.728782 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 03:16:04.731521 systemd-logind[1547]: New session c1 of user core. May 27 03:16:04.897255 systemd[1683]: Queued start job for default target default.target. May 27 03:16:04.917440 systemd[1683]: Created slice app.slice - User Application Slice. May 27 03:16:04.917472 systemd[1683]: Reached target paths.target - Paths. May 27 03:16:04.917524 systemd[1683]: Reached target timers.target - Timers. May 27 03:16:04.919268 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 03:16:04.931177 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 03:16:04.931324 systemd[1683]: Reached target sockets.target - Sockets. May 27 03:16:04.931371 systemd[1683]: Reached target basic.target - Basic System. May 27 03:16:04.931418 systemd[1683]: Reached target default.target - Main User Target. May 27 03:16:04.931456 systemd[1683]: Startup finished in 192ms. May 27 03:16:04.931877 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 03:16:04.933739 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 03:16:04.996949 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:36882.service - OpenSSH per-connection server daemon (10.0.0.1:36882). 
May 27 03:16:05.057595 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 36882 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:16:05.059187 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:16:05.063929 systemd-logind[1547]: New session 2 of user core. May 27 03:16:05.072260 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 03:16:05.127064 sshd[1696]: Connection closed by 10.0.0.1 port 36882 May 27 03:16:05.127386 sshd-session[1694]: pam_unix(sshd:session): session closed for user core May 27 03:16:05.144604 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:36882.service: Deactivated successfully. May 27 03:16:05.146859 systemd[1]: session-2.scope: Deactivated successfully. May 27 03:16:05.147717 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit. May 27 03:16:05.151482 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:36884.service - OpenSSH per-connection server daemon (10.0.0.1:36884). May 27 03:16:05.152305 systemd-logind[1547]: Removed session 2. May 27 03:16:05.202313 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 36884 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:16:05.204069 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:16:05.208981 systemd-logind[1547]: New session 3 of user core. May 27 03:16:05.217251 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 03:16:05.266829 sshd[1704]: Connection closed by 10.0.0.1 port 36884 May 27 03:16:05.267276 sshd-session[1702]: pam_unix(sshd:session): session closed for user core May 27 03:16:05.279986 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:36884.service: Deactivated successfully. May 27 03:16:05.281691 systemd[1]: session-3.scope: Deactivated successfully. May 27 03:16:05.282544 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit. 
May 27 03:16:05.285773 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:36896.service - OpenSSH per-connection server daemon (10.0.0.1:36896). May 27 03:16:05.286373 systemd-logind[1547]: Removed session 3. May 27 03:16:05.349984 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 36896 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:16:05.351723 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:16:05.356437 systemd-logind[1547]: New session 4 of user core. May 27 03:16:05.366349 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 03:16:05.422197 sshd[1712]: Connection closed by 10.0.0.1 port 36896 May 27 03:16:05.422463 sshd-session[1710]: pam_unix(sshd:session): session closed for user core May 27 03:16:05.436907 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:36896.service: Deactivated successfully. May 27 03:16:05.439282 systemd[1]: session-4.scope: Deactivated successfully. May 27 03:16:05.440113 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit. May 27 03:16:05.443844 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:36904.service - OpenSSH per-connection server daemon (10.0.0.1:36904). May 27 03:16:05.444708 systemd-logind[1547]: Removed session 4. May 27 03:16:05.500638 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 36904 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:16:05.502602 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:16:05.507655 systemd-logind[1547]: New session 5 of user core. May 27 03:16:05.521412 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 03:16:05.581598 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 03:16:05.581912 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:16:05.601983 sudo[1721]: pam_unix(sudo:session): session closed for user root May 27 03:16:05.603678 sshd[1720]: Connection closed by 10.0.0.1 port 36904 May 27 03:16:05.604189 sshd-session[1718]: pam_unix(sshd:session): session closed for user core May 27 03:16:05.623898 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:36904.service: Deactivated successfully. May 27 03:16:05.625811 systemd[1]: session-5.scope: Deactivated successfully. May 27 03:16:05.626757 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit. May 27 03:16:05.629921 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:36908.service - OpenSSH per-connection server daemon (10.0.0.1:36908). May 27 03:16:05.630758 systemd-logind[1547]: Removed session 5. May 27 03:16:05.682583 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 36908 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:16:05.684290 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:16:05.688753 systemd-logind[1547]: New session 6 of user core. May 27 03:16:05.698187 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 27 03:16:05.752355 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 03:16:05.752728 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:16:06.376992 sudo[1731]: pam_unix(sudo:session): session closed for user root May 27 03:16:06.384702 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 03:16:06.385169 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:16:06.396429 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:16:06.461431 augenrules[1753]: No rules May 27 03:16:06.463445 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:16:06.463725 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:16:06.465007 sudo[1730]: pam_unix(sudo:session): session closed for user root May 27 03:16:06.467071 sshd[1729]: Connection closed by 10.0.0.1 port 36908 May 27 03:16:06.467523 sshd-session[1727]: pam_unix(sshd:session): session closed for user core May 27 03:16:06.477784 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:36908.service: Deactivated successfully. May 27 03:16:06.479719 systemd[1]: session-6.scope: Deactivated successfully. May 27 03:16:06.480595 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit. May 27 03:16:06.485097 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:36912.service - OpenSSH per-connection server daemon (10.0.0.1:36912). May 27 03:16:06.485932 systemd-logind[1547]: Removed session 6. May 27 03:16:06.543518 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 36912 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:16:06.545570 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:16:06.552060 systemd-logind[1547]: New session 7 of user core. 
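The `augenrules[1753]: No rules` message above follows directly from the two sudo invocations in this session: the default files under /etc/audit/rules.d/ were removed before audit-rules was restarted, so augenrules compiled an empty rule set. A minimal sketch of what a replacement rules file could contain (path and rules are illustrative assumptions, not taken from this log):

```
# Hypothetical /etc/audit/rules.d/10-base.rules -- illustrative only
-D                                  # delete any previously loaded rules
-b 8192                             # kernel audit backlog buffer size
-w /etc/passwd -p wa -k identity    # watch writes/attr changes to /etc/passwd
```

With a file like this present, the same `systemctl restart audit-rules` would load the compiled rules instead of reporting none.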
May 27 03:16:06.559277 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 03:16:06.616330 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 03:16:06.616762 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:16:07.540541 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 03:16:07.567608 (dockerd)[1785]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 03:16:08.014718 dockerd[1785]: time="2025-05-27T03:16:08.014575976Z" level=info msg="Starting up" May 27 03:16:08.016656 dockerd[1785]: time="2025-05-27T03:16:08.016609830Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 03:16:09.211510 dockerd[1785]: time="2025-05-27T03:16:09.211424755Z" level=info msg="Loading containers: start." May 27 03:16:09.225194 kernel: Initializing XFRM netlink socket May 27 03:16:09.624450 systemd-networkd[1481]: docker0: Link UP May 27 03:16:09.719608 dockerd[1785]: time="2025-05-27T03:16:09.719507461Z" level=info msg="Loading containers: done." 
May 27 03:16:09.803548 dockerd[1785]: time="2025-05-27T03:16:09.803473577Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 03:16:09.803745 dockerd[1785]: time="2025-05-27T03:16:09.803599263Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 03:16:09.803745 dockerd[1785]: time="2025-05-27T03:16:09.803732834Z" level=info msg="Initializing buildkit" May 27 03:16:09.966404 dockerd[1785]: time="2025-05-27T03:16:09.966236185Z" level=info msg="Completed buildkit initialization" May 27 03:16:09.973297 dockerd[1785]: time="2025-05-27T03:16:09.973227998Z" level=info msg="Daemon has completed initialization" May 27 03:16:09.973437 dockerd[1785]: time="2025-05-27T03:16:09.973348013Z" level=info msg="API listen on /run/docker.sock" May 27 03:16:09.973544 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 03:16:10.942390 containerd[1563]: time="2025-05-27T03:16:10.942343215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 27 03:16:11.876775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385944699.mount: Deactivated successfully. 
May 27 03:16:14.039514 containerd[1563]: time="2025-05-27T03:16:14.039451678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:14.040638 containerd[1563]: time="2025-05-27T03:16:14.040565006Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 27 03:16:14.042045 containerd[1563]: time="2025-05-27T03:16:14.041992714Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:14.044668 containerd[1563]: time="2025-05-27T03:16:14.044620281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:14.045483 containerd[1563]: time="2025-05-27T03:16:14.045451150Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 3.103056689s" May 27 03:16:14.045545 containerd[1563]: time="2025-05-27T03:16:14.045489552Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 27 03:16:14.046254 containerd[1563]: time="2025-05-27T03:16:14.046227465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 27 03:16:14.217106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 27 03:16:14.218640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:16:14.748439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:16:14.766485 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:16:15.499804 kubelet[2060]: E0527 03:16:15.499716 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:16:15.520341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:16:15.521425 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:16:15.521945 systemd[1]: kubelet.service: Consumed 295ms CPU time, 114M memory peak. 
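The repeated kubelet failures above have a single cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is generated by `kubeadm init` or `kubeadm join`, so this crash/restart loop is expected until the node is bootstrapped. For reference, a minimal sketch of such a KubeletConfiguration (field values are illustrative assumptions, not taken from this log):

```yaml
# Hypothetical /var/lib/kubelet/config.yaml; normally written by kubeadm.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # consistent with SystemdCgroup=true in the containerd config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
failSwapOn: false
```

Once a real config is in place, the scheduled restart shown here would bring kubelet up instead of exiting with status 1.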
May 27 03:16:17.874953 containerd[1563]: time="2025-05-27T03:16:17.874874175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:17.876799 containerd[1563]: time="2025-05-27T03:16:17.876754681Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 27 03:16:17.884958 containerd[1563]: time="2025-05-27T03:16:17.884895498Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:17.888533 containerd[1563]: time="2025-05-27T03:16:17.888454693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:17.903514 containerd[1563]: time="2025-05-27T03:16:17.901248296Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 3.854982278s" May 27 03:16:17.903514 containerd[1563]: time="2025-05-27T03:16:17.903150904Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 27 03:16:17.904262 containerd[1563]: time="2025-05-27T03:16:17.904194030Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 27 03:16:20.143359 containerd[1563]: time="2025-05-27T03:16:20.143206237Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:20.196165 containerd[1563]: time="2025-05-27T03:16:20.196102015Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 27 03:16:20.245830 containerd[1563]: time="2025-05-27T03:16:20.245731218Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:20.326534 containerd[1563]: time="2025-05-27T03:16:20.326466555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:20.327714 containerd[1563]: time="2025-05-27T03:16:20.327669501Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 2.423408436s" May 27 03:16:20.327779 containerd[1563]: time="2025-05-27T03:16:20.327722681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 27 03:16:20.328312 containerd[1563]: time="2025-05-27T03:16:20.328285597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 27 03:16:21.842108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803818825.mount: Deactivated successfully. 
May 27 03:16:22.949927 containerd[1563]: time="2025-05-27T03:16:22.949857951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:22.950699 containerd[1563]: time="2025-05-27T03:16:22.950666968Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 27 03:16:22.952075 containerd[1563]: time="2025-05-27T03:16:22.952039762Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:22.955487 containerd[1563]: time="2025-05-27T03:16:22.955441271Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.627118786s" May 27 03:16:22.955839 containerd[1563]: time="2025-05-27T03:16:22.955673577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:22.955988 containerd[1563]: time="2025-05-27T03:16:22.955806937Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 27 03:16:22.956897 containerd[1563]: time="2025-05-27T03:16:22.956619531Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 03:16:25.175139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468004197.mount: Deactivated successfully. 
May 27 03:16:25.764025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 03:16:25.766153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:16:26.044968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:16:26.055572 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:16:26.261003 kubelet[2146]: E0527 03:16:26.260932 2146 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:16:26.265864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:16:26.266184 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:16:26.266678 systemd[1]: kubelet.service: Consumed 301ms CPU time, 110.9M memory peak. 
May 27 03:16:27.174266 containerd[1563]: time="2025-05-27T03:16:27.174158472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:27.174936 containerd[1563]: time="2025-05-27T03:16:27.174878112Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 27 03:16:27.176492 containerd[1563]: time="2025-05-27T03:16:27.176410465Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:27.181615 containerd[1563]: time="2025-05-27T03:16:27.181555364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:27.182769 containerd[1563]: time="2025-05-27T03:16:27.182708236Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.226050022s" May 27 03:16:27.182769 containerd[1563]: time="2025-05-27T03:16:27.182761195Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 03:16:27.183330 containerd[1563]: time="2025-05-27T03:16:27.183294626Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:16:27.686850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3911940542.mount: Deactivated successfully. 
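Each "Pulled image" record above pairs a reported size with a wall-clock duration, which gives an effective transfer rate for the pull. A small sketch using the coredns figures from the record above (size and seconds copied from the log; the function name is mine):

```python
def pull_rate_bytes_per_sec(size_bytes: int, seconds: float) -> float:
    # Effective rate = reported image size / reported pull duration.
    return size_bytes / seconds

# Figures from the coredns "Pulled image" record above:
#   size "18562039" pulled in 4.226050022s, i.e. roughly 4.4 MB/s.
coredns_rate = pull_rate_bytes_per_sec(18562039, 4.226050022)
```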
May 27 03:16:27.695443 containerd[1563]: time="2025-05-27T03:16:27.695367425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:16:27.697274 containerd[1563]: time="2025-05-27T03:16:27.697206584Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 03:16:27.698940 containerd[1563]: time="2025-05-27T03:16:27.698904448Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:16:27.701299 containerd[1563]: time="2025-05-27T03:16:27.701256339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:16:27.702049 containerd[1563]: time="2025-05-27T03:16:27.701994423Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 518.667356ms" May 27 03:16:27.702100 containerd[1563]: time="2025-05-27T03:16:27.702048935Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 03:16:27.702710 containerd[1563]: time="2025-05-27T03:16:27.702639112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 03:16:28.308775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1220280279.mount: 
Deactivated successfully. May 27 03:16:30.672411 containerd[1563]: time="2025-05-27T03:16:30.672314083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:30.673604 containerd[1563]: time="2025-05-27T03:16:30.673576330Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 27 03:16:30.675688 containerd[1563]: time="2025-05-27T03:16:30.675651401Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:30.679249 containerd[1563]: time="2025-05-27T03:16:30.679190348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:30.680411 containerd[1563]: time="2025-05-27T03:16:30.680330707Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.977600223s" May 27 03:16:30.680411 containerd[1563]: time="2025-05-27T03:16:30.680379348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 27 03:16:33.143623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:16:33.143835 systemd[1]: kubelet.service: Consumed 301ms CPU time, 110.9M memory peak. May 27 03:16:33.146023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 27 03:16:33.168659 systemd[1]: Reload requested from client PID 2242 ('systemctl') (unit session-7.scope)... May 27 03:16:33.168676 systemd[1]: Reloading... May 27 03:16:33.292139 zram_generator::config[2288]: No configuration found. May 27 03:16:33.823220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:16:33.960761 systemd[1]: Reloading finished in 791 ms. May 27 03:16:34.032965 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:16:34.033110 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:16:34.033441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:16:34.033509 systemd[1]: kubelet.service: Consumed 150ms CPU time, 98.3M memory peak. May 27 03:16:34.035279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:16:34.253905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:16:34.269622 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:16:34.317962 kubelet[2333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:16:34.317962 kubelet[2333]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:16:34.317962 kubelet[2333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:16:34.318456 kubelet[2333]: I0527 03:16:34.318047 2333 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:16:34.635542 kubelet[2333]: I0527 03:16:34.635489 2333 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:16:34.635542 kubelet[2333]: I0527 03:16:34.635518 2333 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:16:34.635800 kubelet[2333]: I0527 03:16:34.635774 2333 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:16:34.690935 kubelet[2333]: E0527 03:16:34.690867 2333 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 27 03:16:34.715432 kubelet[2333]: I0527 03:16:34.715364 2333 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:16:34.753212 kubelet[2333]: I0527 03:16:34.753165 2333 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:16:34.758671 kubelet[2333]: I0527 03:16:34.758634 2333 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:16:34.767076 kubelet[2333]: I0527 03:16:34.767000 2333 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:16:34.767291 kubelet[2333]: I0527 03:16:34.767044 2333 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:16:34.767410 kubelet[2333]: I0527 03:16:34.767298 2333 topology_manager.go:138] "Creating topology manager with none policy" 
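The Container Manager record above embeds the full node config as JSON, including the hard-eviction thresholds in effect. A sketch that parses a trimmed copy of that JSON (only HardEvictionThresholds is kept, with GracePeriod/MinReclaim elided; values are copied from the log record):

```python
import json

# Trimmed from the nodeConfig JSON in the log record above.
NODE_CONFIG = json.loads("""
{"HardEvictionThresholds":[
  {"Signal":"memory.available","Operator":"LessThan",
   "Value":{"Quantity":"100Mi","Percentage":0}},
  {"Signal":"nodefs.available","Operator":"LessThan",
   "Value":{"Quantity":null,"Percentage":0.1}},
  {"Signal":"nodefs.inodesFree","Operator":"LessThan",
   "Value":{"Quantity":null,"Percentage":0.05}},
  {"Signal":"imagefs.available","Operator":"LessThan",
   "Value":{"Quantity":null,"Percentage":0.15}},
  {"Signal":"imagefs.inodesFree","Operator":"LessThan",
   "Value":{"Quantity":null,"Percentage":0.05}}
]}
""")

def eviction_signals(cfg: dict) -> list[str]:
    # Names of the signals this kubelet will hard-evict on.
    return [t["Signal"] for t in cfg["HardEvictionThresholds"]]
```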
May 27 03:16:34.767410 kubelet[2333]: I0527 03:16:34.767311 2333 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:16:34.767493 kubelet[2333]: I0527 03:16:34.767469 2333 state_mem.go:36] "Initialized new in-memory state store" May 27 03:16:34.773466 kubelet[2333]: I0527 03:16:34.773431 2333 kubelet.go:446] "Attempting to sync node with API server" May 27 03:16:34.773524 kubelet[2333]: I0527 03:16:34.773469 2333 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:16:34.773524 kubelet[2333]: I0527 03:16:34.773499 2333 kubelet.go:352] "Adding apiserver pod source" May 27 03:16:34.773524 kubelet[2333]: I0527 03:16:34.773512 2333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:16:34.777817 kubelet[2333]: I0527 03:16:34.777782 2333 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:16:34.778237 kubelet[2333]: I0527 03:16:34.778214 2333 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:16:34.779361 kubelet[2333]: W0527 03:16:34.779330 2333 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
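The successful kubelet start above still warns about three deprecated flags. Per the warning text itself, two should move into the file given by --config and one is slated for removal. A sketch of that mapping, with the dispositions paraphrased from the log (not from upstream documentation):

```python
# Flags the kubelet warned about on start, copied from the log; the
# disposition strings paraphrase the deprecation messages themselves.
DEPRECATED_FLAGS = {
    "--container-runtime-endpoint": "set via the kubelet config file (--config)",
    "--pod-infra-container-image": "removed in 1.35; CRI supplies the sandbox image",
    "--volume-plugin-dir": "set via the kubelet config file (--config)",
}

def still_deprecated(unit_flags: list[str]) -> list[str]:
    # Which of a unit file's flags would trigger the warnings above.
    return [f for f in unit_flags if f in DEPRECATED_FLAGS]
```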
May 27 03:16:34.779421 kubelet[2333]: W0527 03:16:34.779342 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 27 03:16:34.779459 kubelet[2333]: E0527 03:16:34.779411 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 27 03:16:34.779524 kubelet[2333]: W0527 03:16:34.779463 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 27 03:16:34.779585 kubelet[2333]: E0527 03:16:34.779554 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 27 03:16:34.783773 kubelet[2333]: I0527 03:16:34.783741 2333 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:16:34.783826 kubelet[2333]: I0527 03:16:34.783791 2333 server.go:1287] "Started kubelet" May 27 03:16:34.793895 kubelet[2333]: I0527 03:16:34.793691 2333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:16:34.794471 kubelet[2333]: I0527 03:16:34.794439 2333 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 
03:16:34.794637 kubelet[2333]: I0527 03:16:34.794516 2333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:16:34.794637 kubelet[2333]: I0527 03:16:34.794536 2333 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:16:34.796347 kubelet[2333]: I0527 03:16:34.796327 2333 server.go:479] "Adding debug handlers to kubelet server" May 27 03:16:34.797480 kubelet[2333]: I0527 03:16:34.797446 2333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:16:34.798352 kubelet[2333]: E0527 03:16:34.798163 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:34.798533 kubelet[2333]: E0527 03:16:34.798490 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms" May 27 03:16:34.800228 kubelet[2333]: E0527 03:16:34.798766 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184343f3cf4a6c23 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:16:34.783759395 +0000 UTC m=+0.508849692,LastTimestamp:2025-05-27 03:16:34.783759395 +0000 UTC m=+0.508849692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:16:34.801048 kubelet[2333]: 
I0527 03:16:34.801017 2333 factory.go:221] Registration of the systemd container factory successfully May 27 03:16:34.801181 kubelet[2333]: I0527 03:16:34.801152 2333 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:16:34.801430 kubelet[2333]: I0527 03:16:34.801405 2333 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:16:34.801636 kubelet[2333]: I0527 03:16:34.801590 2333 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:16:34.801744 kubelet[2333]: I0527 03:16:34.801718 2333 reconciler.go:26] "Reconciler: start to sync state" May 27 03:16:34.802098 kubelet[2333]: W0527 03:16:34.802040 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 27 03:16:34.802485 kubelet[2333]: E0527 03:16:34.802446 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 27 03:16:34.803340 kubelet[2333]: I0527 03:16:34.803314 2333 factory.go:221] Registration of the containerd container factory successfully May 27 03:16:34.803604 kubelet[2333]: E0527 03:16:34.803572 2333 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:16:34.816235 kubelet[2333]: I0527 03:16:34.816109 2333 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 27 03:16:34.817827 kubelet[2333]: I0527 03:16:34.817763 2333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 03:16:34.817827 kubelet[2333]: I0527 03:16:34.817804 2333 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:16:34.817886 kubelet[2333]: I0527 03:16:34.817835 2333 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:16:34.817886 kubelet[2333]: I0527 03:16:34.817844 2333 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:16:34.817941 kubelet[2333]: E0527 03:16:34.817901 2333 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:16:34.818055 kubelet[2333]: I0527 03:16:34.818042 2333 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:16:34.818368 kubelet[2333]: I0527 03:16:34.818111 2333 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:16:34.818368 kubelet[2333]: I0527 03:16:34.818130 2333 state_mem.go:36] "Initialized new in-memory state store" May 27 03:16:34.818484 kubelet[2333]: W0527 03:16:34.818453 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 27 03:16:34.818517 kubelet[2333]: E0527 03:16:34.818496 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 27 03:16:34.899118 kubelet[2333]: E0527 03:16:34.898910 2333 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" May 27 03:16:34.918267 kubelet[2333]: E0527 03:16:34.918210 2333 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:16:34.999634 kubelet[2333]: E0527 03:16:34.999586 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:34.999982 kubelet[2333]: E0527 03:16:34.999946 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" May 27 03:16:35.100368 kubelet[2333]: E0527 03:16:35.100300 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:35.118757 kubelet[2333]: E0527 03:16:35.118709 2333 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:16:35.201114 kubelet[2333]: E0527 03:16:35.200966 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:35.302175 kubelet[2333]: E0527 03:16:35.302113 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:35.401105 kubelet[2333]: E0527 03:16:35.401032 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" May 27 03:16:35.403172 kubelet[2333]: E0527 03:16:35.403143 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:35.504035 kubelet[2333]: E0527 03:16:35.503884 2333 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:35.519412 kubelet[2333]: E0527 03:16:35.519336 2333 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:16:35.604877 kubelet[2333]: E0527 03:16:35.604819 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:35.705594 kubelet[2333]: E0527 03:16:35.705516 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:35.722591 kubelet[2333]: W0527 03:16:35.722516 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 27 03:16:35.722695 kubelet[2333]: E0527 03:16:35.722585 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 27 03:16:35.755473 kubelet[2333]: W0527 03:16:35.755339 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 27 03:16:35.755473 kubelet[2333]: E0527 03:16:35.755396 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" 
logger="UnhandledError" May 27 03:16:35.805963 kubelet[2333]: E0527 03:16:35.805894 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:35.845884 kubelet[2333]: W0527 03:16:35.845811 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 27 03:16:35.846026 kubelet[2333]: E0527 03:16:35.845889 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 27 03:16:35.889311 kubelet[2333]: I0527 03:16:35.889246 2333 policy_none.go:49] "None policy: Start" May 27 03:16:35.889311 kubelet[2333]: I0527 03:16:35.889303 2333 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:16:35.889311 kubelet[2333]: I0527 03:16:35.889325 2333 state_mem.go:35] "Initializing new in-memory state store" May 27 03:16:35.905780 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 03:16:35.906311 kubelet[2333]: E0527 03:16:35.905978 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:35.921446 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 03:16:35.926049 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 27 03:16:35.955638 kubelet[2333]: I0527 03:16:35.954779 2333 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:16:35.955638 kubelet[2333]: I0527 03:16:35.955100 2333 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:16:35.955638 kubelet[2333]: I0527 03:16:35.955116 2333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:16:35.955638 kubelet[2333]: I0527 03:16:35.955516 2333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:16:35.957770 kubelet[2333]: E0527 03:16:35.957678 2333 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:16:35.957770 kubelet[2333]: E0527 03:16:35.957752 2333 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 03:16:36.058357 kubelet[2333]: I0527 03:16:36.058292 2333 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:16:36.058686 kubelet[2333]: E0527 03:16:36.058655 2333 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 27 03:16:36.201891 kubelet[2333]: E0527 03:16:36.201847 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="1.6s" May 27 03:16:36.260906 kubelet[2333]: I0527 03:16:36.260844 2333 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:16:36.261260 kubelet[2333]: E0527 03:16:36.261201 2333 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 27 03:16:36.328487 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 27 03:16:36.335130 kubelet[2333]: W0527 03:16:36.335068 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 27 03:16:36.335130 kubelet[2333]: E0527 03:16:36.335126 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 27 03:16:36.356822 kubelet[2333]: E0527 03:16:36.356777 2333 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:36.361245 systemd[1]: Created slice kubepods-burstable-podff046f30fb44c733999691073d217be6.slice - libcontainer container kubepods-burstable-podff046f30fb44c733999691073d217be6.slice. May 27 03:16:36.363198 kubelet[2333]: E0527 03:16:36.363144 2333 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:36.382429 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. 
May 27 03:16:36.385125 kubelet[2333]: E0527 03:16:36.385064 2333 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:36.411598 kubelet[2333]: I0527 03:16:36.411524 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:36.411598 kubelet[2333]: I0527 03:16:36.411587 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:36.411598 kubelet[2333]: I0527 03:16:36.411613 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff046f30fb44c733999691073d217be6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ff046f30fb44c733999691073d217be6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:16:36.412293 kubelet[2333]: I0527 03:16:36.411633 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff046f30fb44c733999691073d217be6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ff046f30fb44c733999691073d217be6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:16:36.412293 kubelet[2333]: I0527 03:16:36.411657 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/ff046f30fb44c733999691073d217be6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ff046f30fb44c733999691073d217be6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:16:36.412293 kubelet[2333]: I0527 03:16:36.411696 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:36.412293 kubelet[2333]: I0527 03:16:36.411748 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 03:16:36.412293 kubelet[2333]: I0527 03:16:36.411794 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:36.412463 kubelet[2333]: I0527 03:16:36.411823 2333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:36.657959 kubelet[2333]: E0527 03:16:36.657839 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:36.658609 containerd[1563]: time="2025-05-27T03:16:36.658570444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 27 03:16:36.662947 kubelet[2333]: I0527 03:16:36.662909 2333 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:16:36.663461 kubelet[2333]: E0527 03:16:36.663407 2333 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 27 03:16:36.663561 kubelet[2333]: E0527 03:16:36.663536 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:36.663867 containerd[1563]: time="2025-05-27T03:16:36.663832164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ff046f30fb44c733999691073d217be6,Namespace:kube-system,Attempt:0,}" May 27 03:16:36.686211 kubelet[2333]: E0527 03:16:36.686147 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:36.686928 containerd[1563]: time="2025-05-27T03:16:36.686882833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 27 03:16:36.845207 kubelet[2333]: E0527 03:16:36.845162 2333 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": 
dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 27 03:16:36.903262 kubelet[2333]: E0527 03:16:36.903131 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184343f3cf4a6c23 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 03:16:34.783759395 +0000 UTC m=+0.508849692,LastTimestamp:2025-05-27 03:16:34.783759395 +0000 UTC m=+0.508849692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 03:16:37.464929 kubelet[2333]: I0527 03:16:37.464893 2333 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:16:37.465403 kubelet[2333]: E0527 03:16:37.465359 2333 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" May 27 03:16:37.583759 containerd[1563]: time="2025-05-27T03:16:37.583680308Z" level=info msg="connecting to shim 8a054b734bda5cd84e8e062f221f3709c7db502f63f2746b6e3c5918db50df6b" address="unix:///run/containerd/s/fdcfb827c78d8d7810d06b8ff096a96d9644ddafd146fb65140b848649a68356" namespace=k8s.io protocol=ttrpc version=3 May 27 03:16:37.588130 containerd[1563]: time="2025-05-27T03:16:37.588067005Z" level=info msg="connecting to shim a0b18c65343ac16695d9cf1f4cdfc4282f533e810ad53f875524c6aeca5419d5" address="unix:///run/containerd/s/075a55d35d3cc2c0a3cb5ce36e625dd3454187ec17b39c79ef2484dc9f6c2ae0" namespace=k8s.io protocol=ttrpc version=3 May 
27 03:16:37.605107 containerd[1563]: time="2025-05-27T03:16:37.605006663Z" level=info msg="connecting to shim 5efd43dfc45d2bbb499cb7bd6b06bf3d555f62c30d25b280b6a74839cd59dc6a" address="unix:///run/containerd/s/eb1ecc7c8249aaee804710f40be28c7414bbe98239ddc41fd47ce4703c682a43" namespace=k8s.io protocol=ttrpc version=3 May 27 03:16:37.622238 systemd[1]: Started cri-containerd-8a054b734bda5cd84e8e062f221f3709c7db502f63f2746b6e3c5918db50df6b.scope - libcontainer container 8a054b734bda5cd84e8e062f221f3709c7db502f63f2746b6e3c5918db50df6b. May 27 03:16:37.626246 systemd[1]: Started cri-containerd-a0b18c65343ac16695d9cf1f4cdfc4282f533e810ad53f875524c6aeca5419d5.scope - libcontainer container a0b18c65343ac16695d9cf1f4cdfc4282f533e810ad53f875524c6aeca5419d5. May 27 03:16:37.633632 systemd[1]: Started cri-containerd-5efd43dfc45d2bbb499cb7bd6b06bf3d555f62c30d25b280b6a74839cd59dc6a.scope - libcontainer container 5efd43dfc45d2bbb499cb7bd6b06bf3d555f62c30d25b280b6a74839cd59dc6a. May 27 03:16:37.678453 containerd[1563]: time="2025-05-27T03:16:37.678228886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0b18c65343ac16695d9cf1f4cdfc4282f533e810ad53f875524c6aeca5419d5\"" May 27 03:16:37.679705 kubelet[2333]: E0527 03:16:37.679662 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:37.682625 containerd[1563]: time="2025-05-27T03:16:37.682584714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ff046f30fb44c733999691073d217be6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a054b734bda5cd84e8e062f221f3709c7db502f63f2746b6e3c5918db50df6b\"" May 27 03:16:37.682825 containerd[1563]: time="2025-05-27T03:16:37.682799855Z" level=info msg="CreateContainer within sandbox 
\"a0b18c65343ac16695d9cf1f4cdfc4282f533e810ad53f875524c6aeca5419d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 03:16:37.684748 kubelet[2333]: E0527 03:16:37.684708 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:37.687154 containerd[1563]: time="2025-05-27T03:16:37.687128892Z" level=info msg="CreateContainer within sandbox \"8a054b734bda5cd84e8e062f221f3709c7db502f63f2746b6e3c5918db50df6b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 03:16:37.690885 containerd[1563]: time="2025-05-27T03:16:37.690853965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5efd43dfc45d2bbb499cb7bd6b06bf3d555f62c30d25b280b6a74839cd59dc6a\"" May 27 03:16:37.691436 kubelet[2333]: E0527 03:16:37.691397 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:37.693181 containerd[1563]: time="2025-05-27T03:16:37.693151342Z" level=info msg="CreateContainer within sandbox \"5efd43dfc45d2bbb499cb7bd6b06bf3d555f62c30d25b280b6a74839cd59dc6a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 03:16:37.699057 containerd[1563]: time="2025-05-27T03:16:37.699005241Z" level=info msg="Container fe87be3528d39d658fdbaf8206d368675eebf93ad031867db7748a36d984d57e: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:37.708002 containerd[1563]: time="2025-05-27T03:16:37.707961503Z" level=info msg="Container e98b27903f36cd87c506356554cf1edb8a95f1e71acab9bc5315a683a1085fe9: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:37.708830 containerd[1563]: time="2025-05-27T03:16:37.708788071Z" level=info 
msg="CreateContainer within sandbox \"a0b18c65343ac16695d9cf1f4cdfc4282f533e810ad53f875524c6aeca5419d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fe87be3528d39d658fdbaf8206d368675eebf93ad031867db7748a36d984d57e\"" May 27 03:16:37.709501 containerd[1563]: time="2025-05-27T03:16:37.709462139Z" level=info msg="StartContainer for \"fe87be3528d39d658fdbaf8206d368675eebf93ad031867db7748a36d984d57e\"" May 27 03:16:37.710536 containerd[1563]: time="2025-05-27T03:16:37.710502716Z" level=info msg="connecting to shim fe87be3528d39d658fdbaf8206d368675eebf93ad031867db7748a36d984d57e" address="unix:///run/containerd/s/075a55d35d3cc2c0a3cb5ce36e625dd3454187ec17b39c79ef2484dc9f6c2ae0" protocol=ttrpc version=3 May 27 03:16:37.713132 containerd[1563]: time="2025-05-27T03:16:37.712682459Z" level=info msg="Container 3b53c71bda0113955991d498137b04a8390534ee9a7db8bb59db681f06ed2a84: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:37.720390 kubelet[2333]: W0527 03:16:37.720291 2333 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused May 27 03:16:37.720626 kubelet[2333]: E0527 03:16:37.720507 2333 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" May 27 03:16:37.722155 containerd[1563]: time="2025-05-27T03:16:37.722121003Z" level=info msg="CreateContainer within sandbox \"8a054b734bda5cd84e8e062f221f3709c7db502f63f2746b6e3c5918db50df6b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"e98b27903f36cd87c506356554cf1edb8a95f1e71acab9bc5315a683a1085fe9\"" May 27 03:16:37.724467 containerd[1563]: time="2025-05-27T03:16:37.723112164Z" level=info msg="StartContainer for \"e98b27903f36cd87c506356554cf1edb8a95f1e71acab9bc5315a683a1085fe9\"" May 27 03:16:37.724467 containerd[1563]: time="2025-05-27T03:16:37.724073430Z" level=info msg="connecting to shim e98b27903f36cd87c506356554cf1edb8a95f1e71acab9bc5315a683a1085fe9" address="unix:///run/containerd/s/fdcfb827c78d8d7810d06b8ff096a96d9644ddafd146fb65140b848649a68356" protocol=ttrpc version=3 May 27 03:16:37.724467 containerd[1563]: time="2025-05-27T03:16:37.724180144Z" level=info msg="CreateContainer within sandbox \"5efd43dfc45d2bbb499cb7bd6b06bf3d555f62c30d25b280b6a74839cd59dc6a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3b53c71bda0113955991d498137b04a8390534ee9a7db8bb59db681f06ed2a84\"" May 27 03:16:37.725015 containerd[1563]: time="2025-05-27T03:16:37.724967468Z" level=info msg="StartContainer for \"3b53c71bda0113955991d498137b04a8390534ee9a7db8bb59db681f06ed2a84\"" May 27 03:16:37.726656 containerd[1563]: time="2025-05-27T03:16:37.726633930Z" level=info msg="connecting to shim 3b53c71bda0113955991d498137b04a8390534ee9a7db8bb59db681f06ed2a84" address="unix:///run/containerd/s/eb1ecc7c8249aaee804710f40be28c7414bbe98239ddc41fd47ce4703c682a43" protocol=ttrpc version=3 May 27 03:16:37.733629 systemd[1]: Started cri-containerd-fe87be3528d39d658fdbaf8206d368675eebf93ad031867db7748a36d984d57e.scope - libcontainer container fe87be3528d39d658fdbaf8206d368675eebf93ad031867db7748a36d984d57e. May 27 03:16:37.764236 systemd[1]: Started cri-containerd-3b53c71bda0113955991d498137b04a8390534ee9a7db8bb59db681f06ed2a84.scope - libcontainer container 3b53c71bda0113955991d498137b04a8390534ee9a7db8bb59db681f06ed2a84. 
May 27 03:16:37.765607 systemd[1]: Started cri-containerd-e98b27903f36cd87c506356554cf1edb8a95f1e71acab9bc5315a683a1085fe9.scope - libcontainer container e98b27903f36cd87c506356554cf1edb8a95f1e71acab9bc5315a683a1085fe9. May 27 03:16:37.803144 kubelet[2333]: E0527 03:16:37.802550 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="3.2s" May 27 03:16:37.816438 containerd[1563]: time="2025-05-27T03:16:37.816358717Z" level=info msg="StartContainer for \"fe87be3528d39d658fdbaf8206d368675eebf93ad031867db7748a36d984d57e\" returns successfully" May 27 03:16:37.834551 containerd[1563]: time="2025-05-27T03:16:37.834489980Z" level=info msg="StartContainer for \"e98b27903f36cd87c506356554cf1edb8a95f1e71acab9bc5315a683a1085fe9\" returns successfully" May 27 03:16:37.839908 kubelet[2333]: E0527 03:16:37.839883 2333 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:37.840651 kubelet[2333]: E0527 03:16:37.840580 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:37.849045 containerd[1563]: time="2025-05-27T03:16:37.849014937Z" level=info msg="StartContainer for \"3b53c71bda0113955991d498137b04a8390534ee9a7db8bb59db681f06ed2a84\" returns successfully" May 27 03:16:37.850107 kubelet[2333]: E0527 03:16:37.850063 2333 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:37.850572 kubelet[2333]: E0527 03:16:37.850496 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:38.853904 kubelet[2333]: E0527 03:16:38.853485 2333 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:38.853904 kubelet[2333]: E0527 03:16:38.853551 2333 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:38.853904 kubelet[2333]: E0527 03:16:38.853616 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:38.853904 kubelet[2333]: E0527 03:16:38.853671 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:38.853904 kubelet[2333]: E0527 03:16:38.853777 2333 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:38.853904 kubelet[2333]: E0527 03:16:38.853848 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:39.067502 kubelet[2333]: I0527 03:16:39.067452 2333 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:16:39.077254 kubelet[2333]: I0527 03:16:39.077222 2333 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:16:39.077254 kubelet[2333]: E0527 03:16:39.077248 2333 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 27 03:16:39.087289 kubelet[2333]: E0527 03:16:39.087226 2333 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:39.188489 kubelet[2333]: E0527 03:16:39.188339 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:39.288559 kubelet[2333]: E0527 03:16:39.288503 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:39.389348 kubelet[2333]: E0527 03:16:39.389294 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:39.490078 kubelet[2333]: E0527 03:16:39.489941 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:39.590657 kubelet[2333]: E0527 03:16:39.590600 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:39.691699 kubelet[2333]: E0527 03:16:39.691637 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:39.792099 kubelet[2333]: E0527 03:16:39.792050 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:39.855678 kubelet[2333]: E0527 03:16:39.855642 2333 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 03:16:39.856157 kubelet[2333]: E0527 03:16:39.855825 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:39.892858 kubelet[2333]: E0527 03:16:39.892811 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:39.993841 kubelet[2333]: E0527 03:16:39.993793 2333 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:40.094502 kubelet[2333]: E0527 03:16:40.094376 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:40.195167 kubelet[2333]: E0527 03:16:40.195112 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:40.295549 kubelet[2333]: E0527 03:16:40.295501 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:40.396237 kubelet[2333]: E0527 03:16:40.396117 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:40.496821 kubelet[2333]: E0527 03:16:40.496750 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:40.597617 kubelet[2333]: E0527 03:16:40.597557 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:40.698862 kubelet[2333]: E0527 03:16:40.698642 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:40.704038 systemd[1]: Reload requested from client PID 2609 ('systemctl') (unit session-7.scope)... May 27 03:16:40.704055 systemd[1]: Reloading... May 27 03:16:40.798161 zram_generator::config[2655]: No configuration found. May 27 03:16:40.799385 kubelet[2333]: E0527 03:16:40.799328 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:40.886956 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 27 03:16:40.899556 kubelet[2333]: E0527 03:16:40.899524 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:41.000097 kubelet[2333]: E0527 03:16:40.999945 2333 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:41.018686 systemd[1]: Reloading finished in 314 ms. May 27 03:16:41.050520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:16:41.076374 systemd[1]: kubelet.service: Deactivated successfully. May 27 03:16:41.076689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:16:41.076742 systemd[1]: kubelet.service: Consumed 873ms CPU time, 131.4M memory peak. May 27 03:16:41.078634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:16:41.301411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:16:41.312561 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:16:41.369489 kubelet[2697]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:16:41.369489 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:16:41.369489 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 03:16:41.370237 kubelet[2697]: I0527 03:16:41.370183 2697 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 03:16:41.376886 kubelet[2697]: I0527 03:16:41.376837 2697 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 03:16:41.376886 kubelet[2697]: I0527 03:16:41.376869 2697 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 03:16:41.377240 kubelet[2697]: I0527 03:16:41.377211 2697 server.go:954] "Client rotation is on, will bootstrap in background" May 27 03:16:41.378365 kubelet[2697]: I0527 03:16:41.378338 2697 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 03:16:41.380299 kubelet[2697]: I0527 03:16:41.380256 2697 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 03:16:41.385690 kubelet[2697]: I0527 03:16:41.385659 2697 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 03:16:41.390775 kubelet[2697]: I0527 03:16:41.390732 2697 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 03:16:41.391055 kubelet[2697]: I0527 03:16:41.391005 2697 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 03:16:41.391293 kubelet[2697]: I0527 03:16:41.391039 2697 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 03:16:41.391403 kubelet[2697]: I0527 03:16:41.391295 2697 topology_manager.go:138] "Creating topology manager with none policy" 
May 27 03:16:41.391403 kubelet[2697]: I0527 03:16:41.391308 2697 container_manager_linux.go:304] "Creating device plugin manager" May 27 03:16:41.391403 kubelet[2697]: I0527 03:16:41.391377 2697 state_mem.go:36] "Initialized new in-memory state store" May 27 03:16:41.391608 kubelet[2697]: I0527 03:16:41.391572 2697 kubelet.go:446] "Attempting to sync node with API server" May 27 03:16:41.391653 kubelet[2697]: I0527 03:16:41.391611 2697 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 03:16:41.391653 kubelet[2697]: I0527 03:16:41.391636 2697 kubelet.go:352] "Adding apiserver pod source" May 27 03:16:41.391653 kubelet[2697]: I0527 03:16:41.391648 2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 03:16:41.393380 kubelet[2697]: I0527 03:16:41.393351 2697 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 03:16:41.394034 kubelet[2697]: I0527 03:16:41.393915 2697 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 03:16:41.394677 kubelet[2697]: I0527 03:16:41.394575 2697 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 03:16:41.394677 kubelet[2697]: I0527 03:16:41.394615 2697 server.go:1287] "Started kubelet" May 27 03:16:41.397037 kubelet[2697]: I0527 03:16:41.396965 2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 03:16:41.401284 kubelet[2697]: I0527 03:16:41.401072 2697 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 03:16:41.403328 kubelet[2697]: I0527 03:16:41.402998 2697 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 03:16:41.403328 kubelet[2697]: I0527 03:16:41.403148 2697 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 03:16:41.403328 kubelet[2697]: E0527 03:16:41.403313 2697 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 03:16:41.403891 kubelet[2697]: I0527 03:16:41.403871 2697 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 03:16:41.404076 kubelet[2697]: I0527 03:16:41.404045 2697 reconciler.go:26] "Reconciler: start to sync state" May 27 03:16:41.404959 kubelet[2697]: I0527 03:16:41.404939 2697 server.go:479] "Adding debug handlers to kubelet server" May 27 03:16:41.409076 kubelet[2697]: E0527 03:16:41.409046 2697 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 03:16:41.410295 kubelet[2697]: I0527 03:16:41.410254 2697 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 03:16:41.412373 kubelet[2697]: I0527 03:16:41.412318 2697 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 03:16:41.412885 kubelet[2697]: I0527 03:16:41.412861 2697 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 03:16:41.413108 kubelet[2697]: I0527 03:16:41.413066 2697 factory.go:221] Registration of the containerd container factory successfully May 27 03:16:41.413108 kubelet[2697]: I0527 03:16:41.413104 2697 factory.go:221] Registration of the systemd container factory successfully May 27 03:16:41.417204 kubelet[2697]: I0527 03:16:41.417157 2697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 03:16:41.419577 kubelet[2697]: I0527 03:16:41.419212 2697 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 03:16:41.419577 kubelet[2697]: I0527 03:16:41.419294 2697 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 03:16:41.419577 kubelet[2697]: I0527 03:16:41.419319 2697 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 03:16:41.419577 kubelet[2697]: I0527 03:16:41.419326 2697 kubelet.go:2382] "Starting kubelet main sync loop" May 27 03:16:41.419577 kubelet[2697]: E0527 03:16:41.419371 2697 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 03:16:41.519865 kubelet[2697]: E0527 03:16:41.519794 2697 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 03:16:41.553186 kubelet[2697]: I0527 03:16:41.553092 2697 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 03:16:41.553186 kubelet[2697]: I0527 03:16:41.553113 2697 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 03:16:41.553186 kubelet[2697]: I0527 03:16:41.553140 2697 state_mem.go:36] "Initialized new in-memory state store" May 27 03:16:41.553368 kubelet[2697]: I0527 03:16:41.553324 2697 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 03:16:41.553368 kubelet[2697]: I0527 03:16:41.553339 2697 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 03:16:41.553368 kubelet[2697]: I0527 03:16:41.553364 2697 policy_none.go:49] "None policy: Start" May 27 03:16:41.553463 kubelet[2697]: I0527 03:16:41.553376 2697 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 03:16:41.553463 kubelet[2697]: I0527 03:16:41.553388 2697 state_mem.go:35] "Initializing new in-memory state store" May 27 03:16:41.553556 kubelet[2697]: I0527 03:16:41.553536 2697 state_mem.go:75] "Updated machine memory state" May 27 03:16:41.559178 kubelet[2697]: I0527 03:16:41.559155 
2697 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 03:16:41.559459 kubelet[2697]: I0527 03:16:41.559443 2697 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 03:16:41.559569 kubelet[2697]: I0527 03:16:41.559521 2697 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 03:16:41.559806 kubelet[2697]: I0527 03:16:41.559789 2697 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 03:16:41.562757 kubelet[2697]: E0527 03:16:41.562722 2697 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 03:16:41.669976 kubelet[2697]: I0527 03:16:41.669797 2697 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 03:16:41.720934 kubelet[2697]: I0527 03:16:41.720897 2697 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:16:41.720934 kubelet[2697]: I0527 03:16:41.720945 2697 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 03:16:41.721177 kubelet[2697]: I0527 03:16:41.720947 2697 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 03:16:41.784541 kubelet[2697]: I0527 03:16:41.784497 2697 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 03:16:41.784726 kubelet[2697]: I0527 03:16:41.784596 2697 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 03:16:41.805662 kubelet[2697]: I0527 03:16:41.805505 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff046f30fb44c733999691073d217be6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"ff046f30fb44c733999691073d217be6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:16:41.805662 kubelet[2697]: I0527 03:16:41.805548 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff046f30fb44c733999691073d217be6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ff046f30fb44c733999691073d217be6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:16:41.805662 kubelet[2697]: I0527 03:16:41.805580 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff046f30fb44c733999691073d217be6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ff046f30fb44c733999691073d217be6\") " pod="kube-system/kube-apiserver-localhost" May 27 03:16:41.805662 kubelet[2697]: I0527 03:16:41.805608 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:41.805662 kubelet[2697]: I0527 03:16:41.805636 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:41.805965 kubelet[2697]: I0527 03:16:41.805659 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:41.805965 kubelet[2697]: I0527 03:16:41.805679 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 03:16:41.805965 kubelet[2697]: I0527 03:16:41.805703 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:41.805965 kubelet[2697]: I0527 03:16:41.805724 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 03:16:42.089394 kubelet[2697]: E0527 03:16:42.089277 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:42.094373 kubelet[2697]: E0527 03:16:42.094313 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:42.094751 kubelet[2697]: E0527 03:16:42.094675 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:42.393329 kubelet[2697]: I0527 03:16:42.393172 2697 apiserver.go:52] "Watching apiserver" May 27 03:16:42.404536 kubelet[2697]: I0527 03:16:42.404491 2697 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 03:16:42.432563 kubelet[2697]: I0527 03:16:42.432531 2697 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 03:16:42.433041 kubelet[2697]: E0527 03:16:42.432948 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:42.433041 kubelet[2697]: E0527 03:16:42.432948 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:42.440966 kubelet[2697]: E0527 03:16:42.440917 2697 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 03:16:42.443103 kubelet[2697]: E0527 03:16:42.441145 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:42.466546 kubelet[2697]: I0527 03:16:42.466412 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.46638669 podStartE2EDuration="1.46638669s" podCreationTimestamp="2025-05-27 03:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:16:42.455973954 +0000 UTC m=+1.138735961" watchObservedRunningTime="2025-05-27 03:16:42.46638669 +0000 UTC m=+1.149148697" May 27 03:16:42.476416 kubelet[2697]: I0527 
03:16:42.476315 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.476290399 podStartE2EDuration="1.476290399s" podCreationTimestamp="2025-05-27 03:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:16:42.467269628 +0000 UTC m=+1.150031635" watchObservedRunningTime="2025-05-27 03:16:42.476290399 +0000 UTC m=+1.159052406" May 27 03:16:42.476709 kubelet[2697]: I0527 03:16:42.476607 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.476600117 podStartE2EDuration="1.476600117s" podCreationTimestamp="2025-05-27 03:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:16:42.47649092 +0000 UTC m=+1.159252927" watchObservedRunningTime="2025-05-27 03:16:42.476600117 +0000 UTC m=+1.159362125" May 27 03:16:43.433625 kubelet[2697]: E0527 03:16:43.433580 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:43.434164 kubelet[2697]: E0527 03:16:43.433668 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:45.817565 kubelet[2697]: E0527 03:16:45.817515 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:46.263122 update_engine[1549]: I20250527 03:16:46.262933 1549 update_attempter.cc:509] Updating boot flags... 
May 27 03:16:46.766763 kubelet[2697]: I0527 03:16:46.766723 2697 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 03:16:46.767158 containerd[1563]: time="2025-05-27T03:16:46.767116721Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 03:16:46.767519 kubelet[2697]: I0527 03:16:46.767296 2697 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:16:47.504806 systemd[1]: Created slice kubepods-besteffort-pod76b91a62_eba4_4f78_ada2_8d4a3a2e20ba.slice - libcontainer container kubepods-besteffort-pod76b91a62_eba4_4f78_ada2_8d4a3a2e20ba.slice. May 27 03:16:47.561777 kubelet[2697]: I0527 03:16:47.561387 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxrtz\" (UniqueName: \"kubernetes.io/projected/76b91a62-eba4-4f78-ada2-8d4a3a2e20ba-kube-api-access-rxrtz\") pod \"kube-proxy-d9tgh\" (UID: \"76b91a62-eba4-4f78-ada2-8d4a3a2e20ba\") " pod="kube-system/kube-proxy-d9tgh" May 27 03:16:47.561777 kubelet[2697]: I0527 03:16:47.561468 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76b91a62-eba4-4f78-ada2-8d4a3a2e20ba-kube-proxy\") pod \"kube-proxy-d9tgh\" (UID: \"76b91a62-eba4-4f78-ada2-8d4a3a2e20ba\") " pod="kube-system/kube-proxy-d9tgh" May 27 03:16:47.561777 kubelet[2697]: I0527 03:16:47.561495 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76b91a62-eba4-4f78-ada2-8d4a3a2e20ba-xtables-lock\") pod \"kube-proxy-d9tgh\" (UID: \"76b91a62-eba4-4f78-ada2-8d4a3a2e20ba\") " pod="kube-system/kube-proxy-d9tgh" May 27 03:16:47.561777 kubelet[2697]: I0527 03:16:47.561518 2697 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76b91a62-eba4-4f78-ada2-8d4a3a2e20ba-lib-modules\") pod \"kube-proxy-d9tgh\" (UID: \"76b91a62-eba4-4f78-ada2-8d4a3a2e20ba\") " pod="kube-system/kube-proxy-d9tgh" May 27 03:16:47.693254 kubelet[2697]: E0527 03:16:47.692529 2697 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 03:16:47.693254 kubelet[2697]: E0527 03:16:47.692579 2697 projected.go:194] Error preparing data for projected volume kube-api-access-rxrtz for pod kube-system/kube-proxy-d9tgh: configmap "kube-root-ca.crt" not found May 27 03:16:47.693254 kubelet[2697]: E0527 03:16:47.692667 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/76b91a62-eba4-4f78-ada2-8d4a3a2e20ba-kube-api-access-rxrtz podName:76b91a62-eba4-4f78-ada2-8d4a3a2e20ba nodeName:}" failed. No retries permitted until 2025-05-27 03:16:48.192637705 +0000 UTC m=+6.875399712 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rxrtz" (UniqueName: "kubernetes.io/projected/76b91a62-eba4-4f78-ada2-8d4a3a2e20ba-kube-api-access-rxrtz") pod "kube-proxy-d9tgh" (UID: "76b91a62-eba4-4f78-ada2-8d4a3a2e20ba") : configmap "kube-root-ca.crt" not found May 27 03:16:47.804383 kubelet[2697]: I0527 03:16:47.803815 2697 status_manager.go:890] "Failed to get status for pod" podUID="143b62a4-e4b5-4fee-b470-293515ad0e4e" pod="tigera-operator/tigera-operator-844669ff44-d6bjr" err="pods \"tigera-operator-844669ff44-d6bjr\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" May 27 03:16:47.804383 kubelet[2697]: W0527 03:16:47.804246 2697 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object May 27 03:16:47.804383 kubelet[2697]: E0527 03:16:47.804278 2697 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 27 03:16:47.804383 kubelet[2697]: W0527 03:16:47.804335 2697 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object May 
27 03:16:47.804383 kubelet[2697]: E0527 03:16:47.804352 2697 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 27 03:16:47.814933 systemd[1]: Created slice kubepods-besteffort-pod143b62a4_e4b5_4fee_b470_293515ad0e4e.slice - libcontainer container kubepods-besteffort-pod143b62a4_e4b5_4fee_b470_293515ad0e4e.slice. May 27 03:16:47.864134 kubelet[2697]: I0527 03:16:47.863815 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/143b62a4-e4b5-4fee-b470-293515ad0e4e-var-lib-calico\") pod \"tigera-operator-844669ff44-d6bjr\" (UID: \"143b62a4-e4b5-4fee-b470-293515ad0e4e\") " pod="tigera-operator/tigera-operator-844669ff44-d6bjr" May 27 03:16:47.864134 kubelet[2697]: I0527 03:16:47.863891 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phn8x\" (UniqueName: \"kubernetes.io/projected/143b62a4-e4b5-4fee-b470-293515ad0e4e-kube-api-access-phn8x\") pod \"tigera-operator-844669ff44-d6bjr\" (UID: \"143b62a4-e4b5-4fee-b470-293515ad0e4e\") " pod="tigera-operator/tigera-operator-844669ff44-d6bjr" May 27 03:16:48.433206 kubelet[2697]: E0527 03:16:48.432776 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:48.433886 containerd[1563]: time="2025-05-27T03:16:48.433803174Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-d9tgh,Uid:76b91a62-eba4-4f78-ada2-8d4a3a2e20ba,Namespace:kube-system,Attempt:0,}" May 27 03:16:48.465073 containerd[1563]: time="2025-05-27T03:16:48.465015423Z" level=info msg="connecting to shim 44da46a0e1ff6fdb9111e0639553857724ee483663dba9b8c08b14d079a07924" address="unix:///run/containerd/s/b2a3ed07dab2132857801518e5ff2751412ff7fcfec6aa04a2f278d385abbade" namespace=k8s.io protocol=ttrpc version=3 May 27 03:16:48.497473 systemd[1]: Started cri-containerd-44da46a0e1ff6fdb9111e0639553857724ee483663dba9b8c08b14d079a07924.scope - libcontainer container 44da46a0e1ff6fdb9111e0639553857724ee483663dba9b8c08b14d079a07924. May 27 03:16:48.528364 containerd[1563]: time="2025-05-27T03:16:48.528307143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9tgh,Uid:76b91a62-eba4-4f78-ada2-8d4a3a2e20ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"44da46a0e1ff6fdb9111e0639553857724ee483663dba9b8c08b14d079a07924\"" May 27 03:16:48.529455 kubelet[2697]: E0527 03:16:48.529401 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:48.534114 containerd[1563]: time="2025-05-27T03:16:48.533649652Z" level=info msg="CreateContainer within sandbox \"44da46a0e1ff6fdb9111e0639553857724ee483663dba9b8c08b14d079a07924\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:16:48.550095 containerd[1563]: time="2025-05-27T03:16:48.550024546Z" level=info msg="Container 91531ddec0e58e42b78e8eed8d3a6475acdcc7e1bdc919abf08d258683af0b19: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:48.560908 containerd[1563]: time="2025-05-27T03:16:48.560831234Z" level=info msg="CreateContainer within sandbox \"44da46a0e1ff6fdb9111e0639553857724ee483663dba9b8c08b14d079a07924\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"91531ddec0e58e42b78e8eed8d3a6475acdcc7e1bdc919abf08d258683af0b19\"" May 27 03:16:48.562138 containerd[1563]: time="2025-05-27T03:16:48.561735956Z" level=info msg="StartContainer for \"91531ddec0e58e42b78e8eed8d3a6475acdcc7e1bdc919abf08d258683af0b19\"" May 27 03:16:48.563590 containerd[1563]: time="2025-05-27T03:16:48.563547213Z" level=info msg="connecting to shim 91531ddec0e58e42b78e8eed8d3a6475acdcc7e1bdc919abf08d258683af0b19" address="unix:///run/containerd/s/b2a3ed07dab2132857801518e5ff2751412ff7fcfec6aa04a2f278d385abbade" protocol=ttrpc version=3 May 27 03:16:48.586310 systemd[1]: Started cri-containerd-91531ddec0e58e42b78e8eed8d3a6475acdcc7e1bdc919abf08d258683af0b19.scope - libcontainer container 91531ddec0e58e42b78e8eed8d3a6475acdcc7e1bdc919abf08d258683af0b19. May 27 03:16:48.641975 containerd[1563]: time="2025-05-27T03:16:48.641928445Z" level=info msg="StartContainer for \"91531ddec0e58e42b78e8eed8d3a6475acdcc7e1bdc919abf08d258683af0b19\" returns successfully" May 27 03:16:49.016266 kubelet[2697]: E0527 03:16:49.016189 2697 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 27 03:16:49.018091 kubelet[2697]: E0527 03:16:49.018039 2697 projected.go:194] Error preparing data for projected volume kube-api-access-phn8x for pod tigera-operator/tigera-operator-844669ff44-d6bjr: failed to sync configmap cache: timed out waiting for the condition May 27 03:16:49.018237 kubelet[2697]: E0527 03:16:49.018186 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/143b62a4-e4b5-4fee-b470-293515ad0e4e-kube-api-access-phn8x podName:143b62a4-e4b5-4fee-b470-293515ad0e4e nodeName:}" failed. No retries permitted until 2025-05-27 03:16:49.518161139 +0000 UTC m=+8.200923146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-phn8x" (UniqueName: "kubernetes.io/projected/143b62a4-e4b5-4fee-b470-293515ad0e4e-kube-api-access-phn8x") pod "tigera-operator-844669ff44-d6bjr" (UID: "143b62a4-e4b5-4fee-b470-293515ad0e4e") : failed to sync configmap cache: timed out waiting for the condition May 27 03:16:49.305645 kubelet[2697]: E0527 03:16:49.305513 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:49.470121 kubelet[2697]: E0527 03:16:49.466853 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:49.470121 kubelet[2697]: E0527 03:16:49.466853 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:49.480795 kubelet[2697]: I0527 03:16:49.480687 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d9tgh" podStartSLOduration=2.480664935 podStartE2EDuration="2.480664935s" podCreationTimestamp="2025-05-27 03:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:16:49.480558074 +0000 UTC m=+8.163320081" watchObservedRunningTime="2025-05-27 03:16:49.480664935 +0000 UTC m=+8.163426942" May 27 03:16:49.634429 containerd[1563]: time="2025-05-27T03:16:49.634283930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-d6bjr,Uid:143b62a4-e4b5-4fee-b470-293515ad0e4e,Namespace:tigera-operator,Attempt:0,}" May 27 03:16:49.678523 containerd[1563]: time="2025-05-27T03:16:49.678480431Z" level=info msg="connecting to shim 
c5c19c56df6ab2e55fdde3982c46b57d362ed1c04d123af0cf296e0ed3acecc6" address="unix:///run/containerd/s/d499e52dd2c06260e3b6882a0e4a699b99df20cd6a02f7dbff91c2c94a726e6d" namespace=k8s.io protocol=ttrpc version=3 May 27 03:16:49.700249 systemd[1]: Started cri-containerd-c5c19c56df6ab2e55fdde3982c46b57d362ed1c04d123af0cf296e0ed3acecc6.scope - libcontainer container c5c19c56df6ab2e55fdde3982c46b57d362ed1c04d123af0cf296e0ed3acecc6. May 27 03:16:49.748984 containerd[1563]: time="2025-05-27T03:16:49.748929771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-d6bjr,Uid:143b62a4-e4b5-4fee-b470-293515ad0e4e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c5c19c56df6ab2e55fdde3982c46b57d362ed1c04d123af0cf296e0ed3acecc6\"" May 27 03:16:49.751103 containerd[1563]: time="2025-05-27T03:16:49.751049138Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 03:16:50.387219 kubelet[2697]: E0527 03:16:50.387182 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:50.469386 kubelet[2697]: E0527 03:16:50.469140 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:50.469386 kubelet[2697]: E0527 03:16:50.469301 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:51.064801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944324705.mount: Deactivated successfully. 
May 27 03:16:51.385398 containerd[1563]: time="2025-05-27T03:16:51.385256875Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:51.386334 containerd[1563]: time="2025-05-27T03:16:51.386304725Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 27 03:16:51.387663 containerd[1563]: time="2025-05-27T03:16:51.387607085Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:51.389781 containerd[1563]: time="2025-05-27T03:16:51.389732770Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:16:51.390562 containerd[1563]: time="2025-05-27T03:16:51.390523775Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 1.639428119s" May 27 03:16:51.390611 containerd[1563]: time="2025-05-27T03:16:51.390566386Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 27 03:16:51.392715 containerd[1563]: time="2025-05-27T03:16:51.392687673Z" level=info msg="CreateContainer within sandbox \"c5c19c56df6ab2e55fdde3982c46b57d362ed1c04d123af0cf296e0ed3acecc6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 03:16:51.400623 containerd[1563]: time="2025-05-27T03:16:51.400461179Z" level=info msg="Container 
0744a43b54bc00a07b4810fe39c5c28883275ab6c1632b058fbdfb2b9814884e: CDI devices from CRI Config.CDIDevices: []" May 27 03:16:51.408928 containerd[1563]: time="2025-05-27T03:16:51.408867350Z" level=info msg="CreateContainer within sandbox \"c5c19c56df6ab2e55fdde3982c46b57d362ed1c04d123af0cf296e0ed3acecc6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0744a43b54bc00a07b4810fe39c5c28883275ab6c1632b058fbdfb2b9814884e\"" May 27 03:16:51.409617 containerd[1563]: time="2025-05-27T03:16:51.409566942Z" level=info msg="StartContainer for \"0744a43b54bc00a07b4810fe39c5c28883275ab6c1632b058fbdfb2b9814884e\"" May 27 03:16:51.410723 containerd[1563]: time="2025-05-27T03:16:51.410680255Z" level=info msg="connecting to shim 0744a43b54bc00a07b4810fe39c5c28883275ab6c1632b058fbdfb2b9814884e" address="unix:///run/containerd/s/d499e52dd2c06260e3b6882a0e4a699b99df20cd6a02f7dbff91c2c94a726e6d" protocol=ttrpc version=3 May 27 03:16:51.469362 systemd[1]: Started cri-containerd-0744a43b54bc00a07b4810fe39c5c28883275ab6c1632b058fbdfb2b9814884e.scope - libcontainer container 0744a43b54bc00a07b4810fe39c5c28883275ab6c1632b058fbdfb2b9814884e. 
May 27 03:16:51.478589 kubelet[2697]: E0527 03:16:51.478558 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:51.507559 containerd[1563]: time="2025-05-27T03:16:51.507513514Z" level=info msg="StartContainer for \"0744a43b54bc00a07b4810fe39c5c28883275ab6c1632b058fbdfb2b9814884e\" returns successfully" May 27 03:16:55.824176 kubelet[2697]: E0527 03:16:55.824124 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:16:55.836843 kubelet[2697]: I0527 03:16:55.836767 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-d6bjr" podStartSLOduration=7.195782818 podStartE2EDuration="8.836745848s" podCreationTimestamp="2025-05-27 03:16:47 +0000 UTC" firstStartedPulling="2025-05-27 03:16:49.750454414 +0000 UTC m=+8.433216421" lastFinishedPulling="2025-05-27 03:16:51.391417444 +0000 UTC m=+10.074179451" observedRunningTime="2025-05-27 03:16:52.570982582 +0000 UTC m=+11.253744589" watchObservedRunningTime="2025-05-27 03:16:55.836745848 +0000 UTC m=+14.519507845" May 27 03:16:57.366594 sudo[1765]: pam_unix(sudo:session): session closed for user root May 27 03:16:57.368961 sshd[1764]: Connection closed by 10.0.0.1 port 36912 May 27 03:16:57.369697 sshd-session[1762]: pam_unix(sshd:session): session closed for user core May 27 03:16:57.376475 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:36912.service: Deactivated successfully. May 27 03:16:57.380553 systemd[1]: session-7.scope: Deactivated successfully. May 27 03:16:57.380933 systemd[1]: session-7.scope: Consumed 5.799s CPU time, 227.4M memory peak. May 27 03:16:57.384436 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit. 
May 27 03:16:57.386797 systemd-logind[1547]: Removed session 7. May 27 03:17:00.149326 systemd[1]: Created slice kubepods-besteffort-poda1b55abd_e883_4c5c_811a_92ea5081e6de.slice - libcontainer container kubepods-besteffort-poda1b55abd_e883_4c5c_811a_92ea5081e6de.slice. May 27 03:17:00.255228 kubelet[2697]: I0527 03:17:00.255157 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a1b55abd-e883-4c5c-811a-92ea5081e6de-typha-certs\") pod \"calico-typha-56dd8fbf9d-kt2zv\" (UID: \"a1b55abd-e883-4c5c-811a-92ea5081e6de\") " pod="calico-system/calico-typha-56dd8fbf9d-kt2zv" May 27 03:17:00.255228 kubelet[2697]: I0527 03:17:00.255217 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1b55abd-e883-4c5c-811a-92ea5081e6de-tigera-ca-bundle\") pod \"calico-typha-56dd8fbf9d-kt2zv\" (UID: \"a1b55abd-e883-4c5c-811a-92ea5081e6de\") " pod="calico-system/calico-typha-56dd8fbf9d-kt2zv" May 27 03:17:00.255804 kubelet[2697]: I0527 03:17:00.255244 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkfj7\" (UniqueName: \"kubernetes.io/projected/a1b55abd-e883-4c5c-811a-92ea5081e6de-kube-api-access-rkfj7\") pod \"calico-typha-56dd8fbf9d-kt2zv\" (UID: \"a1b55abd-e883-4c5c-811a-92ea5081e6de\") " pod="calico-system/calico-typha-56dd8fbf9d-kt2zv" May 27 03:17:00.460326 kubelet[2697]: E0527 03:17:00.459867 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:00.460691 containerd[1563]: time="2025-05-27T03:17:00.460627008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56dd8fbf9d-kt2zv,Uid:a1b55abd-e883-4c5c-811a-92ea5081e6de,Namespace:calico-system,Attempt:0,}" May 
27 03:17:00.531614 systemd[1]: Created slice kubepods-besteffort-pode5224a55_f955_415e_aff7_ba1246f0a0dc.slice - libcontainer container kubepods-besteffort-pode5224a55_f955_415e_aff7_ba1246f0a0dc.slice. May 27 03:17:00.535835 containerd[1563]: time="2025-05-27T03:17:00.535633434Z" level=info msg="connecting to shim 1cfcaaee843616fbcf9ea16d7b6832a0a6edb8b8cfcc54e4e0675e4547b2fe09" address="unix:///run/containerd/s/0f2bf21ab550cb0357252753270c22b550fe3e84ef5ae396ce34776055161d67" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:00.557127 kubelet[2697]: I0527 03:17:00.556866 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e5224a55-f955-415e-aff7-ba1246f0a0dc-node-certs\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.557127 kubelet[2697]: I0527 03:17:00.556926 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e5224a55-f955-415e-aff7-ba1246f0a0dc-cni-log-dir\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.557127 kubelet[2697]: I0527 03:17:00.556946 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5224a55-f955-415e-aff7-ba1246f0a0dc-lib-modules\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.557127 kubelet[2697]: I0527 03:17:00.556966 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5224a55-f955-415e-aff7-ba1246f0a0dc-xtables-lock\") pod \"calico-node-p4kxw\" (UID: 
\"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.557127 kubelet[2697]: I0527 03:17:00.556989 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4pcr\" (UniqueName: \"kubernetes.io/projected/e5224a55-f955-415e-aff7-ba1246f0a0dc-kube-api-access-l4pcr\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.562649 kubelet[2697]: I0527 03:17:00.557016 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e5224a55-f955-415e-aff7-ba1246f0a0dc-cni-bin-dir\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.562649 kubelet[2697]: I0527 03:17:00.557037 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e5224a55-f955-415e-aff7-ba1246f0a0dc-policysync\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.562649 kubelet[2697]: I0527 03:17:00.557054 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e5224a55-f955-415e-aff7-ba1246f0a0dc-var-run-calico\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.562649 kubelet[2697]: I0527 03:17:00.557074 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e5224a55-f955-415e-aff7-ba1246f0a0dc-flexvol-driver-host\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " 
pod="calico-system/calico-node-p4kxw" May 27 03:17:00.562649 kubelet[2697]: I0527 03:17:00.557123 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e5224a55-f955-415e-aff7-ba1246f0a0dc-cni-net-dir\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.562964 kubelet[2697]: I0527 03:17:00.557144 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5224a55-f955-415e-aff7-ba1246f0a0dc-tigera-ca-bundle\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.562964 kubelet[2697]: I0527 03:17:00.557171 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e5224a55-f955-415e-aff7-ba1246f0a0dc-var-lib-calico\") pod \"calico-node-p4kxw\" (UID: \"e5224a55-f955-415e-aff7-ba1246f0a0dc\") " pod="calico-system/calico-node-p4kxw" May 27 03:17:00.577392 systemd[1]: Started cri-containerd-1cfcaaee843616fbcf9ea16d7b6832a0a6edb8b8cfcc54e4e0675e4547b2fe09.scope - libcontainer container 1cfcaaee843616fbcf9ea16d7b6832a0a6edb8b8cfcc54e4e0675e4547b2fe09. 
May 27 03:17:00.659986 kubelet[2697]: E0527 03:17:00.659950 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.659986 kubelet[2697]: W0527 03:17:00.659976 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.660441 kubelet[2697]: E0527 03:17:00.660021 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.664052 kubelet[2697]: E0527 03:17:00.664023 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.665069 kubelet[2697]: W0527 03:17:00.664768 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.665069 kubelet[2697]: E0527 03:17:00.664802 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.672832 kubelet[2697]: E0527 03:17:00.672790 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.672832 kubelet[2697]: W0527 03:17:00.672817 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.672832 kubelet[2697]: E0527 03:17:00.672842 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.686442 containerd[1563]: time="2025-05-27T03:17:00.686224644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56dd8fbf9d-kt2zv,Uid:a1b55abd-e883-4c5c-811a-92ea5081e6de,Namespace:calico-system,Attempt:0,} returns sandbox id \"1cfcaaee843616fbcf9ea16d7b6832a0a6edb8b8cfcc54e4e0675e4547b2fe09\"" May 27 03:17:00.687558 kubelet[2697]: E0527 03:17:00.687279 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:00.688855 containerd[1563]: time="2025-05-27T03:17:00.688410991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 03:17:00.792581 kubelet[2697]: E0527 03:17:00.792512 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdwt9" podUID="55c9e866-18db-4ca8-a823-aa7f4c344902" May 27 03:17:00.834268 kubelet[2697]: E0527 03:17:00.834195 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.834268 kubelet[2697]: W0527 03:17:00.834228 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.834268 kubelet[2697]: E0527 03:17:00.834255 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.834593 kubelet[2697]: E0527 03:17:00.834535 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.834593 kubelet[2697]: W0527 03:17:00.834551 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.834593 kubelet[2697]: E0527 03:17:00.834562 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.834876 kubelet[2697]: E0527 03:17:00.834827 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.834876 kubelet[2697]: W0527 03:17:00.834842 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.834876 kubelet[2697]: E0527 03:17:00.834852 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.835329 kubelet[2697]: E0527 03:17:00.835239 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.835329 kubelet[2697]: W0527 03:17:00.835315 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.835415 kubelet[2697]: E0527 03:17:00.835348 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.835871 kubelet[2697]: E0527 03:17:00.835788 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.835912 kubelet[2697]: W0527 03:17:00.835871 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.835912 kubelet[2697]: E0527 03:17:00.835887 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.836282 kubelet[2697]: E0527 03:17:00.836191 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.836282 kubelet[2697]: W0527 03:17:00.836277 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.836428 kubelet[2697]: E0527 03:17:00.836295 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.836681 kubelet[2697]: E0527 03:17:00.836644 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.836681 kubelet[2697]: W0527 03:17:00.836665 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.836749 kubelet[2697]: E0527 03:17:00.836702 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.837109 kubelet[2697]: E0527 03:17:00.836927 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.837109 kubelet[2697]: W0527 03:17:00.836942 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.837109 kubelet[2697]: E0527 03:17:00.836953 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.837325 kubelet[2697]: E0527 03:17:00.837229 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.837325 kubelet[2697]: W0527 03:17:00.837248 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.837325 kubelet[2697]: E0527 03:17:00.837276 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.837926 kubelet[2697]: E0527 03:17:00.837890 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.837926 kubelet[2697]: W0527 03:17:00.837905 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.837926 kubelet[2697]: E0527 03:17:00.837917 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.838246 kubelet[2697]: E0527 03:17:00.838210 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.838246 kubelet[2697]: W0527 03:17:00.838223 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.838246 kubelet[2697]: E0527 03:17:00.838234 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.838541 kubelet[2697]: E0527 03:17:00.838507 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.838541 kubelet[2697]: W0527 03:17:00.838527 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.838541 kubelet[2697]: E0527 03:17:00.838541 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.838850 kubelet[2697]: E0527 03:17:00.838831 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.838850 kubelet[2697]: W0527 03:17:00.838845 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.838937 kubelet[2697]: E0527 03:17:00.838857 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.839930 kubelet[2697]: E0527 03:17:00.839874 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.839930 kubelet[2697]: W0527 03:17:00.839916 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.840027 kubelet[2697]: E0527 03:17:00.839953 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.840324 kubelet[2697]: E0527 03:17:00.840293 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.840324 kubelet[2697]: W0527 03:17:00.840309 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.840324 kubelet[2697]: E0527 03:17:00.840323 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.840662 kubelet[2697]: E0527 03:17:00.840630 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.840662 kubelet[2697]: W0527 03:17:00.840644 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.840662 kubelet[2697]: E0527 03:17:00.840654 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.840960 kubelet[2697]: E0527 03:17:00.840937 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.840960 kubelet[2697]: W0527 03:17:00.840954 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.841060 kubelet[2697]: E0527 03:17:00.840966 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.841269 kubelet[2697]: E0527 03:17:00.841226 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.841269 kubelet[2697]: W0527 03:17:00.841242 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.841269 kubelet[2697]: E0527 03:17:00.841252 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.841505 kubelet[2697]: E0527 03:17:00.841451 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.841505 kubelet[2697]: W0527 03:17:00.841460 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.841505 kubelet[2697]: E0527 03:17:00.841469 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.841728 kubelet[2697]: E0527 03:17:00.841704 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.841728 kubelet[2697]: W0527 03:17:00.841718 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.841781 kubelet[2697]: E0527 03:17:00.841729 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.842591 containerd[1563]: time="2025-05-27T03:17:00.842544943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4kxw,Uid:e5224a55-f955-415e-aff7-ba1246f0a0dc,Namespace:calico-system,Attempt:0,}" May 27 03:17:00.862985 kubelet[2697]: E0527 03:17:00.861899 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.862985 kubelet[2697]: W0527 03:17:00.861950 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.862985 kubelet[2697]: E0527 03:17:00.861973 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.862985 kubelet[2697]: I0527 03:17:00.862022 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chrjj\" (UniqueName: \"kubernetes.io/projected/55c9e866-18db-4ca8-a823-aa7f4c344902-kube-api-access-chrjj\") pod \"csi-node-driver-qdwt9\" (UID: \"55c9e866-18db-4ca8-a823-aa7f4c344902\") " pod="calico-system/csi-node-driver-qdwt9" May 27 03:17:00.862985 kubelet[2697]: E0527 03:17:00.862301 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.862985 kubelet[2697]: W0527 03:17:00.862332 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.862985 kubelet[2697]: E0527 03:17:00.862344 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.862985 kubelet[2697]: I0527 03:17:00.862364 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/55c9e866-18db-4ca8-a823-aa7f4c344902-registration-dir\") pod \"csi-node-driver-qdwt9\" (UID: \"55c9e866-18db-4ca8-a823-aa7f4c344902\") " pod="calico-system/csi-node-driver-qdwt9" May 27 03:17:00.862985 kubelet[2697]: E0527 03:17:00.862590 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.863812 kubelet[2697]: W0527 03:17:00.862601 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.863812 kubelet[2697]: E0527 03:17:00.862611 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.863812 kubelet[2697]: I0527 03:17:00.862628 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/55c9e866-18db-4ca8-a823-aa7f4c344902-varrun\") pod \"csi-node-driver-qdwt9\" (UID: \"55c9e866-18db-4ca8-a823-aa7f4c344902\") " pod="calico-system/csi-node-driver-qdwt9" May 27 03:17:00.863812 kubelet[2697]: E0527 03:17:00.863647 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.863812 kubelet[2697]: W0527 03:17:00.863671 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.863812 kubelet[2697]: E0527 03:17:00.863714 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.864746 kubelet[2697]: E0527 03:17:00.864727 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.864746 kubelet[2697]: W0527 03:17:00.864745 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.864862 kubelet[2697]: E0527 03:17:00.864793 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.865021 kubelet[2697]: I0527 03:17:00.864833 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55c9e866-18db-4ca8-a823-aa7f4c344902-kubelet-dir\") pod \"csi-node-driver-qdwt9\" (UID: \"55c9e866-18db-4ca8-a823-aa7f4c344902\") " pod="calico-system/csi-node-driver-qdwt9" May 27 03:17:00.865681 kubelet[2697]: E0527 03:17:00.865227 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.865681 kubelet[2697]: W0527 03:17:00.865242 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.865681 kubelet[2697]: E0527 03:17:00.865281 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.865681 kubelet[2697]: E0527 03:17:00.865676 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.865797 kubelet[2697]: W0527 03:17:00.865686 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.865797 kubelet[2697]: E0527 03:17:00.865706 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.865929 kubelet[2697]: E0527 03:17:00.865915 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.865929 kubelet[2697]: W0527 03:17:00.865927 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.866009 kubelet[2697]: E0527 03:17:00.865938 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.866238 kubelet[2697]: E0527 03:17:00.866224 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.866238 kubelet[2697]: W0527 03:17:00.866236 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.866339 kubelet[2697]: E0527 03:17:00.866295 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.866339 kubelet[2697]: I0527 03:17:00.866321 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/55c9e866-18db-4ca8-a823-aa7f4c344902-socket-dir\") pod \"csi-node-driver-qdwt9\" (UID: \"55c9e866-18db-4ca8-a823-aa7f4c344902\") " pod="calico-system/csi-node-driver-qdwt9" May 27 03:17:00.866621 kubelet[2697]: E0527 03:17:00.866605 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.866668 kubelet[2697]: W0527 03:17:00.866624 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.866668 kubelet[2697]: E0527 03:17:00.866641 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.866951 kubelet[2697]: E0527 03:17:00.866868 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.866951 kubelet[2697]: W0527 03:17:00.866885 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.866951 kubelet[2697]: E0527 03:17:00.866898 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.867348 kubelet[2697]: E0527 03:17:00.867313 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.867451 kubelet[2697]: W0527 03:17:00.867346 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.867451 kubelet[2697]: E0527 03:17:00.867378 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.867630 kubelet[2697]: E0527 03:17:00.867611 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.867630 kubelet[2697]: W0527 03:17:00.867625 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.867740 kubelet[2697]: E0527 03:17:00.867639 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.867890 kubelet[2697]: E0527 03:17:00.867874 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.867890 kubelet[2697]: W0527 03:17:00.867889 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.867976 kubelet[2697]: E0527 03:17:00.867900 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.868226 kubelet[2697]: E0527 03:17:00.868205 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.868226 kubelet[2697]: W0527 03:17:00.868218 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.868226 kubelet[2697]: E0527 03:17:00.868229 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.898133 containerd[1563]: time="2025-05-27T03:17:00.898043442Z" level=info msg="connecting to shim c08fe6bdf2058392c6765d8da2e407af67b4ed2b35aec76c281dcd1de9f63df8" address="unix:///run/containerd/s/670e1142f3f13d8473e7476b7d0658db0614a200c9c3a90eefaa5c95a689390f" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:00.938527 systemd[1]: Started cri-containerd-c08fe6bdf2058392c6765d8da2e407af67b4ed2b35aec76c281dcd1de9f63df8.scope - libcontainer container c08fe6bdf2058392c6765d8da2e407af67b4ed2b35aec76c281dcd1de9f63df8. 
May 27 03:17:00.968592 kubelet[2697]: E0527 03:17:00.968546 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.968592 kubelet[2697]: W0527 03:17:00.968576 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.968786 kubelet[2697]: E0527 03:17:00.968623 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.968963 kubelet[2697]: E0527 03:17:00.968947 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.968963 kubelet[2697]: W0527 03:17:00.968961 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.969035 kubelet[2697]: E0527 03:17:00.968979 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.969336 kubelet[2697]: E0527 03:17:00.969302 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.969336 kubelet[2697]: W0527 03:17:00.969319 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.969422 kubelet[2697]: E0527 03:17:00.969341 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.969751 kubelet[2697]: E0527 03:17:00.969713 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.969751 kubelet[2697]: W0527 03:17:00.969741 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.969851 kubelet[2697]: E0527 03:17:00.969774 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.970214 kubelet[2697]: E0527 03:17:00.970109 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.970214 kubelet[2697]: W0527 03:17:00.970123 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.970214 kubelet[2697]: E0527 03:17:00.970144 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.970489 kubelet[2697]: E0527 03:17:00.970468 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.970489 kubelet[2697]: W0527 03:17:00.970483 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.970580 kubelet[2697]: E0527 03:17:00.970494 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.970866 kubelet[2697]: E0527 03:17:00.970851 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.970866 kubelet[2697]: W0527 03:17:00.970862 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.971031 kubelet[2697]: E0527 03:17:00.971007 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.971106 kubelet[2697]: E0527 03:17:00.971093 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.971136 kubelet[2697]: W0527 03:17:00.971105 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.971163 kubelet[2697]: E0527 03:17:00.971155 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.971374 kubelet[2697]: E0527 03:17:00.971355 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.971374 kubelet[2697]: W0527 03:17:00.971367 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.971453 kubelet[2697]: E0527 03:17:00.971419 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.971685 kubelet[2697]: E0527 03:17:00.971637 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.971777 kubelet[2697]: W0527 03:17:00.971755 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.971885 kubelet[2697]: E0527 03:17:00.971874 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.972163 kubelet[2697]: E0527 03:17:00.972142 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.972313 kubelet[2697]: W0527 03:17:00.972238 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.972313 kubelet[2697]: E0527 03:17:00.972273 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.972581 kubelet[2697]: E0527 03:17:00.972570 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.972676 kubelet[2697]: W0527 03:17:00.972658 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.972910 kubelet[2697]: E0527 03:17:00.972863 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.973108 kubelet[2697]: E0527 03:17:00.973053 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.973188 kubelet[2697]: W0527 03:17:00.973166 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.973465 kubelet[2697]: E0527 03:17:00.973237 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.973975 kubelet[2697]: E0527 03:17:00.973943 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.974028 kubelet[2697]: W0527 03:17:00.973978 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.974252 kubelet[2697]: E0527 03:17:00.974178 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.975120 kubelet[2697]: E0527 03:17:00.974848 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.975120 kubelet[2697]: W0527 03:17:00.974861 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.975120 kubelet[2697]: E0527 03:17:00.974978 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.975328 kubelet[2697]: E0527 03:17:00.975297 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.975328 kubelet[2697]: W0527 03:17:00.975313 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.975485 kubelet[2697]: E0527 03:17:00.975469 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.976399 kubelet[2697]: E0527 03:17:00.976374 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.976399 kubelet[2697]: W0527 03:17:00.976385 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.976537 kubelet[2697]: E0527 03:17:00.976514 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.976795 kubelet[2697]: E0527 03:17:00.976772 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.976795 kubelet[2697]: W0527 03:17:00.976782 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.976929 kubelet[2697]: E0527 03:17:00.976909 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.977740 kubelet[2697]: E0527 03:17:00.977717 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.977740 kubelet[2697]: W0527 03:17:00.977729 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.977815 kubelet[2697]: E0527 03:17:00.977792 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.977940 kubelet[2697]: E0527 03:17:00.977922 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.977940 kubelet[2697]: W0527 03:17:00.977931 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.978000 kubelet[2697]: E0527 03:17:00.977985 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.978149 kubelet[2697]: E0527 03:17:00.978122 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.978149 kubelet[2697]: W0527 03:17:00.978132 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.978149 kubelet[2697]: E0527 03:17:00.978141 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.978391 kubelet[2697]: E0527 03:17:00.978379 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.978391 kubelet[2697]: W0527 03:17:00.978388 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.978444 kubelet[2697]: E0527 03:17:00.978407 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.978586 kubelet[2697]: E0527 03:17:00.978575 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.978586 kubelet[2697]: W0527 03:17:00.978584 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.978633 kubelet[2697]: E0527 03:17:00.978605 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.978764 kubelet[2697]: E0527 03:17:00.978753 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.978764 kubelet[2697]: W0527 03:17:00.978761 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.978821 kubelet[2697]: E0527 03:17:00.978769 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:00.978996 kubelet[2697]: E0527 03:17:00.978984 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.978996 kubelet[2697]: W0527 03:17:00.978993 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.979048 kubelet[2697]: E0527 03:17:00.979001 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:00.981285 kubelet[2697]: E0527 03:17:00.981272 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:00.981285 kubelet[2697]: W0527 03:17:00.981281 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:00.981353 kubelet[2697]: E0527 03:17:00.981290 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:01.025549 containerd[1563]: time="2025-05-27T03:17:01.025470571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4kxw,Uid:e5224a55-f955-415e-aff7-ba1246f0a0dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"c08fe6bdf2058392c6765d8da2e407af67b4ed2b35aec76c281dcd1de9f63df8\"" May 27 03:17:02.144993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2084676287.mount: Deactivated successfully. 
May 27 03:17:02.419956 kubelet[2697]: E0527 03:17:02.419796 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdwt9" podUID="55c9e866-18db-4ca8-a823-aa7f4c344902" May 27 03:17:03.674679 containerd[1563]: time="2025-05-27T03:17:03.674607666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:03.675677 containerd[1563]: time="2025-05-27T03:17:03.675601707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 27 03:17:03.677042 containerd[1563]: time="2025-05-27T03:17:03.676993275Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:03.679627 containerd[1563]: time="2025-05-27T03:17:03.679587137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:03.680117 containerd[1563]: time="2025-05-27T03:17:03.680053264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.991607658s" May 27 03:17:03.680117 containerd[1563]: time="2025-05-27T03:17:03.680116843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 27 03:17:03.689738 containerd[1563]: time="2025-05-27T03:17:03.689689407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 03:17:03.704887 containerd[1563]: time="2025-05-27T03:17:03.704834095Z" level=info msg="CreateContainer within sandbox \"1cfcaaee843616fbcf9ea16d7b6832a0a6edb8b8cfcc54e4e0675e4547b2fe09\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 03:17:03.721106 containerd[1563]: time="2025-05-27T03:17:03.718951661Z" level=info msg="Container 7c787375733b265ae6b74f0936d099d2c6b35fc6fa0476c999f9691a7453d922: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:03.730127 containerd[1563]: time="2025-05-27T03:17:03.729928887Z" level=info msg="CreateContainer within sandbox \"1cfcaaee843616fbcf9ea16d7b6832a0a6edb8b8cfcc54e4e0675e4547b2fe09\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7c787375733b265ae6b74f0936d099d2c6b35fc6fa0476c999f9691a7453d922\"" May 27 03:17:03.731307 containerd[1563]: time="2025-05-27T03:17:03.731273688Z" level=info msg="StartContainer for \"7c787375733b265ae6b74f0936d099d2c6b35fc6fa0476c999f9691a7453d922\"" May 27 03:17:03.732633 containerd[1563]: time="2025-05-27T03:17:03.732594733Z" level=info msg="connecting to shim 7c787375733b265ae6b74f0936d099d2c6b35fc6fa0476c999f9691a7453d922" address="unix:///run/containerd/s/0f2bf21ab550cb0357252753270c22b550fe3e84ef5ae396ce34776055161d67" protocol=ttrpc version=3 May 27 03:17:03.770874 systemd[1]: Started cri-containerd-7c787375733b265ae6b74f0936d099d2c6b35fc6fa0476c999f9691a7453d922.scope - libcontainer container 7c787375733b265ae6b74f0936d099d2c6b35fc6fa0476c999f9691a7453d922. 
May 27 03:17:03.831986 containerd[1563]: time="2025-05-27T03:17:03.831941949Z" level=info msg="StartContainer for \"7c787375733b265ae6b74f0936d099d2c6b35fc6fa0476c999f9691a7453d922\" returns successfully" May 27 03:17:04.420264 kubelet[2697]: E0527 03:17:04.420191 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdwt9" podUID="55c9e866-18db-4ca8-a823-aa7f4c344902" May 27 03:17:04.508170 kubelet[2697]: E0527 03:17:04.508137 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:04.571819 kubelet[2697]: E0527 03:17:04.571763 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.571819 kubelet[2697]: W0527 03:17:04.571795 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.577823 kubelet[2697]: E0527 03:17:04.577774 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.578124 kubelet[2697]: E0527 03:17:04.578036 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.578124 kubelet[2697]: W0527 03:17:04.578055 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.578124 kubelet[2697]: E0527 03:17:04.578071 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.578356 kubelet[2697]: E0527 03:17:04.578285 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.578356 kubelet[2697]: W0527 03:17:04.578294 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.578356 kubelet[2697]: E0527 03:17:04.578304 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.578541 kubelet[2697]: E0527 03:17:04.578517 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.578541 kubelet[2697]: W0527 03:17:04.578527 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.578541 kubelet[2697]: E0527 03:17:04.578535 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.578735 kubelet[2697]: E0527 03:17:04.578715 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.578735 kubelet[2697]: W0527 03:17:04.578726 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.578735 kubelet[2697]: E0527 03:17:04.578734 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.578900 kubelet[2697]: E0527 03:17:04.578880 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.578900 kubelet[2697]: W0527 03:17:04.578890 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.578900 kubelet[2697]: E0527 03:17:04.578896 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.579050 kubelet[2697]: E0527 03:17:04.579032 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.579050 kubelet[2697]: W0527 03:17:04.579048 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.579125 kubelet[2697]: E0527 03:17:04.579058 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.579300 kubelet[2697]: E0527 03:17:04.579274 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.579300 kubelet[2697]: W0527 03:17:04.579286 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.579300 kubelet[2697]: E0527 03:17:04.579296 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.579477 kubelet[2697]: E0527 03:17:04.579461 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.579477 kubelet[2697]: W0527 03:17:04.579472 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.579548 kubelet[2697]: E0527 03:17:04.579481 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.579652 kubelet[2697]: E0527 03:17:04.579635 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.579652 kubelet[2697]: W0527 03:17:04.579646 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.579707 kubelet[2697]: E0527 03:17:04.579655 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.579812 kubelet[2697]: E0527 03:17:04.579793 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.579812 kubelet[2697]: W0527 03:17:04.579801 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.579812 kubelet[2697]: E0527 03:17:04.579808 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.579971 kubelet[2697]: E0527 03:17:04.579954 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.579971 kubelet[2697]: W0527 03:17:04.579964 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.580030 kubelet[2697]: E0527 03:17:04.579973 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.580161 kubelet[2697]: E0527 03:17:04.580148 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.580161 kubelet[2697]: W0527 03:17:04.580158 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.580229 kubelet[2697]: E0527 03:17:04.580166 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.580324 kubelet[2697]: E0527 03:17:04.580313 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.580324 kubelet[2697]: W0527 03:17:04.580321 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.580373 kubelet[2697]: E0527 03:17:04.580330 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.580507 kubelet[2697]: E0527 03:17:04.580489 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.580507 kubelet[2697]: W0527 03:17:04.580504 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.580582 kubelet[2697]: E0527 03:17:04.580518 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.600072 kubelet[2697]: E0527 03:17:04.600013 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.600072 kubelet[2697]: W0527 03:17:04.600040 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.600072 kubelet[2697]: E0527 03:17:04.600070 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.600406 kubelet[2697]: E0527 03:17:04.600377 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.600406 kubelet[2697]: W0527 03:17:04.600392 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.600406 kubelet[2697]: E0527 03:17:04.600406 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.600721 kubelet[2697]: E0527 03:17:04.600697 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.600721 kubelet[2697]: W0527 03:17:04.600715 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.600815 kubelet[2697]: E0527 03:17:04.600738 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.601048 kubelet[2697]: E0527 03:17:04.601015 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.601103 kubelet[2697]: W0527 03:17:04.601045 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.601103 kubelet[2697]: E0527 03:17:04.601097 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.601354 kubelet[2697]: E0527 03:17:04.601338 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.601354 kubelet[2697]: W0527 03:17:04.601350 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.601414 kubelet[2697]: E0527 03:17:04.601364 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.601602 kubelet[2697]: E0527 03:17:04.601583 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.601629 kubelet[2697]: W0527 03:17:04.601600 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.601629 kubelet[2697]: E0527 03:17:04.601620 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.601910 kubelet[2697]: E0527 03:17:04.601887 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.601910 kubelet[2697]: W0527 03:17:04.601901 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.601991 kubelet[2697]: E0527 03:17:04.601934 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.602131 kubelet[2697]: E0527 03:17:04.602114 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.602131 kubelet[2697]: W0527 03:17:04.602128 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.602227 kubelet[2697]: E0527 03:17:04.602159 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.602397 kubelet[2697]: E0527 03:17:04.602382 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.602397 kubelet[2697]: W0527 03:17:04.602393 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.602482 kubelet[2697]: E0527 03:17:04.602408 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.602681 kubelet[2697]: E0527 03:17:04.602661 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.602681 kubelet[2697]: W0527 03:17:04.602676 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.602737 kubelet[2697]: E0527 03:17:04.602691 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.602869 kubelet[2697]: E0527 03:17:04.602851 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.602869 kubelet[2697]: W0527 03:17:04.602861 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.602927 kubelet[2697]: E0527 03:17:04.602873 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.603063 kubelet[2697]: E0527 03:17:04.603045 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.603099 kubelet[2697]: W0527 03:17:04.603061 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.603129 kubelet[2697]: E0527 03:17:04.603103 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.603299 kubelet[2697]: E0527 03:17:04.603284 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.603299 kubelet[2697]: W0527 03:17:04.603294 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.603361 kubelet[2697]: E0527 03:17:04.603306 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.603534 kubelet[2697]: E0527 03:17:04.603513 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.603534 kubelet[2697]: W0527 03:17:04.603528 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.603597 kubelet[2697]: E0527 03:17:04.603541 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.603791 kubelet[2697]: E0527 03:17:04.603773 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.603791 kubelet[2697]: W0527 03:17:04.603787 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.603852 kubelet[2697]: E0527 03:17:04.603802 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.604143 kubelet[2697]: E0527 03:17:04.604128 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.604143 kubelet[2697]: W0527 03:17:04.604140 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.604230 kubelet[2697]: E0527 03:17:04.604155 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:04.604509 kubelet[2697]: E0527 03:17:04.604481 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.604509 kubelet[2697]: W0527 03:17:04.604498 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.604569 kubelet[2697]: E0527 03:17:04.604519 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:04.604748 kubelet[2697]: E0527 03:17:04.604731 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:04.604748 kubelet[2697]: W0527 03:17:04.604745 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:04.604793 kubelet[2697]: E0527 03:17:04.604757 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.511022 kubelet[2697]: I0527 03:17:05.510986 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:17:05.511680 kubelet[2697]: E0527 03:17:05.511516 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:05.586059 kubelet[2697]: E0527 03:17:05.586010 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.586059 kubelet[2697]: W0527 03:17:05.586037 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.586059 kubelet[2697]: E0527 03:17:05.586060 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.586306 kubelet[2697]: E0527 03:17:05.586263 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.586306 kubelet[2697]: W0527 03:17:05.586270 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.586306 kubelet[2697]: E0527 03:17:05.586278 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.586448 kubelet[2697]: E0527 03:17:05.586435 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.586448 kubelet[2697]: W0527 03:17:05.586443 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.586509 kubelet[2697]: E0527 03:17:05.586450 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.586607 kubelet[2697]: E0527 03:17:05.586594 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.586607 kubelet[2697]: W0527 03:17:05.586602 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.586678 kubelet[2697]: E0527 03:17:05.586609 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.586787 kubelet[2697]: E0527 03:17:05.586770 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.586826 kubelet[2697]: W0527 03:17:05.586785 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.586826 kubelet[2697]: E0527 03:17:05.586798 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.586988 kubelet[2697]: E0527 03:17:05.586962 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.586988 kubelet[2697]: W0527 03:17:05.586971 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.586988 kubelet[2697]: E0527 03:17:05.586978 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.587161 kubelet[2697]: E0527 03:17:05.587147 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.587161 kubelet[2697]: W0527 03:17:05.587159 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.587233 kubelet[2697]: E0527 03:17:05.587185 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.587366 kubelet[2697]: E0527 03:17:05.587350 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.587366 kubelet[2697]: W0527 03:17:05.587361 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.587527 kubelet[2697]: E0527 03:17:05.587371 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.587570 kubelet[2697]: E0527 03:17:05.587531 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.587570 kubelet[2697]: W0527 03:17:05.587537 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.587570 kubelet[2697]: E0527 03:17:05.587544 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.587939 kubelet[2697]: E0527 03:17:05.587715 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.587939 kubelet[2697]: W0527 03:17:05.587783 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.587939 kubelet[2697]: E0527 03:17:05.587794 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.588098 kubelet[2697]: E0527 03:17:05.588074 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.588154 kubelet[2697]: W0527 03:17:05.588104 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.588154 kubelet[2697]: E0527 03:17:05.588117 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.588446 kubelet[2697]: E0527 03:17:05.588327 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.588446 kubelet[2697]: W0527 03:17:05.588350 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.588446 kubelet[2697]: E0527 03:17:05.588360 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.588638 kubelet[2697]: E0527 03:17:05.588626 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.588706 kubelet[2697]: W0527 03:17:05.588679 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.588706 kubelet[2697]: E0527 03:17:05.588692 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.588976 kubelet[2697]: E0527 03:17:05.588852 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.588976 kubelet[2697]: W0527 03:17:05.588865 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.588976 kubelet[2697]: E0527 03:17:05.588877 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.589291 kubelet[2697]: E0527 03:17:05.589275 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.589291 kubelet[2697]: W0527 03:17:05.589288 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.589379 kubelet[2697]: E0527 03:17:05.589298 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.609758 kubelet[2697]: E0527 03:17:05.609717 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.609758 kubelet[2697]: W0527 03:17:05.609749 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.610005 kubelet[2697]: E0527 03:17:05.609780 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.610195 kubelet[2697]: E0527 03:17:05.610164 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.610195 kubelet[2697]: W0527 03:17:05.610191 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.610298 kubelet[2697]: E0527 03:17:05.610204 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.610520 kubelet[2697]: E0527 03:17:05.610500 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.610520 kubelet[2697]: W0527 03:17:05.610514 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.610624 kubelet[2697]: E0527 03:17:05.610532 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.610773 kubelet[2697]: E0527 03:17:05.610716 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.610773 kubelet[2697]: W0527 03:17:05.610756 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.610773 kubelet[2697]: E0527 03:17:05.610770 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.611742 kubelet[2697]: E0527 03:17:05.611118 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.611742 kubelet[2697]: W0527 03:17:05.611265 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.611742 kubelet[2697]: E0527 03:17:05.611284 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.611742 kubelet[2697]: E0527 03:17:05.611533 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.611742 kubelet[2697]: W0527 03:17:05.611544 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.611742 kubelet[2697]: E0527 03:17:05.611586 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.611948 kubelet[2697]: E0527 03:17:05.611758 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.611948 kubelet[2697]: W0527 03:17:05.611767 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.611948 kubelet[2697]: E0527 03:17:05.611815 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.612036 kubelet[2697]: E0527 03:17:05.612016 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.612036 kubelet[2697]: W0527 03:17:05.612025 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.612112 kubelet[2697]: E0527 03:17:05.612039 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.612293 kubelet[2697]: E0527 03:17:05.612259 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.612293 kubelet[2697]: W0527 03:17:05.612284 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.612380 kubelet[2697]: E0527 03:17:05.612302 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.612512 kubelet[2697]: E0527 03:17:05.612483 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.612512 kubelet[2697]: W0527 03:17:05.612497 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.612565 kubelet[2697]: E0527 03:17:05.612514 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.612743 kubelet[2697]: E0527 03:17:05.612729 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.612789 kubelet[2697]: W0527 03:17:05.612753 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.612789 kubelet[2697]: E0527 03:17:05.612772 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.613128 kubelet[2697]: E0527 03:17:05.613060 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.613128 kubelet[2697]: W0527 03:17:05.613073 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.613128 kubelet[2697]: E0527 03:17:05.613107 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.613422 kubelet[2697]: E0527 03:17:05.613389 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.613422 kubelet[2697]: W0527 03:17:05.613406 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.613422 kubelet[2697]: E0527 03:17:05.613424 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.613624 kubelet[2697]: E0527 03:17:05.613606 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.613624 kubelet[2697]: W0527 03:17:05.613616 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.613670 kubelet[2697]: E0527 03:17:05.613636 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.613959 kubelet[2697]: E0527 03:17:05.613942 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.613959 kubelet[2697]: W0527 03:17:05.613957 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.614012 kubelet[2697]: E0527 03:17:05.613975 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.614202 kubelet[2697]: E0527 03:17:05.614185 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.614202 kubelet[2697]: W0527 03:17:05.614200 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.614260 kubelet[2697]: E0527 03:17:05.614212 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.614438 kubelet[2697]: E0527 03:17:05.614419 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.614438 kubelet[2697]: W0527 03:17:05.614433 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.614516 kubelet[2697]: E0527 03:17:05.614445 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 03:17:05.615232 kubelet[2697]: E0527 03:17:05.615199 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 03:17:05.615232 kubelet[2697]: W0527 03:17:05.615222 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 03:17:05.615232 kubelet[2697]: E0527 03:17:05.615234 2697 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 03:17:05.745829 containerd[1563]: time="2025-05-27T03:17:05.745758005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:05.747160 containerd[1563]: time="2025-05-27T03:17:05.747132310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 27 03:17:05.749228 containerd[1563]: time="2025-05-27T03:17:05.749110651Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:05.751618 containerd[1563]: time="2025-05-27T03:17:05.751551592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:05.752254 containerd[1563]: time="2025-05-27T03:17:05.752204400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 2.062281434s" May 27 03:17:05.752254 containerd[1563]: time="2025-05-27T03:17:05.752253182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 27 03:17:05.755803 containerd[1563]: time="2025-05-27T03:17:05.755738517Z" level=info msg="CreateContainer within sandbox \"c08fe6bdf2058392c6765d8da2e407af67b4ed2b35aec76c281dcd1de9f63df8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 03:17:05.767680 containerd[1563]: time="2025-05-27T03:17:05.767604648Z" level=info msg="Container b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:05.779938 containerd[1563]: time="2025-05-27T03:17:05.779883794Z" level=info msg="CreateContainer within sandbox \"c08fe6bdf2058392c6765d8da2e407af67b4ed2b35aec76c281dcd1de9f63df8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639\"" May 27 03:17:05.780590 containerd[1563]: time="2025-05-27T03:17:05.780536843Z" level=info msg="StartContainer for \"b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639\"" May 27 03:17:05.782299 containerd[1563]: time="2025-05-27T03:17:05.782266546Z" level=info msg="connecting to shim b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639" address="unix:///run/containerd/s/670e1142f3f13d8473e7476b7d0658db0614a200c9c3a90eefaa5c95a689390f" protocol=ttrpc version=3 May 27 03:17:05.810322 systemd[1]: Started cri-containerd-b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639.scope - libcontainer container 
b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639. May 27 03:17:05.872391 systemd[1]: cri-containerd-b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639.scope: Deactivated successfully. May 27 03:17:05.875275 containerd[1563]: time="2025-05-27T03:17:05.875227975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639\" id:\"b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639\" pid:3425 exited_at:{seconds:1748315825 nanos:874641661}" May 27 03:17:06.117415 containerd[1563]: time="2025-05-27T03:17:06.117277784Z" level=info msg="received exit event container_id:\"b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639\" id:\"b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639\" pid:3425 exited_at:{seconds:1748315825 nanos:874641661}" May 27 03:17:06.130249 containerd[1563]: time="2025-05-27T03:17:06.130197140Z" level=info msg="StartContainer for \"b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639\" returns successfully" May 27 03:17:06.147546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b062be3932c2f0a91271aa9fefd1d10656537c1ca7dfa88beefe4b2d8694d639-rootfs.mount: Deactivated successfully. 
May 27 03:17:06.467720 kubelet[2697]: E0527 03:17:06.420575 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdwt9" podUID="55c9e866-18db-4ca8-a823-aa7f4c344902" May 27 03:17:06.569671 kubelet[2697]: I0527 03:17:06.569575 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56dd8fbf9d-kt2zv" podStartSLOduration=3.56842501 podStartE2EDuration="6.569541451s" podCreationTimestamp="2025-05-27 03:17:00 +0000 UTC" firstStartedPulling="2025-05-27 03:17:00.688072473 +0000 UTC m=+19.370834480" lastFinishedPulling="2025-05-27 03:17:03.689188914 +0000 UTC m=+22.371950921" observedRunningTime="2025-05-27 03:17:04.521053537 +0000 UTC m=+23.203815544" watchObservedRunningTime="2025-05-27 03:17:06.569541451 +0000 UTC m=+25.252303468" May 27 03:17:07.519820 containerd[1563]: time="2025-05-27T03:17:07.519776976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 03:17:08.420676 kubelet[2697]: E0527 03:17:08.420609 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdwt9" podUID="55c9e866-18db-4ca8-a823-aa7f4c344902" May 27 03:17:10.420031 kubelet[2697]: E0527 03:17:10.419977 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdwt9" podUID="55c9e866-18db-4ca8-a823-aa7f4c344902" May 27 03:17:11.489938 containerd[1563]: time="2025-05-27T03:17:11.489856108Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:11.490850 containerd[1563]: time="2025-05-27T03:17:11.490817395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 27 03:17:11.492279 containerd[1563]: time="2025-05-27T03:17:11.492242061Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:11.494863 containerd[1563]: time="2025-05-27T03:17:11.494805980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:11.495560 containerd[1563]: time="2025-05-27T03:17:11.495473044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.97565449s" May 27 03:17:11.495560 containerd[1563]: time="2025-05-27T03:17:11.495517467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 27 03:17:11.498007 containerd[1563]: time="2025-05-27T03:17:11.497967010Z" level=info msg="CreateContainer within sandbox \"c08fe6bdf2058392c6765d8da2e407af67b4ed2b35aec76c281dcd1de9f63df8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 03:17:11.508050 containerd[1563]: time="2025-05-27T03:17:11.507983824Z" level=info msg="Container 235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9: CDI devices from CRI 
Config.CDIDevices: []" May 27 03:17:11.523373 containerd[1563]: time="2025-05-27T03:17:11.523299274Z" level=info msg="CreateContainer within sandbox \"c08fe6bdf2058392c6765d8da2e407af67b4ed2b35aec76c281dcd1de9f63df8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9\"" May 27 03:17:11.526045 containerd[1563]: time="2025-05-27T03:17:11.523999640Z" level=info msg="StartContainer for \"235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9\"" May 27 03:17:11.526045 containerd[1563]: time="2025-05-27T03:17:11.525963750Z" level=info msg="connecting to shim 235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9" address="unix:///run/containerd/s/670e1142f3f13d8473e7476b7d0658db0614a200c9c3a90eefaa5c95a689390f" protocol=ttrpc version=3 May 27 03:17:11.563374 systemd[1]: Started cri-containerd-235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9.scope - libcontainer container 235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9. 
May 27 03:17:11.623165 containerd[1563]: time="2025-05-27T03:17:11.622974160Z" level=info msg="StartContainer for \"235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9\" returns successfully" May 27 03:17:12.420630 kubelet[2697]: E0527 03:17:12.420563 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qdwt9" podUID="55c9e866-18db-4ca8-a823-aa7f4c344902" May 27 03:17:13.969616 containerd[1563]: time="2025-05-27T03:17:13.969480009Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:17:13.972716 systemd[1]: cri-containerd-235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9.scope: Deactivated successfully. May 27 03:17:13.973190 systemd[1]: cri-containerd-235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9.scope: Consumed 640ms CPU time, 175.2M memory peak, 3.9M read from disk, 170.9M written to disk. 
May 27 03:17:13.973997 containerd[1563]: time="2025-05-27T03:17:13.973793353Z" level=info msg="received exit event container_id:\"235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9\" id:\"235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9\" pid:3486 exited_at:{seconds:1748315833 nanos:973509310}" May 27 03:17:13.974334 containerd[1563]: time="2025-05-27T03:17:13.974309734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9\" id:\"235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9\" pid:3486 exited_at:{seconds:1748315833 nanos:973509310}" May 27 03:17:14.003003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-235328529eae7ca1ed1c264c0864d686935f74016ca3a121c9c0cdf2c808dff9-rootfs.mount: Deactivated successfully. May 27 03:17:14.044125 kubelet[2697]: I0527 03:17:14.043724 2697 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:17:14.261909 kubelet[2697]: W0527 03:17:14.261726 2697 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object May 27 03:17:14.261909 kubelet[2697]: E0527 03:17:14.261778 2697 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 27 03:17:14.266272 kubelet[2697]: W0527 03:17:14.266076 2697 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": 
failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object May 27 03:17:14.266389 kubelet[2697]: E0527 03:17:14.266269 2697 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 27 03:17:14.267813 systemd[1]: Created slice kubepods-besteffort-pod1f29b750_f6dd_477a_b679_fd6d52c678b0.slice - libcontainer container kubepods-besteffort-pod1f29b750_f6dd_477a_b679_fd6d52c678b0.slice. May 27 03:17:14.277659 systemd[1]: Created slice kubepods-besteffort-podd889009d_a1c4_4536_b30f_9af18f8da291.slice - libcontainer container kubepods-besteffort-podd889009d_a1c4_4536_b30f_9af18f8da291.slice. May 27 03:17:14.287340 systemd[1]: Created slice kubepods-burstable-podc880cb5f_e0ce_42d6_a0d4_a4c7c967a072.slice - libcontainer container kubepods-burstable-podc880cb5f_e0ce_42d6_a0d4_a4c7c967a072.slice. May 27 03:17:14.297103 systemd[1]: Created slice kubepods-besteffort-pod39fd061e_2caa_44dd_8e80_7921de78311a.slice - libcontainer container kubepods-besteffort-pod39fd061e_2caa_44dd_8e80_7921de78311a.slice. May 27 03:17:14.308642 systemd[1]: Created slice kubepods-burstable-pod70b90810_5010_4e79_a12b_6628d0a8bf38.slice - libcontainer container kubepods-burstable-pod70b90810_5010_4e79_a12b_6628d0a8bf38.slice. May 27 03:17:14.316916 systemd[1]: Created slice kubepods-besteffort-pod25f7f085_7b43_4984_9a62_d6377e95fb7b.slice - libcontainer container kubepods-besteffort-pod25f7f085_7b43_4984_9a62_d6377e95fb7b.slice. 
May 27 03:17:14.324300 systemd[1]: Created slice kubepods-besteffort-podbc63af11_46c2_437f_be08_defcd96e797a.slice - libcontainer container kubepods-besteffort-podbc63af11_46c2_437f_be08_defcd96e797a.slice. May 27 03:17:14.373201 kubelet[2697]: I0527 03:17:14.373144 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52rs4\" (UniqueName: \"kubernetes.io/projected/39fd061e-2caa-44dd-8e80-7921de78311a-kube-api-access-52rs4\") pod \"calico-apiserver-6d77d9d7cb-x6phb\" (UID: \"39fd061e-2caa-44dd-8e80-7921de78311a\") " pod="calico-apiserver/calico-apiserver-6d77d9d7cb-x6phb" May 27 03:17:14.373201 kubelet[2697]: I0527 03:17:14.373209 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z48tt\" (UniqueName: \"kubernetes.io/projected/25f7f085-7b43-4984-9a62-d6377e95fb7b-kube-api-access-z48tt\") pod \"goldmane-78d55f7ddc-5ffpl\" (UID: \"25f7f085-7b43-4984-9a62-d6377e95fb7b\") " pod="calico-system/goldmane-78d55f7ddc-5ffpl" May 27 03:17:14.373398 kubelet[2697]: I0527 03:17:14.373230 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc63af11-46c2-437f-be08-defcd96e797a-calico-apiserver-certs\") pod \"calico-apiserver-6d77d9d7cb-xz6g4\" (UID: \"bc63af11-46c2-437f-be08-defcd96e797a\") " pod="calico-apiserver/calico-apiserver-6d77d9d7cb-xz6g4" May 27 03:17:14.373398 kubelet[2697]: I0527 03:17:14.373258 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-785zf\" (UniqueName: \"kubernetes.io/projected/70b90810-5010-4e79-a12b-6628d0a8bf38-kube-api-access-785zf\") pod \"coredns-668d6bf9bc-2fprc\" (UID: \"70b90810-5010-4e79-a12b-6628d0a8bf38\") " pod="kube-system/coredns-668d6bf9bc-2fprc" May 27 03:17:14.373398 kubelet[2697]: I0527 03:17:14.373286 2697 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f29b750-f6dd-477a-b679-fd6d52c678b0-tigera-ca-bundle\") pod \"calico-kube-controllers-7f7d746588-2wvg2\" (UID: \"1f29b750-f6dd-477a-b679-fd6d52c678b0\") " pod="calico-system/calico-kube-controllers-7f7d746588-2wvg2" May 27 03:17:14.373398 kubelet[2697]: I0527 03:17:14.373316 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d889009d-a1c4-4536-b30f-9af18f8da291-whisker-backend-key-pair\") pod \"whisker-6dbcb77798-ph8ld\" (UID: \"d889009d-a1c4-4536-b30f-9af18f8da291\") " pod="calico-system/whisker-6dbcb77798-ph8ld" May 27 03:17:14.373398 kubelet[2697]: I0527 03:17:14.373336 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/25f7f085-7b43-4984-9a62-d6377e95fb7b-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-5ffpl\" (UID: \"25f7f085-7b43-4984-9a62-d6377e95fb7b\") " pod="calico-system/goldmane-78d55f7ddc-5ffpl" May 27 03:17:14.373517 kubelet[2697]: I0527 03:17:14.373355 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpzcx\" (UniqueName: \"kubernetes.io/projected/d889009d-a1c4-4536-b30f-9af18f8da291-kube-api-access-wpzcx\") pod \"whisker-6dbcb77798-ph8ld\" (UID: \"d889009d-a1c4-4536-b30f-9af18f8da291\") " pod="calico-system/whisker-6dbcb77798-ph8ld" May 27 03:17:14.373517 kubelet[2697]: I0527 03:17:14.373375 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f7f085-7b43-4984-9a62-d6377e95fb7b-config\") pod \"goldmane-78d55f7ddc-5ffpl\" (UID: \"25f7f085-7b43-4984-9a62-d6377e95fb7b\") " pod="calico-system/goldmane-78d55f7ddc-5ffpl" May 27 03:17:14.373517 
kubelet[2697]: I0527 03:17:14.373398 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cq9j\" (UniqueName: \"kubernetes.io/projected/bc63af11-46c2-437f-be08-defcd96e797a-kube-api-access-2cq9j\") pod \"calico-apiserver-6d77d9d7cb-xz6g4\" (UID: \"bc63af11-46c2-437f-be08-defcd96e797a\") " pod="calico-apiserver/calico-apiserver-6d77d9d7cb-xz6g4" May 27 03:17:14.373517 kubelet[2697]: I0527 03:17:14.373479 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25f7f085-7b43-4984-9a62-d6377e95fb7b-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-5ffpl\" (UID: \"25f7f085-7b43-4984-9a62-d6377e95fb7b\") " pod="calico-system/goldmane-78d55f7ddc-5ffpl" May 27 03:17:14.373517 kubelet[2697]: I0527 03:17:14.373511 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d889009d-a1c4-4536-b30f-9af18f8da291-whisker-ca-bundle\") pod \"whisker-6dbcb77798-ph8ld\" (UID: \"d889009d-a1c4-4536-b30f-9af18f8da291\") " pod="calico-system/whisker-6dbcb77798-ph8ld" May 27 03:17:14.373640 kubelet[2697]: I0527 03:17:14.373535 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9qkm\" (UniqueName: \"kubernetes.io/projected/c880cb5f-e0ce-42d6-a0d4-a4c7c967a072-kube-api-access-q9qkm\") pod \"coredns-668d6bf9bc-gsjbd\" (UID: \"c880cb5f-e0ce-42d6-a0d4-a4c7c967a072\") " pod="kube-system/coredns-668d6bf9bc-gsjbd" May 27 03:17:14.373640 kubelet[2697]: I0527 03:17:14.373553 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b90810-5010-4e79-a12b-6628d0a8bf38-config-volume\") pod \"coredns-668d6bf9bc-2fprc\" (UID: \"70b90810-5010-4e79-a12b-6628d0a8bf38\") " 
pod="kube-system/coredns-668d6bf9bc-2fprc" May 27 03:17:14.373640 kubelet[2697]: I0527 03:17:14.373591 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gvqr\" (UniqueName: \"kubernetes.io/projected/1f29b750-f6dd-477a-b679-fd6d52c678b0-kube-api-access-4gvqr\") pod \"calico-kube-controllers-7f7d746588-2wvg2\" (UID: \"1f29b750-f6dd-477a-b679-fd6d52c678b0\") " pod="calico-system/calico-kube-controllers-7f7d746588-2wvg2" May 27 03:17:14.373640 kubelet[2697]: I0527 03:17:14.373614 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c880cb5f-e0ce-42d6-a0d4-a4c7c967a072-config-volume\") pod \"coredns-668d6bf9bc-gsjbd\" (UID: \"c880cb5f-e0ce-42d6-a0d4-a4c7c967a072\") " pod="kube-system/coredns-668d6bf9bc-gsjbd" May 27 03:17:14.373640 kubelet[2697]: I0527 03:17:14.373635 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/39fd061e-2caa-44dd-8e80-7921de78311a-calico-apiserver-certs\") pod \"calico-apiserver-6d77d9d7cb-x6phb\" (UID: \"39fd061e-2caa-44dd-8e80-7921de78311a\") " pod="calico-apiserver/calico-apiserver-6d77d9d7cb-x6phb" May 27 03:17:14.426327 systemd[1]: Created slice kubepods-besteffort-pod55c9e866_18db_4ca8_a823_aa7f4c344902.slice - libcontainer container kubepods-besteffort-pod55c9e866_18db_4ca8_a823_aa7f4c344902.slice. 
May 27 03:17:14.429126 containerd[1563]: time="2025-05-27T03:17:14.429070338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qdwt9,Uid:55c9e866-18db-4ca8-a823-aa7f4c344902,Namespace:calico-system,Attempt:0,}" May 27 03:17:14.552413 containerd[1563]: time="2025-05-27T03:17:14.552359546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 03:17:14.576630 containerd[1563]: time="2025-05-27T03:17:14.576557031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7d746588-2wvg2,Uid:1f29b750-f6dd-477a-b679-fd6d52c678b0,Namespace:calico-system,Attempt:0,}" May 27 03:17:14.583068 containerd[1563]: time="2025-05-27T03:17:14.583013409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbcb77798-ph8ld,Uid:d889009d-a1c4-4536-b30f-9af18f8da291,Namespace:calico-system,Attempt:0,}" May 27 03:17:14.593731 kubelet[2697]: E0527 03:17:14.593403 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:14.597335 containerd[1563]: time="2025-05-27T03:17:14.597029558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gsjbd,Uid:c880cb5f-e0ce-42d6-a0d4-a4c7c967a072,Namespace:kube-system,Attempt:0,}" May 27 03:17:14.612295 kubelet[2697]: E0527 03:17:14.612235 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:14.613341 containerd[1563]: time="2025-05-27T03:17:14.613289292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fprc,Uid:70b90810-5010-4e79-a12b-6628d0a8bf38,Namespace:kube-system,Attempt:0,}" May 27 03:17:14.623372 containerd[1563]: time="2025-05-27T03:17:14.623234885Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-78d55f7ddc-5ffpl,Uid:25f7f085-7b43-4984-9a62-d6377e95fb7b,Namespace:calico-system,Attempt:0,}" May 27 03:17:14.662589 containerd[1563]: time="2025-05-27T03:17:14.662190453Z" level=error msg="Failed to destroy network for sandbox \"c52321aed63c6950f8e7e56f112dc17d7f0767387dfc1415eafb694643b18467\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.691708 containerd[1563]: time="2025-05-27T03:17:14.691638792Z" level=error msg="Failed to destroy network for sandbox \"268f9d60962c9aa8fe4da0ab5819f61645c6425d3087f092ad1ed148f3871d11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.716937 containerd[1563]: time="2025-05-27T03:17:14.716882813Z" level=error msg="Failed to destroy network for sandbox \"e1b24362a2ce5b96ee315fb380afcdf3d46e34ffaa9c9a56580d1a8e8616d939\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.723462 containerd[1563]: time="2025-05-27T03:17:14.723401688Z" level=error msg="Failed to destroy network for sandbox \"90cbb3fc4daf567d2b16caeeee887584e67e4f47ecc9531168c2f27d8579a034\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.729112 containerd[1563]: time="2025-05-27T03:17:14.728970730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dbcb77798-ph8ld,Uid:d889009d-a1c4-4536-b30f-9af18f8da291,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"90cbb3fc4daf567d2b16caeeee887584e67e4f47ecc9531168c2f27d8579a034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.729330 containerd[1563]: time="2025-05-27T03:17:14.729137934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gsjbd,Uid:c880cb5f-e0ce-42d6-a0d4-a4c7c967a072,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b24362a2ce5b96ee315fb380afcdf3d46e34ffaa9c9a56580d1a8e8616d939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.729583 containerd[1563]: time="2025-05-27T03:17:14.728986470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qdwt9,Uid:55c9e866-18db-4ca8-a823-aa7f4c344902,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52321aed63c6950f8e7e56f112dc17d7f0767387dfc1415eafb694643b18467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.729583 containerd[1563]: time="2025-05-27T03:17:14.729036654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7d746588-2wvg2,Uid:1f29b750-f6dd-477a-b679-fd6d52c678b0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"268f9d60962c9aa8fe4da0ab5819f61645c6425d3087f092ad1ed148f3871d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" May 27 03:17:14.748608 containerd[1563]: time="2025-05-27T03:17:14.748540561Z" level=error msg="Failed to destroy network for sandbox \"b0f62d6f569fecdc61076e8931bbbdf34ad701f73e43cd8f51b3a53a78d3d98f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.748764 containerd[1563]: time="2025-05-27T03:17:14.748727533Z" level=error msg="Failed to destroy network for sandbox \"e3ccfcd890bc0694f79a3fc9a224cde1aaab0947c4fdea0217e2880cdf35ed83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.749248 kubelet[2697]: E0527 03:17:14.749046 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cbb3fc4daf567d2b16caeeee887584e67e4f47ecc9531168c2f27d8579a034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.749361 kubelet[2697]: E0527 03:17:14.749278 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268f9d60962c9aa8fe4da0ab5819f61645c6425d3087f092ad1ed148f3871d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.749524 kubelet[2697]: E0527 03:17:14.749388 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cbb3fc4daf567d2b16caeeee887584e67e4f47ecc9531168c2f27d8579a034\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dbcb77798-ph8ld" May 27 03:17:14.749524 kubelet[2697]: E0527 03:17:14.749402 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268f9d60962c9aa8fe4da0ab5819f61645c6425d3087f092ad1ed148f3871d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f7d746588-2wvg2" May 27 03:17:14.749524 kubelet[2697]: E0527 03:17:14.749434 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268f9d60962c9aa8fe4da0ab5819f61645c6425d3087f092ad1ed148f3871d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f7d746588-2wvg2" May 27 03:17:14.749524 kubelet[2697]: E0527 03:17:14.749413 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cbb3fc4daf567d2b16caeeee887584e67e4f47ecc9531168c2f27d8579a034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dbcb77798-ph8ld" May 27 03:17:14.749640 kubelet[2697]: E0527 03:17:14.749184 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b24362a2ce5b96ee315fb380afcdf3d46e34ffaa9c9a56580d1a8e8616d939\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.749640 kubelet[2697]: E0527 03:17:14.749486 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b24362a2ce5b96ee315fb380afcdf3d46e34ffaa9c9a56580d1a8e8616d939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gsjbd" May 27 03:17:14.749640 kubelet[2697]: E0527 03:17:14.749501 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b24362a2ce5b96ee315fb380afcdf3d46e34ffaa9c9a56580d1a8e8616d939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gsjbd" May 27 03:17:14.749721 kubelet[2697]: E0527 03:17:14.749504 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f7d746588-2wvg2_calico-system(1f29b750-f6dd-477a-b679-fd6d52c678b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f7d746588-2wvg2_calico-system(1f29b750-f6dd-477a-b679-fd6d52c678b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"268f9d60962c9aa8fe4da0ab5819f61645c6425d3087f092ad1ed148f3871d11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f7d746588-2wvg2" podUID="1f29b750-f6dd-477a-b679-fd6d52c678b0" May 27 03:17:14.749721 
kubelet[2697]: E0527 03:17:14.749528 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gsjbd_kube-system(c880cb5f-e0ce-42d6-a0d4-a4c7c967a072)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gsjbd_kube-system(c880cb5f-e0ce-42d6-a0d4-a4c7c967a072)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1b24362a2ce5b96ee315fb380afcdf3d46e34ffaa9c9a56580d1a8e8616d939\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gsjbd" podUID="c880cb5f-e0ce-42d6-a0d4-a4c7c967a072" May 27 03:17:14.749721 kubelet[2697]: E0527 03:17:14.749210 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52321aed63c6950f8e7e56f112dc17d7f0767387dfc1415eafb694643b18467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.749886 kubelet[2697]: E0527 03:17:14.749559 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dbcb77798-ph8ld_calico-system(d889009d-a1c4-4536-b30f-9af18f8da291)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dbcb77798-ph8ld_calico-system(d889009d-a1c4-4536-b30f-9af18f8da291)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90cbb3fc4daf567d2b16caeeee887584e67e4f47ecc9531168c2f27d8579a034\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dbcb77798-ph8ld" 
podUID="d889009d-a1c4-4536-b30f-9af18f8da291" May 27 03:17:14.749886 kubelet[2697]: E0527 03:17:14.749570 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52321aed63c6950f8e7e56f112dc17d7f0767387dfc1415eafb694643b18467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qdwt9" May 27 03:17:14.749886 kubelet[2697]: E0527 03:17:14.749590 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52321aed63c6950f8e7e56f112dc17d7f0767387dfc1415eafb694643b18467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qdwt9" May 27 03:17:14.750026 kubelet[2697]: E0527 03:17:14.749625 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qdwt9_calico-system(55c9e866-18db-4ca8-a823-aa7f4c344902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qdwt9_calico-system(55c9e866-18db-4ca8-a823-aa7f4c344902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c52321aed63c6950f8e7e56f112dc17d7f0767387dfc1415eafb694643b18467\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qdwt9" podUID="55c9e866-18db-4ca8-a823-aa7f4c344902" May 27 03:17:14.751489 containerd[1563]: time="2025-05-27T03:17:14.751382240Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-78d55f7ddc-5ffpl,Uid:25f7f085-7b43-4984-9a62-d6377e95fb7b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f62d6f569fecdc61076e8931bbbdf34ad701f73e43cd8f51b3a53a78d3d98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.751766 kubelet[2697]: E0527 03:17:14.751709 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f62d6f569fecdc61076e8931bbbdf34ad701f73e43cd8f51b3a53a78d3d98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.751841 kubelet[2697]: E0527 03:17:14.751787 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f62d6f569fecdc61076e8931bbbdf34ad701f73e43cd8f51b3a53a78d3d98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-5ffpl" May 27 03:17:14.751841 kubelet[2697]: E0527 03:17:14.751815 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f62d6f569fecdc61076e8931bbbdf34ad701f73e43cd8f51b3a53a78d3d98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-5ffpl" May 27 03:17:14.751918 kubelet[2697]: E0527 03:17:14.751875 2697 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-5ffpl_calico-system(25f7f085-7b43-4984-9a62-d6377e95fb7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-5ffpl_calico-system(25f7f085-7b43-4984-9a62-d6377e95fb7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0f62d6f569fecdc61076e8931bbbdf34ad701f73e43cd8f51b3a53a78d3d98f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-5ffpl" podUID="25f7f085-7b43-4984-9a62-d6377e95fb7b" May 27 03:17:14.754218 containerd[1563]: time="2025-05-27T03:17:14.754150690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fprc,Uid:70b90810-5010-4e79-a12b-6628d0a8bf38,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ccfcd890bc0694f79a3fc9a224cde1aaab0947c4fdea0217e2880cdf35ed83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.754708 kubelet[2697]: E0527 03:17:14.754661 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ccfcd890bc0694f79a3fc9a224cde1aaab0947c4fdea0217e2880cdf35ed83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:14.754808 kubelet[2697]: E0527 03:17:14.754731 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ccfcd890bc0694f79a3fc9a224cde1aaab0947c4fdea0217e2880cdf35ed83\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2fprc" May 27 03:17:14.754808 kubelet[2697]: E0527 03:17:14.754753 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ccfcd890bc0694f79a3fc9a224cde1aaab0947c4fdea0217e2880cdf35ed83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2fprc" May 27 03:17:14.754870 kubelet[2697]: E0527 03:17:14.754813 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2fprc_kube-system(70b90810-5010-4e79-a12b-6628d0a8bf38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2fprc_kube-system(70b90810-5010-4e79-a12b-6628d0a8bf38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3ccfcd890bc0694f79a3fc9a224cde1aaab0947c4fdea0217e2880cdf35ed83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2fprc" podUID="70b90810-5010-4e79-a12b-6628d0a8bf38" May 27 03:17:15.005754 systemd[1]: run-netns-cni\x2d87802b17\x2d5c52\x2de6d7\x2d73b7\x2df7df1834e235.mount: Deactivated successfully. 
May 27 03:17:15.521271 kubelet[2697]: E0527 03:17:15.521198 2697 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 27 03:17:15.521271 kubelet[2697]: E0527 03:17:15.521256 2697 projected.go:194] Error preparing data for projected volume kube-api-access-2cq9j for pod calico-apiserver/calico-apiserver-6d77d9d7cb-xz6g4: failed to sync configmap cache: timed out waiting for the condition May 27 03:17:15.521770 kubelet[2697]: E0527 03:17:15.521341 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bc63af11-46c2-437f-be08-defcd96e797a-kube-api-access-2cq9j podName:bc63af11-46c2-437f-be08-defcd96e797a nodeName:}" failed. No retries permitted until 2025-05-27 03:17:16.02131668 +0000 UTC m=+34.704078687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2cq9j" (UniqueName: "kubernetes.io/projected/bc63af11-46c2-437f-be08-defcd96e797a-kube-api-access-2cq9j") pod "calico-apiserver-6d77d9d7cb-xz6g4" (UID: "bc63af11-46c2-437f-be08-defcd96e797a") : failed to sync configmap cache: timed out waiting for the condition May 27 03:17:15.524422 kubelet[2697]: E0527 03:17:15.524381 2697 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 27 03:17:15.524422 kubelet[2697]: E0527 03:17:15.524417 2697 projected.go:194] Error preparing data for projected volume kube-api-access-52rs4 for pod calico-apiserver/calico-apiserver-6d77d9d7cb-x6phb: failed to sync configmap cache: timed out waiting for the condition May 27 03:17:15.524529 kubelet[2697]: E0527 03:17:15.524472 2697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39fd061e-2caa-44dd-8e80-7921de78311a-kube-api-access-52rs4 podName:39fd061e-2caa-44dd-8e80-7921de78311a nodeName:}" failed. 
No retries permitted until 2025-05-27 03:17:16.024455536 +0000 UTC m=+34.707217543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-52rs4" (UniqueName: "kubernetes.io/projected/39fd061e-2caa-44dd-8e80-7921de78311a-kube-api-access-52rs4") pod "calico-apiserver-6d77d9d7cb-x6phb" (UID: "39fd061e-2caa-44dd-8e80-7921de78311a") : failed to sync configmap cache: timed out waiting for the condition May 27 03:17:16.104657 containerd[1563]: time="2025-05-27T03:17:16.104584725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77d9d7cb-x6phb,Uid:39fd061e-2caa-44dd-8e80-7921de78311a,Namespace:calico-apiserver,Attempt:0,}" May 27 03:17:16.128665 containerd[1563]: time="2025-05-27T03:17:16.128611721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77d9d7cb-xz6g4,Uid:bc63af11-46c2-437f-be08-defcd96e797a,Namespace:calico-apiserver,Attempt:0,}" May 27 03:17:16.241924 containerd[1563]: time="2025-05-27T03:17:16.241863711Z" level=error msg="Failed to destroy network for sandbox \"fd3832bc3d9c010cd4eb61a9cf89204d3f104ce60f013986f64e3530d3166cf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:16.244982 systemd[1]: run-netns-cni\x2dd073e9fd\x2d4faf\x2d4938\x2d415a\x2d006f70ae6c0a.mount: Deactivated successfully. 
May 27 03:17:16.247774 containerd[1563]: time="2025-05-27T03:17:16.247734286Z" level=error msg="Failed to destroy network for sandbox \"0fc70f4872fe56accfdcade431fbc0c7a0905d6d1443e4e378441d9871d7b74b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:16.293620 containerd[1563]: time="2025-05-27T03:17:16.293528153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77d9d7cb-xz6g4,Uid:bc63af11-46c2-437f-be08-defcd96e797a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd3832bc3d9c010cd4eb61a9cf89204d3f104ce60f013986f64e3530d3166cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:16.293893 kubelet[2697]: E0527 03:17:16.293835 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd3832bc3d9c010cd4eb61a9cf89204d3f104ce60f013986f64e3530d3166cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:16.293948 kubelet[2697]: E0527 03:17:16.293910 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd3832bc3d9c010cd4eb61a9cf89204d3f104ce60f013986f64e3530d3166cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d77d9d7cb-xz6g4" May 27 03:17:16.293948 kubelet[2697]: E0527 03:17:16.293936 2697 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd3832bc3d9c010cd4eb61a9cf89204d3f104ce60f013986f64e3530d3166cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d77d9d7cb-xz6g4" May 27 03:17:16.294049 kubelet[2697]: E0527 03:17:16.294002 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d77d9d7cb-xz6g4_calico-apiserver(bc63af11-46c2-437f-be08-defcd96e797a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d77d9d7cb-xz6g4_calico-apiserver(bc63af11-46c2-437f-be08-defcd96e797a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd3832bc3d9c010cd4eb61a9cf89204d3f104ce60f013986f64e3530d3166cf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d77d9d7cb-xz6g4" podUID="bc63af11-46c2-437f-be08-defcd96e797a" May 27 03:17:16.341209 containerd[1563]: time="2025-05-27T03:17:16.341141096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77d9d7cb-x6phb,Uid:39fd061e-2caa-44dd-8e80-7921de78311a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fc70f4872fe56accfdcade431fbc0c7a0905d6d1443e4e378441d9871d7b74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:16.341460 kubelet[2697]: E0527 03:17:16.341409 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"0fc70f4872fe56accfdcade431fbc0c7a0905d6d1443e4e378441d9871d7b74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:16.341531 kubelet[2697]: E0527 03:17:16.341476 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fc70f4872fe56accfdcade431fbc0c7a0905d6d1443e4e378441d9871d7b74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d77d9d7cb-x6phb" May 27 03:17:16.341531 kubelet[2697]: E0527 03:17:16.341496 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fc70f4872fe56accfdcade431fbc0c7a0905d6d1443e4e378441d9871d7b74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d77d9d7cb-x6phb" May 27 03:17:16.341605 kubelet[2697]: E0527 03:17:16.341541 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d77d9d7cb-x6phb_calico-apiserver(39fd061e-2caa-44dd-8e80-7921de78311a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d77d9d7cb-x6phb_calico-apiserver(39fd061e-2caa-44dd-8e80-7921de78311a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fc70f4872fe56accfdcade431fbc0c7a0905d6d1443e4e378441d9871d7b74b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d77d9d7cb-x6phb" podUID="39fd061e-2caa-44dd-8e80-7921de78311a" May 27 03:17:17.088540 systemd[1]: run-netns-cni\x2dbe1745d6\x2d1ba3\x2d40e2\x2dfb3a\x2d3b59c3471ecc.mount: Deactivated successfully. May 27 03:17:22.873469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788042616.mount: Deactivated successfully. May 27 03:17:24.905472 kubelet[2697]: E0527 03:17:24.903833 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:24.905939 containerd[1563]: time="2025-05-27T03:17:24.905285439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fprc,Uid:70b90810-5010-4e79-a12b-6628d0a8bf38,Namespace:kube-system,Attempt:0,}" May 27 03:17:25.192174 containerd[1563]: time="2025-05-27T03:17:25.192025773Z" level=error msg="Failed to destroy network for sandbox \"40111c2451bb8da4153a3ed59b66a0cdd2ba4fe797477dcb903f63b6aa5b6405\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:25.194834 systemd[1]: run-netns-cni\x2d0bc7944d\x2d4f6a\x2de67e\x2de705\x2d7684cd3cff1c.mount: Deactivated successfully. 
May 27 03:17:25.215787 containerd[1563]: time="2025-05-27T03:17:25.215706579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:25.277485 containerd[1563]: time="2025-05-27T03:17:25.277384733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fprc,Uid:70b90810-5010-4e79-a12b-6628d0a8bf38,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40111c2451bb8da4153a3ed59b66a0cdd2ba4fe797477dcb903f63b6aa5b6405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:25.278353 kubelet[2697]: E0527 03:17:25.277957 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40111c2451bb8da4153a3ed59b66a0cdd2ba4fe797477dcb903f63b6aa5b6405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:25.278353 kubelet[2697]: E0527 03:17:25.278045 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40111c2451bb8da4153a3ed59b66a0cdd2ba4fe797477dcb903f63b6aa5b6405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2fprc" May 27 03:17:25.278353 kubelet[2697]: E0527 03:17:25.278074 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40111c2451bb8da4153a3ed59b66a0cdd2ba4fe797477dcb903f63b6aa5b6405\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2fprc" May 27 03:17:25.278605 kubelet[2697]: E0527 03:17:25.278176 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2fprc_kube-system(70b90810-5010-4e79-a12b-6628d0a8bf38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2fprc_kube-system(70b90810-5010-4e79-a12b-6628d0a8bf38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40111c2451bb8da4153a3ed59b66a0cdd2ba4fe797477dcb903f63b6aa5b6405\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2fprc" podUID="70b90810-5010-4e79-a12b-6628d0a8bf38" May 27 03:17:25.360267 containerd[1563]: time="2025-05-27T03:17:25.338730533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 27 03:17:25.387527 containerd[1563]: time="2025-05-27T03:17:25.387440757Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:25.419303 containerd[1563]: time="2025-05-27T03:17:25.419238739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:25.420273 containerd[1563]: time="2025-05-27T03:17:25.420110906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag 
\"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 10.867699422s" May 27 03:17:25.420273 containerd[1563]: time="2025-05-27T03:17:25.420158926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 27 03:17:25.439447 containerd[1563]: time="2025-05-27T03:17:25.439381014Z" level=info msg="CreateContainer within sandbox \"c08fe6bdf2058392c6765d8da2e407af67b4ed2b35aec76c281dcd1de9f63df8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 03:17:25.637583 containerd[1563]: time="2025-05-27T03:17:25.637506125Z" level=info msg="Container f865ac333d74402caffad820616d84daafec6faa6a4b2a5a8e801e121cfaa7fa: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:26.057070 containerd[1563]: time="2025-05-27T03:17:26.056997826Z" level=info msg="CreateContainer within sandbox \"c08fe6bdf2058392c6765d8da2e407af67b4ed2b35aec76c281dcd1de9f63df8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f865ac333d74402caffad820616d84daafec6faa6a4b2a5a8e801e121cfaa7fa\"" May 27 03:17:26.057677 containerd[1563]: time="2025-05-27T03:17:26.057639559Z" level=info msg="StartContainer for \"f865ac333d74402caffad820616d84daafec6faa6a4b2a5a8e801e121cfaa7fa\"" May 27 03:17:26.059545 containerd[1563]: time="2025-05-27T03:17:26.059501985Z" level=info msg="connecting to shim f865ac333d74402caffad820616d84daafec6faa6a4b2a5a8e801e121cfaa7fa" address="unix:///run/containerd/s/670e1142f3f13d8473e7476b7d0658db0614a200c9c3a90eefaa5c95a689390f" protocol=ttrpc version=3 May 27 03:17:26.133306 systemd[1]: Started cri-containerd-f865ac333d74402caffad820616d84daafec6faa6a4b2a5a8e801e121cfaa7fa.scope - libcontainer container f865ac333d74402caffad820616d84daafec6faa6a4b2a5a8e801e121cfaa7fa. 
May 27 03:17:26.710304 kubelet[2697]: E0527 03:17:26.710250 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:26.710995 containerd[1563]: time="2025-05-27T03:17:26.710805805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gsjbd,Uid:c880cb5f-e0ce-42d6-a0d4-a4c7c967a072,Namespace:kube-system,Attempt:0,}" May 27 03:17:26.983194 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 03:17:26.984309 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 27 03:17:26.987272 containerd[1563]: time="2025-05-27T03:17:26.987059971Z" level=info msg="StartContainer for \"f865ac333d74402caffad820616d84daafec6faa6a4b2a5a8e801e121cfaa7fa\" returns successfully" May 27 03:17:27.121815 containerd[1563]: time="2025-05-27T03:17:27.121758894Z" level=error msg="Failed to destroy network for sandbox \"4d7fd5a3b13f75d3745f16abe14cff73c143bfc3a5b8c3c028cbf0592a17f88c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:27.124453 systemd[1]: run-netns-cni\x2d199a09b9\x2dd05a\x2dc159\x2d803a\x2dd77d4c83aafc.mount: Deactivated successfully. 
May 27 03:17:27.215311 containerd[1563]: time="2025-05-27T03:17:27.215224131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gsjbd,Uid:c880cb5f-e0ce-42d6-a0d4-a4c7c967a072,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d7fd5a3b13f75d3745f16abe14cff73c143bfc3a5b8c3c028cbf0592a17f88c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:27.215595 kubelet[2697]: E0527 03:17:27.215534 2697 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d7fd5a3b13f75d3745f16abe14cff73c143bfc3a5b8c3c028cbf0592a17f88c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 03:17:27.215652 kubelet[2697]: E0527 03:17:27.215616 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d7fd5a3b13f75d3745f16abe14cff73c143bfc3a5b8c3c028cbf0592a17f88c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gsjbd" May 27 03:17:27.215652 kubelet[2697]: E0527 03:17:27.215643 2697 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d7fd5a3b13f75d3745f16abe14cff73c143bfc3a5b8c3c028cbf0592a17f88c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-gsjbd" May 27 03:17:27.215737 kubelet[2697]: E0527 03:17:27.215695 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gsjbd_kube-system(c880cb5f-e0ce-42d6-a0d4-a4c7c967a072)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gsjbd_kube-system(c880cb5f-e0ce-42d6-a0d4-a4c7c967a072)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d7fd5a3b13f75d3745f16abe14cff73c143bfc3a5b8c3c028cbf0592a17f88c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gsjbd" podUID="c880cb5f-e0ce-42d6-a0d4-a4c7c967a072" May 27 03:17:27.515242 kubelet[2697]: I0527 03:17:27.515192 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:17:27.515740 kubelet[2697]: E0527 03:17:27.515701 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:27.749286 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:35902.service - OpenSSH per-connection server daemon (10.0.0.1:35902). May 27 03:17:27.832196 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 35902 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:27.834570 sshd-session[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:27.842958 systemd-logind[1547]: New session 8 of user core. May 27 03:17:27.851275 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 27 03:17:27.995246 kubelet[2697]: E0527 03:17:27.994774 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:28.079114 sshd[3907]: Connection closed by 10.0.0.1 port 35902 May 27 03:17:28.079350 sshd-session[3905]: pam_unix(sshd:session): session closed for user core May 27 03:17:28.086447 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit. May 27 03:17:28.091409 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:35902.service: Deactivated successfully. May 27 03:17:28.099632 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:17:28.105916 systemd-logind[1547]: Removed session 8. May 27 03:17:28.383999 kubelet[2697]: I0527 03:17:28.383275 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p4kxw" podStartSLOduration=3.985052007 podStartE2EDuration="28.383247545s" podCreationTimestamp="2025-05-27 03:17:00 +0000 UTC" firstStartedPulling="2025-05-27 03:17:01.027247366 +0000 UTC m=+19.710009373" lastFinishedPulling="2025-05-27 03:17:25.425442904 +0000 UTC m=+44.108204911" observedRunningTime="2025-05-27 03:17:28.044716679 +0000 UTC m=+46.727478706" watchObservedRunningTime="2025-05-27 03:17:28.383247545 +0000 UTC m=+47.066009552" May 27 03:17:28.421153 containerd[1563]: time="2025-05-27T03:17:28.421052833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77d9d7cb-x6phb,Uid:39fd061e-2caa-44dd-8e80-7921de78311a,Namespace:calico-apiserver,Attempt:0,}" May 27 03:17:28.421543 containerd[1563]: time="2025-05-27T03:17:28.421052933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7d746588-2wvg2,Uid:1f29b750-f6dd-477a-b679-fd6d52c678b0,Namespace:calico-system,Attempt:0,}" May 27 03:17:28.527671 kubelet[2697]: I0527 03:17:28.527614 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d889009d-a1c4-4536-b30f-9af18f8da291-whisker-ca-bundle\") pod \"d889009d-a1c4-4536-b30f-9af18f8da291\" (UID: \"d889009d-a1c4-4536-b30f-9af18f8da291\") " May 27 03:17:28.527671 kubelet[2697]: I0527 03:17:28.527662 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpzcx\" (UniqueName: \"kubernetes.io/projected/d889009d-a1c4-4536-b30f-9af18f8da291-kube-api-access-wpzcx\") pod \"d889009d-a1c4-4536-b30f-9af18f8da291\" (UID: \"d889009d-a1c4-4536-b30f-9af18f8da291\") " May 27 03:17:28.527880 kubelet[2697]: I0527 03:17:28.527710 2697 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d889009d-a1c4-4536-b30f-9af18f8da291-whisker-backend-key-pair\") pod \"d889009d-a1c4-4536-b30f-9af18f8da291\" (UID: \"d889009d-a1c4-4536-b30f-9af18f8da291\") " May 27 03:17:28.528275 kubelet[2697]: I0527 03:17:28.528215 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d889009d-a1c4-4536-b30f-9af18f8da291-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d889009d-a1c4-4536-b30f-9af18f8da291" (UID: "d889009d-a1c4-4536-b30f-9af18f8da291"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 03:17:28.532291 kubelet[2697]: I0527 03:17:28.532248 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d889009d-a1c4-4536-b30f-9af18f8da291-kube-api-access-wpzcx" (OuterVolumeSpecName: "kube-api-access-wpzcx") pod "d889009d-a1c4-4536-b30f-9af18f8da291" (UID: "d889009d-a1c4-4536-b30f-9af18f8da291"). InnerVolumeSpecName "kube-api-access-wpzcx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 03:17:28.533223 kubelet[2697]: I0527 03:17:28.533191 2697 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d889009d-a1c4-4536-b30f-9af18f8da291-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d889009d-a1c4-4536-b30f-9af18f8da291" (UID: "d889009d-a1c4-4536-b30f-9af18f8da291"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 03:17:28.533587 systemd[1]: var-lib-kubelet-pods-d889009d\x2da1c4\x2d4536\x2db30f\x2d9af18f8da291-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwpzcx.mount: Deactivated successfully. May 27 03:17:28.533734 systemd[1]: var-lib-kubelet-pods-d889009d\x2da1c4\x2d4536\x2db30f\x2d9af18f8da291-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 03:17:28.628292 kubelet[2697]: I0527 03:17:28.628230 2697 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d889009d-a1c4-4536-b30f-9af18f8da291-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 03:17:28.628292 kubelet[2697]: I0527 03:17:28.628272 2697 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d889009d-a1c4-4536-b30f-9af18f8da291-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 03:17:28.628292 kubelet[2697]: I0527 03:17:28.628281 2697 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wpzcx\" (UniqueName: \"kubernetes.io/projected/d889009d-a1c4-4536-b30f-9af18f8da291-kube-api-access-wpzcx\") on node \"localhost\" DevicePath \"\"" May 27 03:17:29.014954 systemd[1]: Removed slice kubepods-besteffort-podd889009d_a1c4_4536_b30f_9af18f8da291.slice - libcontainer container kubepods-besteffort-podd889009d_a1c4_4536_b30f_9af18f8da291.slice. 
May 27 03:17:29.332000 kubelet[2697]: I0527 03:17:29.331864 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/376188e3-cf9d-407d-89a7-68b60ceb1222-whisker-ca-bundle\") pod \"whisker-74749cccf6-qp8mf\" (UID: \"376188e3-cf9d-407d-89a7-68b60ceb1222\") " pod="calico-system/whisker-74749cccf6-qp8mf" May 27 03:17:29.332000 kubelet[2697]: I0527 03:17:29.331911 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/376188e3-cf9d-407d-89a7-68b60ceb1222-whisker-backend-key-pair\") pod \"whisker-74749cccf6-qp8mf\" (UID: \"376188e3-cf9d-407d-89a7-68b60ceb1222\") " pod="calico-system/whisker-74749cccf6-qp8mf" May 27 03:17:29.332226 systemd[1]: Created slice kubepods-besteffort-pod376188e3_cf9d_407d_89a7_68b60ceb1222.slice - libcontainer container kubepods-besteffort-pod376188e3_cf9d_407d_89a7_68b60ceb1222.slice. 
May 27 03:17:29.421889 containerd[1563]: time="2025-05-27T03:17:29.421831711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qdwt9,Uid:55c9e866-18db-4ca8-a823-aa7f4c344902,Namespace:calico-system,Attempt:0,}" May 27 03:17:29.422578 containerd[1563]: time="2025-05-27T03:17:29.422480769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77d9d7cb-xz6g4,Uid:bc63af11-46c2-437f-be08-defcd96e797a,Namespace:calico-apiserver,Attempt:0,}" May 27 03:17:29.424244 kubelet[2697]: I0527 03:17:29.424214 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d889009d-a1c4-4536-b30f-9af18f8da291" path="/var/lib/kubelet/pods/d889009d-a1c4-4536-b30f-9af18f8da291/volumes" May 27 03:17:29.432620 kubelet[2697]: I0527 03:17:29.432586 2697 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfwwg\" (UniqueName: \"kubernetes.io/projected/376188e3-cf9d-407d-89a7-68b60ceb1222-kube-api-access-pfwwg\") pod \"whisker-74749cccf6-qp8mf\" (UID: \"376188e3-cf9d-407d-89a7-68b60ceb1222\") " pod="calico-system/whisker-74749cccf6-qp8mf" May 27 03:17:29.774475 containerd[1563]: time="2025-05-27T03:17:29.774432343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74749cccf6-qp8mf,Uid:376188e3-cf9d-407d-89a7-68b60ceb1222,Namespace:calico-system,Attempt:0,}" May 27 03:17:29.779226 systemd-networkd[1481]: calic9194da7be6: Link UP May 27 03:17:29.779641 systemd-networkd[1481]: calic9194da7be6: Gained carrier May 27 03:17:29.841379 containerd[1563]: 2025-05-27 03:17:28.734 [INFO][3950] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:17:29.841379 containerd[1563]: 2025-05-27 03:17:28.912 [INFO][3950] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0 calico-kube-controllers-7f7d746588- calico-system 
1f29b750-f6dd-477a-b679-fd6d52c678b0 815 0 2025-05-27 03:17:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f7d746588 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f7d746588-2wvg2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic9194da7be6 [] [] }} ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Namespace="calico-system" Pod="calico-kube-controllers-7f7d746588-2wvg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-" May 27 03:17:29.841379 containerd[1563]: 2025-05-27 03:17:28.912 [INFO][3950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Namespace="calico-system" Pod="calico-kube-controllers-7f7d746588-2wvg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0" May 27 03:17:29.841379 containerd[1563]: 2025-05-27 03:17:29.068 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" HandleID="k8s-pod-network.e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Workload="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0" May 27 03:17:29.841676 containerd[1563]: 2025-05-27 03:17:29.069 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" HandleID="k8s-pod-network.e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Workload="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138ec0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f7d746588-2wvg2", "timestamp":"2025-05-27 03:17:29.068404777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:29.841676 containerd[1563]: 2025-05-27 03:17:29.069 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:29.841676 containerd[1563]: 2025-05-27 03:17:29.069 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:17:29.841676 containerd[1563]: 2025-05-27 03:17:29.069 [INFO][3966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:29.841676 containerd[1563]: 2025-05-27 03:17:29.103 [INFO][3966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" host="localhost" May 27 03:17:29.841676 containerd[1563]: 2025-05-27 03:17:29.231 [INFO][3966] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:29.841676 containerd[1563]: 2025-05-27 03:17:29.322 [INFO][3966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:29.841676 containerd[1563]: 2025-05-27 03:17:29.377 [INFO][3966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:29.841676 containerd[1563]: 2025-05-27 03:17:29.384 [INFO][3966] ipam/ipam.go 208: Affinity has not been confirmed - attempt to confirm it cidr=192.168.88.128/26 host="localhost" May 27 03:17:29.841676 containerd[1563]: 2025-05-27 03:17:29.392 [INFO][3966] ipam/ipam.go 218: Writing block to get a new revision cidr=192.168.88.128/26 host="localhost" May 27 03:17:29.841994 containerd[1563]: 2025-05-27 03:17:29.441 [INFO][3966] ipam/ipam.go 226: 
Attempting to confirm affinity cidr=192.168.88.128/26 host="localhost" May 27 03:17:29.841994 containerd[1563]: 2025-05-27 03:17:29.447 [ERROR][3966] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(localhost-192-168-88-128-26) Name="localhost-192-168-88-128-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"confirmed", Node:"localhost", Type:"host", CIDR:"192.168.88.128/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "localhost-192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again May 27 03:17:29.841994 containerd[1563]: 2025-05-27 03:17:29.447 [INFO][3966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:29.841994 containerd[1563]: 2025-05-27 03:17:29.572 [INFO][3966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:29.841994 containerd[1563]: 2025-05-27 03:17:29.576 [INFO][3966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:29.841994 containerd[1563]: 2025-05-27 03:17:29.576 [INFO][3966] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" host="localhost" May 27 03:17:29.841994 
containerd[1563]: 2025-05-27 03:17:29.580 [INFO][3966] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205 May 27 03:17:29.842337 containerd[1563]: 2025-05-27 03:17:29.597 [INFO][3966] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" host="localhost" May 27 03:17:29.842337 containerd[1563]: 2025-05-27 03:17:29.627 [INFO][3966] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" host="localhost" May 27 03:17:29.842337 containerd[1563]: 2025-05-27 03:17:29.627 [INFO][3966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" host="localhost" May 27 03:17:29.842337 containerd[1563]: 2025-05-27 03:17:29.627 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:17:29.842337 containerd[1563]: 2025-05-27 03:17:29.627 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" HandleID="k8s-pod-network.e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Workload="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0" May 27 03:17:29.842489 containerd[1563]: 2025-05-27 03:17:29.631 [INFO][3950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Namespace="calico-system" Pod="calico-kube-controllers-7f7d746588-2wvg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0", GenerateName:"calico-kube-controllers-7f7d746588-", Namespace:"calico-system", SelfLink:"", UID:"1f29b750-f6dd-477a-b679-fd6d52c678b0", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7d746588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f7d746588-2wvg2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.128/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic9194da7be6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:29.842568 containerd[1563]: 2025-05-27 03:17:29.631 [INFO][3950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.128/32] ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Namespace="calico-system" Pod="calico-kube-controllers-7f7d746588-2wvg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0" May 27 03:17:29.842568 containerd[1563]: 2025-05-27 03:17:29.631 [INFO][3950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9194da7be6 ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Namespace="calico-system" Pod="calico-kube-controllers-7f7d746588-2wvg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0" May 27 03:17:29.842568 containerd[1563]: 2025-05-27 03:17:29.783 [INFO][3950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Namespace="calico-system" Pod="calico-kube-controllers-7f7d746588-2wvg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0" May 27 03:17:29.842690 containerd[1563]: 2025-05-27 03:17:29.784 [INFO][3950] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Namespace="calico-system" Pod="calico-kube-controllers-7f7d746588-2wvg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0", GenerateName:"calico-kube-controllers-7f7d746588-", Namespace:"calico-system", SelfLink:"", UID:"1f29b750-f6dd-477a-b679-fd6d52c678b0", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7d746588", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205", Pod:"calico-kube-controllers-7f7d746588-2wvg2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic9194da7be6", MAC:"f6:40:e3:93:86:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:29.842761 containerd[1563]: 2025-05-27 03:17:29.838 [INFO][3950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" Namespace="calico-system" Pod="calico-kube-controllers-7f7d746588-2wvg2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f7d746588--2wvg2-eth0" May 27 03:17:29.982575 systemd-networkd[1481]: caliac99512faa7: Link UP May 27 03:17:29.983948 systemd-networkd[1481]: caliac99512faa7: Gained 
carrier May 27 03:17:30.261675 containerd[1563]: 2025-05-27 03:17:28.679 [INFO][3936] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:17:30.261675 containerd[1563]: 2025-05-27 03:17:28.985 [INFO][3936] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0 calico-apiserver-6d77d9d7cb- calico-apiserver 39fd061e-2caa-44dd-8e80-7921de78311a 823 0 2025-05-27 03:16:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d77d9d7cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d77d9d7cb-x6phb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac99512faa7 [] [] }} ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-x6phb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-" May 27 03:17:30.261675 containerd[1563]: 2025-05-27 03:17:28.985 [INFO][3936] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-x6phb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0" May 27 03:17:30.261675 containerd[1563]: 2025-05-27 03:17:29.069 [INFO][3968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" HandleID="k8s-pod-network.96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Workload="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0" May 27 03:17:30.261941 containerd[1563]: 2025-05-27 03:17:29.071 [INFO][3968] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" HandleID="k8s-pod-network.96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Workload="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000189ab0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d77d9d7cb-x6phb", "timestamp":"2025-05-27 03:17:29.068532527 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:30.261941 containerd[1563]: 2025-05-27 03:17:29.075 [INFO][3968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:30.261941 containerd[1563]: 2025-05-27 03:17:29.627 [INFO][3968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:17:30.261941 containerd[1563]: 2025-05-27 03:17:29.627 [INFO][3968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:30.261941 containerd[1563]: 2025-05-27 03:17:29.639 [INFO][3968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" host="localhost" May 27 03:17:30.261941 containerd[1563]: 2025-05-27 03:17:29.644 [INFO][3968] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:30.261941 containerd[1563]: 2025-05-27 03:17:29.658 [INFO][3968] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:30.261941 containerd[1563]: 2025-05-27 03:17:29.660 [INFO][3968] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:30.261941 containerd[1563]: 2025-05-27 03:17:29.662 [INFO][3968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:30.261941 containerd[1563]: 2025-05-27 03:17:29.662 [INFO][3968] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" host="localhost" May 27 03:17:30.262372 containerd[1563]: 2025-05-27 03:17:29.813 [INFO][3968] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c May 27 03:17:30.262372 containerd[1563]: 2025-05-27 03:17:29.866 [INFO][3968] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" host="localhost" May 27 03:17:30.262372 containerd[1563]: 2025-05-27 03:17:29.977 [INFO][3968] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" host="localhost" May 27 03:17:30.262372 containerd[1563]: 2025-05-27 03:17:29.977 [INFO][3968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" host="localhost" May 27 03:17:30.262372 containerd[1563]: 2025-05-27 03:17:29.977 [INFO][3968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:30.262372 containerd[1563]: 2025-05-27 03:17:29.977 [INFO][3968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" HandleID="k8s-pod-network.96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Workload="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0" May 27 03:17:30.262555 containerd[1563]: 2025-05-27 03:17:29.980 [INFO][3936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-x6phb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0", GenerateName:"calico-apiserver-6d77d9d7cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"39fd061e-2caa-44dd-8e80-7921de78311a", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d77d9d7cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d77d9d7cb-x6phb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac99512faa7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:30.262632 containerd[1563]: 2025-05-27 03:17:29.981 [INFO][3936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-x6phb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0" May 27 03:17:30.262632 containerd[1563]: 2025-05-27 03:17:29.981 [INFO][3936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac99512faa7 ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-x6phb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0" May 27 03:17:30.262632 containerd[1563]: 2025-05-27 03:17:29.983 [INFO][3936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-x6phb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0" May 27 03:17:30.262742 containerd[1563]: 2025-05-27 03:17:29.983 [INFO][3936] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-x6phb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0", GenerateName:"calico-apiserver-6d77d9d7cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"39fd061e-2caa-44dd-8e80-7921de78311a", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d77d9d7cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c", Pod:"calico-apiserver-6d77d9d7cb-x6phb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac99512faa7", MAC:"e6:6e:00:f3:a4:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:30.262816 containerd[1563]: 2025-05-27 03:17:30.257 [INFO][3936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-x6phb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--x6phb-eth0" May 27 03:17:30.425738 containerd[1563]: time="2025-05-27T03:17:30.424787766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-5ffpl,Uid:25f7f085-7b43-4984-9a62-d6377e95fb7b,Namespace:calico-system,Attempt:0,}" May 27 03:17:30.576547 systemd-networkd[1481]: cali8160660b707: Link UP May 27 03:17:30.581211 systemd-networkd[1481]: cali8160660b707: Gained carrier May 27 03:17:30.642827 containerd[1563]: 2025-05-27 03:17:30.324 [INFO][3999] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:17:30.642827 containerd[1563]: 2025-05-27 03:17:30.340 [INFO][3999] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qdwt9-eth0 csi-node-driver- calico-system 55c9e866-18db-4ca8-a823-aa7f4c344902 708 0 2025-05-27 03:17:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qdwt9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8160660b707 [] [] }} ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" Namespace="calico-system" Pod="csi-node-driver-qdwt9" WorkloadEndpoint="localhost-k8s-csi--node--driver--qdwt9-" May 27 03:17:30.642827 containerd[1563]: 2025-05-27 03:17:30.340 [INFO][3999] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" Namespace="calico-system" Pod="csi-node-driver-qdwt9" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qdwt9-eth0" May 27 03:17:30.642827 containerd[1563]: 2025-05-27 03:17:30.477 [INFO][4039] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" HandleID="k8s-pod-network.1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" Workload="localhost-k8s-csi--node--driver--qdwt9-eth0" May 27 03:17:30.644280 containerd[1563]: 2025-05-27 03:17:30.477 [INFO][4039] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" HandleID="k8s-pod-network.1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" Workload="localhost-k8s-csi--node--driver--qdwt9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qdwt9", "timestamp":"2025-05-27 03:17:30.477163769 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:30.644280 containerd[1563]: 2025-05-27 03:17:30.478 [INFO][4039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:30.644280 containerd[1563]: 2025-05-27 03:17:30.478 [INFO][4039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:17:30.644280 containerd[1563]: 2025-05-27 03:17:30.478 [INFO][4039] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:30.644280 containerd[1563]: 2025-05-27 03:17:30.493 [INFO][4039] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" host="localhost" May 27 03:17:30.644280 containerd[1563]: 2025-05-27 03:17:30.506 [INFO][4039] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:30.644280 containerd[1563]: 2025-05-27 03:17:30.517 [INFO][4039] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:30.644280 containerd[1563]: 2025-05-27 03:17:30.520 [INFO][4039] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:30.644280 containerd[1563]: 2025-05-27 03:17:30.524 [INFO][4039] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:30.644280 containerd[1563]: 2025-05-27 03:17:30.524 [INFO][4039] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" host="localhost" May 27 03:17:30.644553 containerd[1563]: 2025-05-27 03:17:30.527 [INFO][4039] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea May 27 03:17:30.644553 containerd[1563]: 2025-05-27 03:17:30.535 [INFO][4039] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" host="localhost" May 27 03:17:30.644553 containerd[1563]: 2025-05-27 03:17:30.547 [INFO][4039] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" host="localhost" May 27 03:17:30.644553 containerd[1563]: 2025-05-27 03:17:30.547 [INFO][4039] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" host="localhost" May 27 03:17:30.644553 containerd[1563]: 2025-05-27 03:17:30.547 [INFO][4039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:30.644553 containerd[1563]: 2025-05-27 03:17:30.548 [INFO][4039] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" HandleID="k8s-pod-network.1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" Workload="localhost-k8s-csi--node--driver--qdwt9-eth0" May 27 03:17:30.644703 containerd[1563]: 2025-05-27 03:17:30.560 [INFO][3999] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" Namespace="calico-system" Pod="csi-node-driver-qdwt9" WorkloadEndpoint="localhost-k8s-csi--node--driver--qdwt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qdwt9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"55c9e866-18db-4ca8-a823-aa7f4c344902", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qdwt9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8160660b707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:30.644758 containerd[1563]: 2025-05-27 03:17:30.562 [INFO][3999] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" Namespace="calico-system" Pod="csi-node-driver-qdwt9" WorkloadEndpoint="localhost-k8s-csi--node--driver--qdwt9-eth0" May 27 03:17:30.644758 containerd[1563]: 2025-05-27 03:17:30.562 [INFO][3999] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8160660b707 ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" Namespace="calico-system" Pod="csi-node-driver-qdwt9" WorkloadEndpoint="localhost-k8s-csi--node--driver--qdwt9-eth0" May 27 03:17:30.644758 containerd[1563]: 2025-05-27 03:17:30.580 [INFO][3999] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" Namespace="calico-system" Pod="csi-node-driver-qdwt9" WorkloadEndpoint="localhost-k8s-csi--node--driver--qdwt9-eth0" May 27 03:17:30.652012 containerd[1563]: 2025-05-27 03:17:30.582 [INFO][3999] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" 
Namespace="calico-system" Pod="csi-node-driver-qdwt9" WorkloadEndpoint="localhost-k8s-csi--node--driver--qdwt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qdwt9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"55c9e866-18db-4ca8-a823-aa7f4c344902", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 17, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea", Pod:"csi-node-driver-qdwt9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8160660b707", MAC:"da:5b:f3:21:e0:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:30.652464 containerd[1563]: 2025-05-27 03:17:30.634 [INFO][3999] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" Namespace="calico-system" Pod="csi-node-driver-qdwt9" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qdwt9-eth0" May 27 03:17:30.654161 containerd[1563]: time="2025-05-27T03:17:30.654060872Z" level=info msg="connecting to shim 96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c" address="unix:///run/containerd/s/d911e7e2c533c884cf60fa349bc7b7e946ff478e5c89f2fb1811234ff8c0ab6c" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:30.675265 containerd[1563]: time="2025-05-27T03:17:30.674633365Z" level=info msg="connecting to shim e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205" address="unix:///run/containerd/s/81acc6acf86be61bcc73a3a659f594dc394d39bba290ded10332284ef9bd1495" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:30.690289 systemd[1]: Started cri-containerd-96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c.scope - libcontainer container 96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c. May 27 03:17:30.705166 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:30.841255 systemd[1]: Started cri-containerd-e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205.scope - libcontainer container e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205. 
May 27 03:17:30.860461 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:30.907531 systemd-networkd[1481]: cali80c2c2e6cd1: Link UP May 27 03:17:30.909887 systemd-networkd[1481]: cali80c2c2e6cd1: Gained carrier May 27 03:17:30.913604 containerd[1563]: time="2025-05-27T03:17:30.913561268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77d9d7cb-x6phb,Uid:39fd061e-2caa-44dd-8e80-7921de78311a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c\"" May 27 03:17:30.917687 containerd[1563]: time="2025-05-27T03:17:30.917655651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:17:30.979341 systemd-networkd[1481]: calic9194da7be6: Gained IPv6LL May 27 03:17:30.987644 containerd[1563]: time="2025-05-27T03:17:30.987578540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7d746588-2wvg2,Uid:1f29b750-f6dd-477a-b679-fd6d52c678b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205\"" May 27 03:17:30.995690 containerd[1563]: 2025-05-27 03:17:30.377 [INFO][4013] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:17:30.995690 containerd[1563]: 2025-05-27 03:17:30.413 [INFO][4013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--74749cccf6--qp8mf-eth0 whisker-74749cccf6- calico-system 376188e3-cf9d-407d-89a7-68b60ceb1222 944 0 2025-05-27 03:17:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74749cccf6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-74749cccf6-qp8mf eth0 whisker [] [] [kns.calico-system 
ksa.calico-system.whisker] cali80c2c2e6cd1 [] [] }} ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" Namespace="calico-system" Pod="whisker-74749cccf6-qp8mf" WorkloadEndpoint="localhost-k8s-whisker--74749cccf6--qp8mf-" May 27 03:17:30.995690 containerd[1563]: 2025-05-27 03:17:30.413 [INFO][4013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" Namespace="calico-system" Pod="whisker-74749cccf6-qp8mf" WorkloadEndpoint="localhost-k8s-whisker--74749cccf6--qp8mf-eth0" May 27 03:17:30.995690 containerd[1563]: 2025-05-27 03:17:30.484 [INFO][4047] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" HandleID="k8s-pod-network.ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" Workload="localhost-k8s-whisker--74749cccf6--qp8mf-eth0" May 27 03:17:30.995977 containerd[1563]: 2025-05-27 03:17:30.484 [INFO][4047] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" HandleID="k8s-pod-network.ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" Workload="localhost-k8s-whisker--74749cccf6--qp8mf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b3960), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-74749cccf6-qp8mf", "timestamp":"2025-05-27 03:17:30.484463807 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:30.995977 containerd[1563]: 2025-05-27 03:17:30.484 [INFO][4047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 03:17:30.995977 containerd[1563]: 2025-05-27 03:17:30.547 [INFO][4047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:17:30.995977 containerd[1563]: 2025-05-27 03:17:30.548 [INFO][4047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:30.995977 containerd[1563]: 2025-05-27 03:17:30.604 [INFO][4047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" host="localhost" May 27 03:17:30.995977 containerd[1563]: 2025-05-27 03:17:30.649 [INFO][4047] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:30.995977 containerd[1563]: 2025-05-27 03:17:30.661 [INFO][4047] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:30.995977 containerd[1563]: 2025-05-27 03:17:30.674 [INFO][4047] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:30.995977 containerd[1563]: 2025-05-27 03:17:30.684 [INFO][4047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:30.995977 containerd[1563]: 2025-05-27 03:17:30.685 [INFO][4047] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" host="localhost" May 27 03:17:30.996530 containerd[1563]: 2025-05-27 03:17:30.687 [INFO][4047] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c May 27 03:17:30.996530 containerd[1563]: 2025-05-27 03:17:30.812 [INFO][4047] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" host="localhost" May 27 03:17:30.996530 containerd[1563]: 2025-05-27 03:17:30.894 [INFO][4047] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" host="localhost" May 27 03:17:30.996530 containerd[1563]: 2025-05-27 03:17:30.895 [INFO][4047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" host="localhost" May 27 03:17:30.996530 containerd[1563]: 2025-05-27 03:17:30.895 [INFO][4047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:30.996530 containerd[1563]: 2025-05-27 03:17:30.895 [INFO][4047] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" HandleID="k8s-pod-network.ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" Workload="localhost-k8s-whisker--74749cccf6--qp8mf-eth0" May 27 03:17:30.996740 containerd[1563]: 2025-05-27 03:17:30.903 [INFO][4013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" Namespace="calico-system" Pod="whisker-74749cccf6-qp8mf" WorkloadEndpoint="localhost-k8s-whisker--74749cccf6--qp8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74749cccf6--qp8mf-eth0", GenerateName:"whisker-74749cccf6-", Namespace:"calico-system", SelfLink:"", UID:"376188e3-cf9d-407d-89a7-68b60ceb1222", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74749cccf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-74749cccf6-qp8mf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali80c2c2e6cd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:30.996740 containerd[1563]: 2025-05-27 03:17:30.903 [INFO][4013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" Namespace="calico-system" Pod="whisker-74749cccf6-qp8mf" WorkloadEndpoint="localhost-k8s-whisker--74749cccf6--qp8mf-eth0" May 27 03:17:30.996818 containerd[1563]: 2025-05-27 03:17:30.903 [INFO][4013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80c2c2e6cd1 ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" Namespace="calico-system" Pod="whisker-74749cccf6-qp8mf" WorkloadEndpoint="localhost-k8s-whisker--74749cccf6--qp8mf-eth0" May 27 03:17:30.996818 containerd[1563]: 2025-05-27 03:17:30.911 [INFO][4013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" Namespace="calico-system" Pod="whisker-74749cccf6-qp8mf" WorkloadEndpoint="localhost-k8s-whisker--74749cccf6--qp8mf-eth0" May 27 03:17:30.996875 containerd[1563]: 2025-05-27 03:17:30.911 [INFO][4013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" 
Namespace="calico-system" Pod="whisker-74749cccf6-qp8mf" WorkloadEndpoint="localhost-k8s-whisker--74749cccf6--qp8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74749cccf6--qp8mf-eth0", GenerateName:"whisker-74749cccf6-", Namespace:"calico-system", SelfLink:"", UID:"376188e3-cf9d-407d-89a7-68b60ceb1222", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74749cccf6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c", Pod:"whisker-74749cccf6-qp8mf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali80c2c2e6cd1", MAC:"42:ee:c1:40:79:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:30.996957 containerd[1563]: 2025-05-27 03:17:30.992 [INFO][4013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" Namespace="calico-system" Pod="whisker-74749cccf6-qp8mf" WorkloadEndpoint="localhost-k8s-whisker--74749cccf6--qp8mf-eth0" May 27 03:17:31.043308 systemd-networkd[1481]: caliac99512faa7: Gained 
IPv6LL May 27 03:17:31.075279 systemd-networkd[1481]: cali3fb0fed2a38: Link UP May 27 03:17:31.078392 systemd-networkd[1481]: cali3fb0fed2a38: Gained carrier May 27 03:17:31.079524 containerd[1563]: time="2025-05-27T03:17:31.078832150Z" level=info msg="connecting to shim 1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea" address="unix:///run/containerd/s/8e287fa801ca7c73af8fc25c63bb710ac2813a3ec9fd9e6764de37533d2e618d" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:31.105849 containerd[1563]: 2025-05-27 03:17:30.429 [INFO][4034] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:17:31.105849 containerd[1563]: 2025-05-27 03:17:30.449 [INFO][4034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0 calico-apiserver-6d77d9d7cb- calico-apiserver bc63af11-46c2-437f-be08-defcd96e797a 825 0 2025-05-27 03:16:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d77d9d7cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d77d9d7cb-xz6g4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3fb0fed2a38 [] [] }} ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-xz6g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-" May 27 03:17:31.105849 containerd[1563]: 2025-05-27 03:17:30.449 [INFO][4034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-xz6g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0" May 27 
03:17:31.105849 containerd[1563]: 2025-05-27 03:17:30.509 [INFO][4059] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" HandleID="k8s-pod-network.2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Workload="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0" May 27 03:17:31.106144 containerd[1563]: 2025-05-27 03:17:30.510 [INFO][4059] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" HandleID="k8s-pod-network.2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Workload="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4e20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d77d9d7cb-xz6g4", "timestamp":"2025-05-27 03:17:30.509730821 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:31.106144 containerd[1563]: 2025-05-27 03:17:30.510 [INFO][4059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:31.106144 containerd[1563]: 2025-05-27 03:17:30.895 [INFO][4059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:17:31.106144 containerd[1563]: 2025-05-27 03:17:30.896 [INFO][4059] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:31.106144 containerd[1563]: 2025-05-27 03:17:30.903 [INFO][4059] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" host="localhost" May 27 03:17:31.106144 containerd[1563]: 2025-05-27 03:17:30.993 [INFO][4059] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:31.106144 containerd[1563]: 2025-05-27 03:17:31.011 [INFO][4059] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:31.106144 containerd[1563]: 2025-05-27 03:17:31.013 [INFO][4059] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:31.106144 containerd[1563]: 2025-05-27 03:17:31.016 [INFO][4059] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:31.106144 containerd[1563]: 2025-05-27 03:17:31.016 [INFO][4059] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" host="localhost" May 27 03:17:31.106461 containerd[1563]: 2025-05-27 03:17:31.017 [INFO][4059] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5 May 27 03:17:31.106461 containerd[1563]: 2025-05-27 03:17:31.048 [INFO][4059] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" host="localhost" May 27 03:17:31.106461 containerd[1563]: 2025-05-27 03:17:31.053 [INFO][4059] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" host="localhost" May 27 03:17:31.106461 containerd[1563]: 2025-05-27 03:17:31.053 [INFO][4059] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" host="localhost" May 27 03:17:31.106461 containerd[1563]: 2025-05-27 03:17:31.054 [INFO][4059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:31.106461 containerd[1563]: 2025-05-27 03:17:31.054 [INFO][4059] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" HandleID="k8s-pod-network.2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Workload="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0" May 27 03:17:31.106658 containerd[1563]: 2025-05-27 03:17:31.060 [INFO][4034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-xz6g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0", GenerateName:"calico-apiserver-6d77d9d7cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc63af11-46c2-437f-be08-defcd96e797a", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d77d9d7cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d77d9d7cb-xz6g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fb0fed2a38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:31.106738 containerd[1563]: 2025-05-27 03:17:31.060 [INFO][4034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-xz6g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0" May 27 03:17:31.106738 containerd[1563]: 2025-05-27 03:17:31.060 [INFO][4034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fb0fed2a38 ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-xz6g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0" May 27 03:17:31.106738 containerd[1563]: 2025-05-27 03:17:31.080 [INFO][4034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-xz6g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0" May 27 03:17:31.106832 containerd[1563]: 2025-05-27 03:17:31.081 [INFO][4034] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-xz6g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0", GenerateName:"calico-apiserver-6d77d9d7cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc63af11-46c2-437f-be08-defcd96e797a", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d77d9d7cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5", Pod:"calico-apiserver-6d77d9d7cb-xz6g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fb0fed2a38", MAC:"4e:07:9f:af:e4:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:31.106903 containerd[1563]: 2025-05-27 03:17:31.100 [INFO][4034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" Namespace="calico-apiserver" Pod="calico-apiserver-6d77d9d7cb-xz6g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d77d9d7cb--xz6g4-eth0" May 27 03:17:31.126553 systemd[1]: Started cri-containerd-1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea.scope - libcontainer container 1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea. May 27 03:17:31.141606 containerd[1563]: time="2025-05-27T03:17:31.141486114Z" level=info msg="connecting to shim ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c" address="unix:///run/containerd/s/700f614c737ae5ea0bceb2f4bdf494d1ac31c25967a13fa31be7fe57a320ac60" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:31.155179 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:31.189529 systemd[1]: Started cri-containerd-ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c.scope - libcontainer container ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c. 
May 27 03:17:31.190532 containerd[1563]: time="2025-05-27T03:17:31.190482260Z" level=info msg="connecting to shim 2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5" address="unix:///run/containerd/s/90f78ad2c227974e8ecd549870d69d66d4775c9f07ad9885ea4fce6f70dc5c85" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:31.191928 containerd[1563]: time="2025-05-27T03:17:31.191816013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qdwt9,Uid:55c9e866-18db-4ca8-a823-aa7f4c344902,Namespace:calico-system,Attempt:0,} returns sandbox id \"1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea\"" May 27 03:17:31.194775 systemd-networkd[1481]: caliebed51f85bd: Link UP May 27 03:17:31.195070 systemd-networkd[1481]: caliebed51f85bd: Gained carrier May 27 03:17:31.220740 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:31.230382 systemd[1]: Started cri-containerd-2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5.scope - libcontainer container 2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5. 
May 27 03:17:31.235987 containerd[1563]: 2025-05-27 03:17:30.532 [INFO][4064] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 03:17:31.235987 containerd[1563]: 2025-05-27 03:17:30.555 [INFO][4064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0 goldmane-78d55f7ddc- calico-system 25f7f085-7b43-4984-9a62-d6377e95fb7b 826 0 2025-05-27 03:16:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-5ffpl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliebed51f85bd [] [] }} ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5ffpl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5ffpl-" May 27 03:17:31.235987 containerd[1563]: 2025-05-27 03:17:30.555 [INFO][4064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5ffpl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0" May 27 03:17:31.235987 containerd[1563]: 2025-05-27 03:17:30.603 [INFO][4099] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" HandleID="k8s-pod-network.4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Workload="localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0" May 27 03:17:31.236389 containerd[1563]: 2025-05-27 03:17:30.603 [INFO][4099] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" 
HandleID="k8s-pod-network.4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Workload="localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b0b50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-5ffpl", "timestamp":"2025-05-27 03:17:30.603100987 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:31.236389 containerd[1563]: 2025-05-27 03:17:30.603 [INFO][4099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:31.236389 containerd[1563]: 2025-05-27 03:17:31.054 [INFO][4099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:17:31.236389 containerd[1563]: 2025-05-27 03:17:31.054 [INFO][4099] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:31.236389 containerd[1563]: 2025-05-27 03:17:31.072 [INFO][4099] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" host="localhost" May 27 03:17:31.236389 containerd[1563]: 2025-05-27 03:17:31.101 [INFO][4099] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:31.236389 containerd[1563]: 2025-05-27 03:17:31.114 [INFO][4099] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:31.236389 containerd[1563]: 2025-05-27 03:17:31.120 [INFO][4099] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:31.236389 containerd[1563]: 2025-05-27 03:17:31.130 [INFO][4099] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:31.236389 containerd[1563]: 2025-05-27 03:17:31.131 
[INFO][4099] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" host="localhost" May 27 03:17:31.236707 containerd[1563]: 2025-05-27 03:17:31.138 [INFO][4099] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39 May 27 03:17:31.236707 containerd[1563]: 2025-05-27 03:17:31.150 [INFO][4099] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" host="localhost" May 27 03:17:31.236707 containerd[1563]: 2025-05-27 03:17:31.166 [INFO][4099] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" host="localhost" May 27 03:17:31.236707 containerd[1563]: 2025-05-27 03:17:31.166 [INFO][4099] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" host="localhost" May 27 03:17:31.236707 containerd[1563]: 2025-05-27 03:17:31.166 [INFO][4099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 03:17:31.236707 containerd[1563]: 2025-05-27 03:17:31.167 [INFO][4099] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" HandleID="k8s-pod-network.4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Workload="localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0" May 27 03:17:31.236877 containerd[1563]: 2025-05-27 03:17:31.185 [INFO][4064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5ffpl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"25f7f085-7b43-4984-9a62-d6377e95fb7b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-5ffpl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliebed51f85bd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:31.236877 containerd[1563]: 2025-05-27 03:17:31.186 [INFO][4064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5ffpl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0" May 27 03:17:31.237360 containerd[1563]: 2025-05-27 03:17:31.186 [INFO][4064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebed51f85bd ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5ffpl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0" May 27 03:17:31.237360 containerd[1563]: 2025-05-27 03:17:31.205 [INFO][4064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5ffpl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0" May 27 03:17:31.237419 containerd[1563]: 2025-05-27 03:17:31.209 [INFO][4064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5ffpl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"25f7f085-7b43-4984-9a62-d6377e95fb7b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 59, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39", Pod:"goldmane-78d55f7ddc-5ffpl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliebed51f85bd", MAC:"26:be:e3:87:e8:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:31.237488 containerd[1563]: 2025-05-27 03:17:31.228 [INFO][4064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5ffpl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5ffpl-eth0" May 27 03:17:31.255442 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:31.277173 containerd[1563]: time="2025-05-27T03:17:31.277056679Z" level=info msg="connecting to shim 4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39" address="unix:///run/containerd/s/0e3beac76d77395fb3d0a45c04b479b78cff7ccb2c2380682233e9528f4f9e59" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:31.312403 containerd[1563]: time="2025-05-27T03:17:31.312305520Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-74749cccf6-qp8mf,Uid:376188e3-cf9d-407d-89a7-68b60ceb1222,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca5e687fa36b9043446e5ff34bf4957f08d57337b474776c7e6aa6ca2f07283c\"" May 27 03:17:31.314640 systemd[1]: Started cri-containerd-4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39.scope - libcontainer container 4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39. May 27 03:17:31.329390 containerd[1563]: time="2025-05-27T03:17:31.329236079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d77d9d7cb-xz6g4,Uid:bc63af11-46c2-437f-be08-defcd96e797a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5\"" May 27 03:17:31.335231 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:31.377062 containerd[1563]: time="2025-05-27T03:17:31.376910014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-5ffpl,Uid:25f7f085-7b43-4984-9a62-d6377e95fb7b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a055cce540df993c83d441da41dc1b1bc566a6822a2716bc15326f5535dec39\"" May 27 03:17:31.746285 systemd-networkd[1481]: cali8160660b707: Gained IPv6LL May 27 03:17:32.158124 systemd-networkd[1481]: vxlan.calico: Link UP May 27 03:17:32.158937 systemd-networkd[1481]: vxlan.calico: Gained carrier May 27 03:17:32.324077 systemd-networkd[1481]: cali3fb0fed2a38: Gained IPv6LL May 27 03:17:32.514390 systemd-networkd[1481]: caliebed51f85bd: Gained IPv6LL May 27 03:17:32.899912 systemd-networkd[1481]: cali80c2c2e6cd1: Gained IPv6LL May 27 03:17:33.098832 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:52130.service - OpenSSH per-connection server daemon (10.0.0.1:52130). 
May 27 03:17:33.176597 sshd[4615]: Accepted publickey for core from 10.0.0.1 port 52130 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:33.178505 sshd-session[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:33.183688 systemd-logind[1547]: New session 9 of user core. May 27 03:17:33.189397 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:17:33.384342 sshd[4617]: Connection closed by 10.0.0.1 port 52130 May 27 03:17:33.386534 sshd-session[4615]: pam_unix(sshd:session): session closed for user core May 27 03:17:33.394932 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:52130.service: Deactivated successfully. May 27 03:17:33.398192 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:17:33.400491 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit. May 27 03:17:33.402602 systemd-logind[1547]: Removed session 9. May 27 03:17:34.005973 containerd[1563]: time="2025-05-27T03:17:34.005880587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:34.021575 containerd[1563]: time="2025-05-27T03:17:34.021488181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 27 03:17:34.075792 containerd[1563]: time="2025-05-27T03:17:34.075719591Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:34.106057 containerd[1563]: time="2025-05-27T03:17:34.105982851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:34.106907 containerd[1563]: time="2025-05-27T03:17:34.106851712Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.189159251s" May 27 03:17:34.106907 containerd[1563]: time="2025-05-27T03:17:34.106893861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:17:34.108355 containerd[1563]: time="2025-05-27T03:17:34.108123578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 03:17:34.110072 containerd[1563]: time="2025-05-27T03:17:34.109990199Z" level=info msg="CreateContainer within sandbox \"96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:17:34.115413 systemd-networkd[1481]: vxlan.calico: Gained IPv6LL May 27 03:17:34.136362 containerd[1563]: time="2025-05-27T03:17:34.136294452Z" level=info msg="Container 11d81134a45608016e8ed57261ab2b0ca0d65493366933f41c44d0e2cf74f7fb: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:34.153108 containerd[1563]: time="2025-05-27T03:17:34.153027077Z" level=info msg="CreateContainer within sandbox \"96b80d5c6d58d73a0c87c075f225ff8946a3a4d53d7d138b9c2fc6e86e38c38c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"11d81134a45608016e8ed57261ab2b0ca0d65493366933f41c44d0e2cf74f7fb\"" May 27 03:17:34.153828 containerd[1563]: time="2025-05-27T03:17:34.153765792Z" level=info msg="StartContainer for \"11d81134a45608016e8ed57261ab2b0ca0d65493366933f41c44d0e2cf74f7fb\"" May 27 03:17:34.156952 containerd[1563]: time="2025-05-27T03:17:34.156831625Z" level=info msg="connecting to shim 
11d81134a45608016e8ed57261ab2b0ca0d65493366933f41c44d0e2cf74f7fb" address="unix:///run/containerd/s/d911e7e2c533c884cf60fa349bc7b7e946ff478e5c89f2fb1811234ff8c0ab6c" protocol=ttrpc version=3 May 27 03:17:34.225327 systemd[1]: Started cri-containerd-11d81134a45608016e8ed57261ab2b0ca0d65493366933f41c44d0e2cf74f7fb.scope - libcontainer container 11d81134a45608016e8ed57261ab2b0ca0d65493366933f41c44d0e2cf74f7fb. May 27 03:17:34.291491 containerd[1563]: time="2025-05-27T03:17:34.291445900Z" level=info msg="StartContainer for \"11d81134a45608016e8ed57261ab2b0ca0d65493366933f41c44d0e2cf74f7fb\" returns successfully" May 27 03:17:35.208108 kubelet[2697]: I0527 03:17:35.207975 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d77d9d7cb-x6phb" podStartSLOduration=35.017318862 podStartE2EDuration="38.207953481s" podCreationTimestamp="2025-05-27 03:16:57 +0000 UTC" firstStartedPulling="2025-05-27 03:17:30.917327064 +0000 UTC m=+49.600089071" lastFinishedPulling="2025-05-27 03:17:34.107961673 +0000 UTC m=+52.790723690" observedRunningTime="2025-05-27 03:17:35.206749251 +0000 UTC m=+53.889511278" watchObservedRunningTime="2025-05-27 03:17:35.207953481 +0000 UTC m=+53.890715488" May 27 03:17:36.039122 kubelet[2697]: I0527 03:17:36.038258 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:17:37.186527 containerd[1563]: time="2025-05-27T03:17:37.186459745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:37.188735 containerd[1563]: time="2025-05-27T03:17:37.188486150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 27 03:17:37.190633 containerd[1563]: time="2025-05-27T03:17:37.190590284Z" level=info msg="ImageCreate event 
name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:37.193927 containerd[1563]: time="2025-05-27T03:17:37.193867945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:37.194581 containerd[1563]: time="2025-05-27T03:17:37.194501478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 3.086348746s" May 27 03:17:37.194581 containerd[1563]: time="2025-05-27T03:17:37.194546686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 27 03:17:37.196008 containerd[1563]: time="2025-05-27T03:17:37.195960517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 03:17:37.209436 containerd[1563]: time="2025-05-27T03:17:37.209391951Z" level=info msg="CreateContainer within sandbox \"e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 03:17:37.229640 containerd[1563]: time="2025-05-27T03:17:37.229524700Z" level=info msg="Container 5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:37.245821 containerd[1563]: time="2025-05-27T03:17:37.245740081Z" level=info msg="CreateContainer within sandbox \"e4fb3fd350e1dc5e0a589bb99d85826b2ba0bc646df89fe22e4eb8f8a67a0205\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5\"" May 27 03:17:37.246468 containerd[1563]: time="2025-05-27T03:17:37.246442608Z" level=info msg="StartContainer for \"5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5\"" May 27 03:17:37.247836 containerd[1563]: time="2025-05-27T03:17:37.247762537Z" level=info msg="connecting to shim 5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5" address="unix:///run/containerd/s/81acc6acf86be61bcc73a3a659f594dc394d39bba290ded10332284ef9bd1495" protocol=ttrpc version=3 May 27 03:17:37.275329 systemd[1]: Started cri-containerd-5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5.scope - libcontainer container 5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5. May 27 03:17:37.335263 containerd[1563]: time="2025-05-27T03:17:37.335203578Z" level=info msg="StartContainer for \"5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5\" returns successfully" May 27 03:17:38.160926 containerd[1563]: time="2025-05-27T03:17:38.160881281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5\" id:\"928ce9d29eb0d79e394fdee8c19bc7c50049186579b855b7086e848cb97b6892\" pid:4744 exited_at:{seconds:1748315858 nanos:160571152}" May 27 03:17:38.269289 kubelet[2697]: I0527 03:17:38.269040 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f7d746588-2wvg2" podStartSLOduration=32.066852928 podStartE2EDuration="38.269022207s" podCreationTimestamp="2025-05-27 03:17:00 +0000 UTC" firstStartedPulling="2025-05-27 03:17:30.993603876 +0000 UTC m=+49.676365883" lastFinishedPulling="2025-05-27 03:17:37.195773125 +0000 UTC m=+55.878535162" observedRunningTime="2025-05-27 03:17:38.268844725 +0000 UTC m=+56.951606742" watchObservedRunningTime="2025-05-27 
03:17:38.269022207 +0000 UTC m=+56.951784214" May 27 03:17:38.400845 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:52144.service - OpenSSH per-connection server daemon (10.0.0.1:52144). May 27 03:17:38.505022 sshd[4756]: Accepted publickey for core from 10.0.0.1 port 52144 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:38.506855 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:38.511762 systemd-logind[1547]: New session 10 of user core. May 27 03:17:38.522238 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 03:17:38.698051 sshd[4758]: Connection closed by 10.0.0.1 port 52144 May 27 03:17:38.698393 sshd-session[4756]: pam_unix(sshd:session): session closed for user core May 27 03:17:38.703313 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:52144.service: Deactivated successfully. May 27 03:17:38.705839 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:17:38.706875 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit. May 27 03:17:38.709029 systemd-logind[1547]: Removed session 10. 
May 27 03:17:39.420638 kubelet[2697]: E0527 03:17:39.420568 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:39.421147 containerd[1563]: time="2025-05-27T03:17:39.421013477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fprc,Uid:70b90810-5010-4e79-a12b-6628d0a8bf38,Namespace:kube-system,Attempt:0,}" May 27 03:17:39.864179 systemd-networkd[1481]: calif46c222aace: Link UP May 27 03:17:39.864904 systemd-networkd[1481]: calif46c222aace: Gained carrier May 27 03:17:39.923174 containerd[1563]: 2025-05-27 03:17:39.734 [INFO][4773] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--2fprc-eth0 coredns-668d6bf9bc- kube-system 70b90810-5010-4e79-a12b-6628d0a8bf38 819 0 2025-05-27 03:16:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2fprc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif46c222aace [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2fprc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2fprc-" May 27 03:17:39.923174 containerd[1563]: 2025-05-27 03:17:39.734 [INFO][4773] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2fprc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2fprc-eth0" May 27 03:17:39.923174 containerd[1563]: 2025-05-27 03:17:39.777 [INFO][4788] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" HandleID="k8s-pod-network.a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Workload="localhost-k8s-coredns--668d6bf9bc--2fprc-eth0" May 27 03:17:39.923517 containerd[1563]: 2025-05-27 03:17:39.777 [INFO][4788] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" HandleID="k8s-pod-network.a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Workload="localhost-k8s-coredns--668d6bf9bc--2fprc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003285c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2fprc", "timestamp":"2025-05-27 03:17:39.777157517 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:39.923517 containerd[1563]: 2025-05-27 03:17:39.777 [INFO][4788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 03:17:39.923517 containerd[1563]: 2025-05-27 03:17:39.777 [INFO][4788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 03:17:39.923517 containerd[1563]: 2025-05-27 03:17:39.777 [INFO][4788] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:39.923517 containerd[1563]: 2025-05-27 03:17:39.796 [INFO][4788] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" host="localhost" May 27 03:17:39.923517 containerd[1563]: 2025-05-27 03:17:39.804 [INFO][4788] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:39.923517 containerd[1563]: 2025-05-27 03:17:39.809 [INFO][4788] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:39.923517 containerd[1563]: 2025-05-27 03:17:39.811 [INFO][4788] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:39.923517 containerd[1563]: 2025-05-27 03:17:39.814 [INFO][4788] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:39.923517 containerd[1563]: 2025-05-27 03:17:39.814 [INFO][4788] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" host="localhost" May 27 03:17:39.923783 containerd[1563]: 2025-05-27 03:17:39.815 [INFO][4788] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e May 27 03:17:39.923783 containerd[1563]: 2025-05-27 03:17:39.831 [INFO][4788] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" host="localhost" May 27 03:17:39.923783 containerd[1563]: 2025-05-27 03:17:39.857 [INFO][4788] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" host="localhost" May 27 03:17:39.923783 containerd[1563]: 2025-05-27 03:17:39.857 [INFO][4788] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" host="localhost" May 27 03:17:39.923783 containerd[1563]: 2025-05-27 03:17:39.857 [INFO][4788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:39.923783 containerd[1563]: 2025-05-27 03:17:39.857 [INFO][4788] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" HandleID="k8s-pod-network.a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Workload="localhost-k8s-coredns--668d6bf9bc--2fprc-eth0" May 27 03:17:39.923916 containerd[1563]: 2025-05-27 03:17:39.861 [INFO][4773] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2fprc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2fprc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2fprc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"70b90810-5010-4e79-a12b-6628d0a8bf38", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2fprc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif46c222aace", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:39.924005 containerd[1563]: 2025-05-27 03:17:39.861 [INFO][4773] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2fprc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2fprc-eth0" May 27 03:17:39.924005 containerd[1563]: 2025-05-27 03:17:39.861 [INFO][4773] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif46c222aace ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2fprc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2fprc-eth0" May 27 03:17:39.924005 containerd[1563]: 2025-05-27 03:17:39.865 [INFO][4773] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2fprc" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2fprc-eth0" May 27 03:17:39.924946 containerd[1563]: 2025-05-27 03:17:39.866 [INFO][4773] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2fprc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2fprc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2fprc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"70b90810-5010-4e79-a12b-6628d0a8bf38", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e", Pod:"coredns-668d6bf9bc-2fprc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif46c222aace", MAC:"22:01:d0:e6:e9:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:39.924946 containerd[1563]: 2025-05-27 03:17:39.919 [INFO][4773] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" Namespace="kube-system" Pod="coredns-668d6bf9bc-2fprc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2fprc-eth0" May 27 03:17:40.039984 containerd[1563]: time="2025-05-27T03:17:40.039911153Z" level=info msg="connecting to shim a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e" address="unix:///run/containerd/s/c43288ac7651318a3f168481b866644ab9ed64ff71518da207ce655332d3c9f9" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:40.086630 systemd[1]: Started cri-containerd-a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e.scope - libcontainer container a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e. 
May 27 03:17:40.106690 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:40.315049 containerd[1563]: time="2025-05-27T03:17:40.314971948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2fprc,Uid:70b90810-5010-4e79-a12b-6628d0a8bf38,Namespace:kube-system,Attempt:0,} returns sandbox id \"a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e\"" May 27 03:17:40.316506 kubelet[2697]: E0527 03:17:40.316465 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:40.319509 containerd[1563]: time="2025-05-27T03:17:40.319414349Z" level=info msg="CreateContainer within sandbox \"a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:17:40.419990 kubelet[2697]: E0527 03:17:40.419936 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:40.420863 containerd[1563]: time="2025-05-27T03:17:40.420815975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gsjbd,Uid:c880cb5f-e0ce-42d6-a0d4-a4c7c967a072,Namespace:kube-system,Attempt:0,}" May 27 03:17:40.491448 containerd[1563]: time="2025-05-27T03:17:40.491167547Z" level=info msg="Container 4442b98a4869370df8efc3df6dd93147f6e0f283e6d8d49d3368b1f55688278f: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:40.528507 containerd[1563]: time="2025-05-27T03:17:40.528451996Z" level=info msg="CreateContainer within sandbox \"a07b5070ce47145ca9bffabb28fad977f0855d39ff52f55fc804e42f8e05286e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4442b98a4869370df8efc3df6dd93147f6e0f283e6d8d49d3368b1f55688278f\"" May 27 
03:17:40.531115 containerd[1563]: time="2025-05-27T03:17:40.529453976Z" level=info msg="StartContainer for \"4442b98a4869370df8efc3df6dd93147f6e0f283e6d8d49d3368b1f55688278f\"" May 27 03:17:40.532978 containerd[1563]: time="2025-05-27T03:17:40.532951126Z" level=info msg="connecting to shim 4442b98a4869370df8efc3df6dd93147f6e0f283e6d8d49d3368b1f55688278f" address="unix:///run/containerd/s/c43288ac7651318a3f168481b866644ab9ed64ff71518da207ce655332d3c9f9" protocol=ttrpc version=3 May 27 03:17:40.576419 systemd[1]: Started cri-containerd-4442b98a4869370df8efc3df6dd93147f6e0f283e6d8d49d3368b1f55688278f.scope - libcontainer container 4442b98a4869370df8efc3df6dd93147f6e0f283e6d8d49d3368b1f55688278f. May 27 03:17:40.597830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3086244889.mount: Deactivated successfully. May 27 03:17:40.614281 containerd[1563]: time="2025-05-27T03:17:40.614157464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:40.628888 systemd-networkd[1481]: calie327c0bf96d: Link UP May 27 03:17:40.630144 systemd-networkd[1481]: calie327c0bf96d: Gained carrier May 27 03:17:40.633172 containerd[1563]: time="2025-05-27T03:17:40.633105588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.493 [INFO][4861] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0 coredns-668d6bf9bc- kube-system c880cb5f-e0ce-42d6-a0d4-a4c7c967a072 822 0 2025-05-27 03:16:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-gsjbd eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] calie327c0bf96d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" Namespace="kube-system" Pod="coredns-668d6bf9bc-gsjbd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gsjbd-" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.493 [INFO][4861] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" Namespace="kube-system" Pod="coredns-668d6bf9bc-gsjbd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.540 [INFO][4876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" HandleID="k8s-pod-network.d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" Workload="localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.541 [INFO][4876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" HandleID="k8s-pod-network.d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" Workload="localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033efa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-gsjbd", "timestamp":"2025-05-27 03:17:40.540801974 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.541 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.541 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.541 [INFO][4876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.549 [INFO][4876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" host="localhost" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.557 [INFO][4876] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.566 [INFO][4876] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.569 [INFO][4876] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.572 [INFO][4876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.572 [INFO][4876] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" host="localhost" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.574 [INFO][4876] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406 May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.586 [INFO][4876] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" host="localhost" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.619 [INFO][4876] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" host="localhost" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.619 [INFO][4876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" host="localhost" May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.619 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 03:17:40.667108 containerd[1563]: 2025-05-27 03:17:40.619 [INFO][4876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" HandleID="k8s-pod-network.d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" Workload="localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0" May 27 03:17:40.667848 containerd[1563]: 2025-05-27 03:17:40.625 [INFO][4861] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" Namespace="kube-system" Pod="coredns-668d6bf9bc-gsjbd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c880cb5f-e0ce-42d6-a0d4-a4c7c967a072", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-gsjbd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie327c0bf96d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:40.667848 containerd[1563]: 2025-05-27 03:17:40.626 [INFO][4861] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" Namespace="kube-system" Pod="coredns-668d6bf9bc-gsjbd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0" May 27 03:17:40.667848 containerd[1563]: 2025-05-27 03:17:40.626 [INFO][4861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie327c0bf96d ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" Namespace="kube-system" Pod="coredns-668d6bf9bc-gsjbd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0" May 27 03:17:40.667848 containerd[1563]: 2025-05-27 03:17:40.630 [INFO][4861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-gsjbd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0" May 27 03:17:40.667848 containerd[1563]: 2025-05-27 03:17:40.632 [INFO][4861] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" Namespace="kube-system" Pod="coredns-668d6bf9bc-gsjbd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c880cb5f-e0ce-42d6-a0d4-a4c7c967a072", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 3, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406", Pod:"coredns-668d6bf9bc-gsjbd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie327c0bf96d", MAC:"ee:c1:38:3f:35:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 03:17:40.667848 containerd[1563]: 2025-05-27 03:17:40.662 [INFO][4861] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" Namespace="kube-system" Pod="coredns-668d6bf9bc-gsjbd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gsjbd-eth0" May 27 03:17:40.671107 containerd[1563]: time="2025-05-27T03:17:40.671013839Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:40.679243 containerd[1563]: time="2025-05-27T03:17:40.679179896Z" level=info msg="StartContainer for \"4442b98a4869370df8efc3df6dd93147f6e0f283e6d8d49d3368b1f55688278f\" returns successfully" May 27 03:17:40.701672 containerd[1563]: time="2025-05-27T03:17:40.701510447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:40.702569 containerd[1563]: time="2025-05-27T03:17:40.702526325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 3.50651461s" May 27 03:17:40.702569 containerd[1563]: time="2025-05-27T03:17:40.702559099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image 
reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 27 03:17:40.704322 containerd[1563]: time="2025-05-27T03:17:40.704282590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 03:17:40.705206 containerd[1563]: time="2025-05-27T03:17:40.705139010Z" level=info msg="CreateContainer within sandbox \"1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 03:17:41.019485 containerd[1563]: time="2025-05-27T03:17:41.019417135Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:17:41.053020 kubelet[2697]: E0527 03:17:41.052985 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:41.299824 containerd[1563]: time="2025-05-27T03:17:41.299656713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 03:17:41.307055 containerd[1563]: time="2025-05-27T03:17:41.306960958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:17:41.328413 containerd[1563]: time="2025-05-27T03:17:41.307815133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 03:17:41.328503 kubelet[2697]: E0527 
03:17:41.307348 2697 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:17:41.328503 kubelet[2697]: E0527 03:17:41.307421 2697 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 03:17:41.328637 kubelet[2697]: E0527 03:17:41.308575 2697 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed13e36f64f442f99b436eb56f8c067c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pfwwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74749cccf6-qp8mf_calico-system(376188e3-cf9d-407d-89a7-68b60ceb1222): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:17:41.380190 kubelet[2697]: I0527 03:17:41.380141 
2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:17:41.495757 kubelet[2697]: I0527 03:17:41.495687 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2fprc" podStartSLOduration=54.495662757 podStartE2EDuration="54.495662757s" podCreationTimestamp="2025-05-27 03:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:17:41.494068156 +0000 UTC m=+60.176830173" watchObservedRunningTime="2025-05-27 03:17:41.495662757 +0000 UTC m=+60.178424764" May 27 03:17:41.628432 containerd[1563]: time="2025-05-27T03:17:41.628298945Z" level=info msg="Container 21140e5aec09cd5aef06a7c8cf31a44b9f91e0525ad86e66fa9766f85c10af3e: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:41.666430 systemd-networkd[1481]: calif46c222aace: Gained IPv6LL May 27 03:17:42.055401 kubelet[2697]: E0527 03:17:42.055368 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:42.562447 systemd-networkd[1481]: calie327c0bf96d: Gained IPv6LL May 27 03:17:42.581762 containerd[1563]: time="2025-05-27T03:17:42.581707850Z" level=info msg="CreateContainer within sandbox \"1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"21140e5aec09cd5aef06a7c8cf31a44b9f91e0525ad86e66fa9766f85c10af3e\"" May 27 03:17:42.582713 containerd[1563]: time="2025-05-27T03:17:42.582681283Z" level=info msg="StartContainer for \"21140e5aec09cd5aef06a7c8cf31a44b9f91e0525ad86e66fa9766f85c10af3e\"" May 27 03:17:42.585059 containerd[1563]: time="2025-05-27T03:17:42.584937064Z" level=info msg="connecting to shim 21140e5aec09cd5aef06a7c8cf31a44b9f91e0525ad86e66fa9766f85c10af3e" 
address="unix:///run/containerd/s/8e287fa801ca7c73af8fc25c63bb710ac2813a3ec9fd9e6764de37533d2e618d" protocol=ttrpc version=3 May 27 03:17:42.585908 containerd[1563]: time="2025-05-27T03:17:42.585874298Z" level=info msg="connecting to shim d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406" address="unix:///run/containerd/s/c82da2d9223cb8688be9e46f3ec14201823c103ccbf8e4a50beea5b028a76eba" namespace=k8s.io protocol=ttrpc version=3 May 27 03:17:42.608307 systemd[1]: Started cri-containerd-21140e5aec09cd5aef06a7c8cf31a44b9f91e0525ad86e66fa9766f85c10af3e.scope - libcontainer container 21140e5aec09cd5aef06a7c8cf31a44b9f91e0525ad86e66fa9766f85c10af3e. May 27 03:17:42.645405 systemd[1]: Started cri-containerd-d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406.scope - libcontainer container d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406. May 27 03:17:42.653705 containerd[1563]: time="2025-05-27T03:17:42.653641809Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:42.663571 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 03:17:42.677116 containerd[1563]: time="2025-05-27T03:17:42.676258850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 27 03:17:42.679705 containerd[1563]: time="2025-05-27T03:17:42.679647580Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 1.371800647s" May 27 03:17:42.679705 containerd[1563]: time="2025-05-27T03:17:42.679701965Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 27 03:17:42.683111 containerd[1563]: time="2025-05-27T03:17:42.682415858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 03:17:42.686100 containerd[1563]: time="2025-05-27T03:17:42.684472936Z" level=info msg="CreateContainer within sandbox \"2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 03:17:42.735275 containerd[1563]: time="2025-05-27T03:17:42.735218253Z" level=info msg="StartContainer for \"21140e5aec09cd5aef06a7c8cf31a44b9f91e0525ad86e66fa9766f85c10af3e\" returns successfully" May 27 03:17:42.739967 containerd[1563]: time="2025-05-27T03:17:42.739897838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gsjbd,Uid:c880cb5f-e0ce-42d6-a0d4-a4c7c967a072,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406\"" May 27 03:17:42.740921 kubelet[2697]: E0527 03:17:42.740876 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:42.742878 containerd[1563]: time="2025-05-27T03:17:42.742826053Z" level=info msg="CreateContainer within sandbox \"d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:17:42.919902 containerd[1563]: time="2025-05-27T03:17:42.919785458Z" level=info msg="Container c89a8c86ed0631d392b8fb22f5e117ba36ef57dfc5eddff9f91a7bb22c65cb0c: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:42.930228 containerd[1563]: time="2025-05-27T03:17:42.930166934Z" level=info msg="Container 5a9a72debaa2024e69931821014ad8b64a4f2ef09aa55e4162f438f62154cb20: CDI devices from 
CRI Config.CDIDevices: []" May 27 03:17:42.943857 containerd[1563]: time="2025-05-27T03:17:42.943796992Z" level=info msg="CreateContainer within sandbox \"2f1cfb12abb86805b83352815d90dc0e492a5b2f0bdf7b7e3eb5ef0343fa32a5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c89a8c86ed0631d392b8fb22f5e117ba36ef57dfc5eddff9f91a7bb22c65cb0c\"" May 27 03:17:42.944786 containerd[1563]: time="2025-05-27T03:17:42.944746628Z" level=info msg="StartContainer for \"c89a8c86ed0631d392b8fb22f5e117ba36ef57dfc5eddff9f91a7bb22c65cb0c\"" May 27 03:17:42.945931 containerd[1563]: time="2025-05-27T03:17:42.945895820Z" level=info msg="connecting to shim c89a8c86ed0631d392b8fb22f5e117ba36ef57dfc5eddff9f91a7bb22c65cb0c" address="unix:///run/containerd/s/90f78ad2c227974e8ecd549870d69d66d4775c9f07ad9885ea4fce6f70dc5c85" protocol=ttrpc version=3 May 27 03:17:42.980288 systemd[1]: Started cri-containerd-c89a8c86ed0631d392b8fb22f5e117ba36ef57dfc5eddff9f91a7bb22c65cb0c.scope - libcontainer container c89a8c86ed0631d392b8fb22f5e117ba36ef57dfc5eddff9f91a7bb22c65cb0c. 
May 27 03:17:42.991446 containerd[1563]: time="2025-05-27T03:17:42.991396425Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:17:43.124049 containerd[1563]: time="2025-05-27T03:17:43.123910703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:17:43.124049 containerd[1563]: time="2025-05-27T03:17:43.123977311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 03:17:43.124379 kubelet[2697]: E0527 03:17:43.124249 2697 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:17:43.124379 kubelet[2697]: E0527 03:17:43.124313 2697 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 03:17:43.124842 kubelet[2697]: E0527 03:17:43.124636 2697 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-5ffpl_calico-system(25f7f085-7b43-4984-9a62-d6377e95fb7b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:17:43.124949 containerd[1563]: time="2025-05-27T03:17:43.124916357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 03:17:43.126370 kubelet[2697]: E0527 03:17:43.126295 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5ffpl" podUID="25f7f085-7b43-4984-9a62-d6377e95fb7b" May 27 03:17:43.158118 containerd[1563]: time="2025-05-27T03:17:43.157351096Z" level=info msg="CreateContainer within sandbox \"d9c76e23f44ae0c6d1546e6cdad9008446c7745b15697d0aae7551e9d0166406\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a9a72debaa2024e69931821014ad8b64a4f2ef09aa55e4162f438f62154cb20\"" May 27 03:17:43.159282 containerd[1563]: time="2025-05-27T03:17:43.159236842Z" level=info msg="StartContainer for \"5a9a72debaa2024e69931821014ad8b64a4f2ef09aa55e4162f438f62154cb20\"" May 27 03:17:43.159977 containerd[1563]: time="2025-05-27T03:17:43.159943301Z" level=info msg="StartContainer for \"c89a8c86ed0631d392b8fb22f5e117ba36ef57dfc5eddff9f91a7bb22c65cb0c\" returns successfully" May 27 03:17:43.160805 containerd[1563]: time="2025-05-27T03:17:43.160766022Z" level=info msg="connecting to shim 5a9a72debaa2024e69931821014ad8b64a4f2ef09aa55e4162f438f62154cb20" address="unix:///run/containerd/s/c82da2d9223cb8688be9e46f3ec14201823c103ccbf8e4a50beea5b028a76eba" protocol=ttrpc version=3 May 27 03:17:43.195722 systemd[1]: Started cri-containerd-5a9a72debaa2024e69931821014ad8b64a4f2ef09aa55e4162f438f62154cb20.scope - libcontainer container 5a9a72debaa2024e69931821014ad8b64a4f2ef09aa55e4162f438f62154cb20. 
May 27 03:17:43.384353 containerd[1563]: time="2025-05-27T03:17:43.384310912Z" level=info msg="StartContainer for \"5a9a72debaa2024e69931821014ad8b64a4f2ef09aa55e4162f438f62154cb20\" returns successfully" May 27 03:17:43.395651 containerd[1563]: time="2025-05-27T03:17:43.395596353Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 03:17:43.464457 containerd[1563]: time="2025-05-27T03:17:43.464264797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 03:17:43.464457 containerd[1563]: time="2025-05-27T03:17:43.464324953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 03:17:43.464792 kubelet[2697]: E0527 03:17:43.464723 2697 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:17:43.464880 kubelet[2697]: E0527 03:17:43.464797 2697 kuberuntime_image.go:55] "Failed to pull 
image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 03:17:43.465224 containerd[1563]: time="2025-05-27T03:17:43.465181158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 03:17:43.465284 kubelet[2697]: E0527 03:17:43.465198 2697 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pfwwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityConte
xt{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74749cccf6-qp8mf_calico-system(376188e3-cf9d-407d-89a7-68b60ceb1222): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 03:17:43.466629 kubelet[2697]: E0527 03:17:43.466582 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-74749cccf6-qp8mf" podUID="376188e3-cf9d-407d-89a7-68b60ceb1222" May 27 03:17:43.713641 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:52206.service - OpenSSH per-connection server daemon (10.0.0.1:52206). May 27 03:17:43.779485 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 52206 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U May 27 03:17:43.781212 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:17:43.785841 systemd-logind[1547]: New session 11 of user core. May 27 03:17:43.793274 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 03:17:44.180963 sshd[5091]: Connection closed by 10.0.0.1 port 52206 May 27 03:17:44.181223 sshd-session[5089]: pam_unix(sshd:session): session closed for user core May 27 03:17:44.185888 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:52206.service: Deactivated successfully. May 27 03:17:44.187542 kubelet[2697]: E0527 03:17:44.187504 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:44.189149 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:17:44.190706 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit. 
May 27 03:17:44.191542 kubelet[2697]: E0527 03:17:44.191421 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5ffpl" podUID="25f7f085-7b43-4984-9a62-d6377e95fb7b" May 27 03:17:44.191803 kubelet[2697]: E0527 03:17:44.191559 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-74749cccf6-qp8mf" 
podUID="376188e3-cf9d-407d-89a7-68b60ceb1222" May 27 03:17:44.195876 systemd-logind[1547]: Removed session 11. May 27 03:17:44.203766 kubelet[2697]: I0527 03:17:44.203688 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d77d9d7cb-xz6g4" podStartSLOduration=35.856506454 podStartE2EDuration="47.203665254s" podCreationTimestamp="2025-05-27 03:16:57 +0000 UTC" firstStartedPulling="2025-05-27 03:17:31.333568919 +0000 UTC m=+50.016330926" lastFinishedPulling="2025-05-27 03:17:42.680727729 +0000 UTC m=+61.363489726" observedRunningTime="2025-05-27 03:17:43.199745777 +0000 UTC m=+61.882507784" watchObservedRunningTime="2025-05-27 03:17:44.203665254 +0000 UTC m=+62.886427261" May 27 03:17:44.316009 kubelet[2697]: I0527 03:17:44.315876 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gsjbd" podStartSLOduration=57.315853249 podStartE2EDuration="57.315853249s" podCreationTimestamp="2025-05-27 03:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:17:44.315072728 +0000 UTC m=+62.997834755" watchObservedRunningTime="2025-05-27 03:17:44.315853249 +0000 UTC m=+62.998615256" May 27 03:17:45.190068 kubelet[2697]: E0527 03:17:45.190017 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:45.190649 kubelet[2697]: I0527 03:17:45.190193 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:17:46.192141 kubelet[2697]: E0527 03:17:46.192104 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 03:17:46.566883 containerd[1563]: time="2025-05-27T03:17:46.566785666Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:46.571475 containerd[1563]: time="2025-05-27T03:17:46.571410011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 27 03:17:46.578106 containerd[1563]: time="2025-05-27T03:17:46.577344520Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:46.582018 containerd[1563]: time="2025-05-27T03:17:46.581953876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:17:46.582920 containerd[1563]: time="2025-05-27T03:17:46.582877478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 3.117658267s" May 27 03:17:46.582970 containerd[1563]: time="2025-05-27T03:17:46.582927154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 27 03:17:46.585670 containerd[1563]: time="2025-05-27T03:17:46.585628308Z" level=info msg="CreateContainer within sandbox \"1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 03:17:46.606395 containerd[1563]: 
time="2025-05-27T03:17:46.606344616Z" level=info msg="Container 1bbcac85ac00722c149d77099d48de8212fcce1a9942519b17f6fe92c571d63f: CDI devices from CRI Config.CDIDevices: []" May 27 03:17:46.637044 containerd[1563]: time="2025-05-27T03:17:46.636289796Z" level=info msg="CreateContainer within sandbox \"1288518254be2788e320f37a6fda89caa9128496371cd949a919a415fc79c7ea\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1bbcac85ac00722c149d77099d48de8212fcce1a9942519b17f6fe92c571d63f\"" May 27 03:17:46.638137 containerd[1563]: time="2025-05-27T03:17:46.637380419Z" level=info msg="StartContainer for \"1bbcac85ac00722c149d77099d48de8212fcce1a9942519b17f6fe92c571d63f\"" May 27 03:17:46.638981 containerd[1563]: time="2025-05-27T03:17:46.638949620Z" level=info msg="connecting to shim 1bbcac85ac00722c149d77099d48de8212fcce1a9942519b17f6fe92c571d63f" address="unix:///run/containerd/s/8e287fa801ca7c73af8fc25c63bb710ac2813a3ec9fd9e6764de37533d2e618d" protocol=ttrpc version=3 May 27 03:17:46.671350 systemd[1]: Started cri-containerd-1bbcac85ac00722c149d77099d48de8212fcce1a9942519b17f6fe92c571d63f.scope - libcontainer container 1bbcac85ac00722c149d77099d48de8212fcce1a9942519b17f6fe92c571d63f. 
May 27 03:17:46.754404 containerd[1563]: time="2025-05-27T03:17:46.754347298Z" level=info msg="StartContainer for \"1bbcac85ac00722c149d77099d48de8212fcce1a9942519b17f6fe92c571d63f\" returns successfully"
May 27 03:17:46.923704 kubelet[2697]: I0527 03:17:46.923566 2697 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 27 03:17:46.923704 kubelet[2697]: I0527 03:17:46.923606 2697 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 27 03:17:47.226834 kubelet[2697]: I0527 03:17:47.226636 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qdwt9" podStartSLOduration=31.839463621 podStartE2EDuration="47.226608593s" podCreationTimestamp="2025-05-27 03:17:00 +0000 UTC" firstStartedPulling="2025-05-27 03:17:31.196803522 +0000 UTC m=+49.879565519" lastFinishedPulling="2025-05-27 03:17:46.583948484 +0000 UTC m=+65.266710491" observedRunningTime="2025-05-27 03:17:47.224966133 +0000 UTC m=+65.907728140" watchObservedRunningTime="2025-05-27 03:17:47.226608593 +0000 UTC m=+65.909370600"
May 27 03:17:49.195740 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:52220.service - OpenSSH per-connection server daemon (10.0.0.1:52220).
May 27 03:17:49.256922 sshd[5156]: Accepted publickey for core from 10.0.0.1 port 52220 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:17:49.258330 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:49.262810 systemd-logind[1547]: New session 12 of user core.
May 27 03:17:49.270215 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 03:17:49.410232 sshd[5158]: Connection closed by 10.0.0.1 port 52220
May 27 03:17:49.410664 sshd-session[5156]: pam_unix(sshd:session): session closed for user core
May 27 03:17:49.427359 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:52220.service: Deactivated successfully.
May 27 03:17:49.429321 systemd[1]: session-12.scope: Deactivated successfully.
May 27 03:17:49.430099 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit.
May 27 03:17:49.432729 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:52226.service - OpenSSH per-connection server daemon (10.0.0.1:52226).
May 27 03:17:49.433544 systemd-logind[1547]: Removed session 12.
May 27 03:17:49.485723 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 52226 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:17:49.487272 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:49.492032 systemd-logind[1547]: New session 13 of user core.
May 27 03:17:49.513237 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 03:17:49.785966 sshd[5175]: Connection closed by 10.0.0.1 port 52226
May 27 03:17:49.786307 sshd-session[5173]: pam_unix(sshd:session): session closed for user core
May 27 03:17:49.795112 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:52226.service: Deactivated successfully.
May 27 03:17:49.797225 systemd[1]: session-13.scope: Deactivated successfully.
May 27 03:17:49.797920 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit.
May 27 03:17:49.801222 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:52240.service - OpenSSH per-connection server daemon (10.0.0.1:52240).
May 27 03:17:49.801851 systemd-logind[1547]: Removed session 13.
May 27 03:17:49.852623 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 52240 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:17:49.853983 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:49.859907 systemd-logind[1547]: New session 14 of user core.
May 27 03:17:49.870218 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 03:17:50.031561 sshd[5188]: Connection closed by 10.0.0.1 port 52240
May 27 03:17:50.031930 sshd-session[5186]: pam_unix(sshd:session): session closed for user core
May 27 03:17:50.036656 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:52240.service: Deactivated successfully.
May 27 03:17:50.038959 systemd[1]: session-14.scope: Deactivated successfully.
May 27 03:17:50.039864 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit.
May 27 03:17:50.041029 systemd-logind[1547]: Removed session 14.
May 27 03:17:52.056823 kubelet[2697]: E0527 03:17:52.056784 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:17:55.044530 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:35412.service - OpenSSH per-connection server daemon (10.0.0.1:35412).
May 27 03:17:55.108036 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 35412 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:17:55.109745 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:17:55.114660 systemd-logind[1547]: New session 15 of user core.
May 27 03:17:55.123233 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 03:17:55.252201 sshd[5215]: Connection closed by 10.0.0.1 port 35412
May 27 03:17:55.252620 sshd-session[5213]: pam_unix(sshd:session): session closed for user core
May 27 03:17:55.257798 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:35412.service: Deactivated successfully.
May 27 03:17:55.259919 systemd[1]: session-15.scope: Deactivated successfully.
May 27 03:17:55.260956 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit.
May 27 03:17:55.262721 systemd-logind[1547]: Removed session 15.
May 27 03:17:56.423145 containerd[1563]: time="2025-05-27T03:17:56.422371120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:17:56.940407 containerd[1563]: time="2025-05-27T03:17:56.940333855Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:17:57.028553 containerd[1563]: time="2025-05-27T03:17:57.028465852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:17:57.028553 containerd[1563]: time="2025-05-27T03:17:57.028451154Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:17:57.028903 kubelet[2697]: E0527 03:17:57.028837 2697 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:17:57.029535 kubelet[2697]: E0527 03:17:57.028916 2697 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:17:57.029818 kubelet[2697]: E0527 03:17:57.029737 2697 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-5ffpl_calico-system(25f7f085-7b43-4984-9a62-d6377e95fb7b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:17:57.031020 kubelet[2697]: E0527 03:17:57.030952 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5ffpl" podUID="25f7f085-7b43-4984-9a62-d6377e95fb7b"
May 27 03:17:58.178262 containerd[1563]: time="2025-05-27T03:17:58.178215750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f865ac333d74402caffad820616d84daafec6faa6a4b2a5a8e801e121cfaa7fa\" id:\"f6c8dfca42f4bc23957a3f95096c541d76bd6f8eecac29e101419293a58198c9\" pid:5242 exit_status:1 exited_at:{seconds:1748315878 nanos:177814574}"
May 27 03:17:58.259712 containerd[1563]: time="2025-05-27T03:17:58.259657710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f865ac333d74402caffad820616d84daafec6faa6a4b2a5a8e801e121cfaa7fa\" id:\"0735a7087a42b575dc03d0d71a8a3d344849c71a4f864f0609692aafe046af94\" pid:5266 exit_status:1 exited_at:{seconds:1748315878 nanos:259298295}"
May 27 03:17:58.421028 containerd[1563]: time="2025-05-27T03:17:58.420983432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 03:17:59.152844 containerd[1563]: time="2025-05-27T03:17:59.152786816Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:17:59.298002 containerd[1563]: time="2025-05-27T03:17:59.297897194Z" level=error msg="PullImage
\"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:17:59.298537 containerd[1563]: time="2025-05-27T03:17:59.297971496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:17:59.298620 kubelet[2697]: E0527 03:17:59.298301 2697 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:17:59.298620 kubelet[2697]: E0527 03:17:59.298365 2697 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:17:59.298620 kubelet[2697]: E0527 03:17:59.298505 2697 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed13e36f64f442f99b436eb56f8c067c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pfwwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74749cccf6-qp8mf_calico-system(376188e3-cf9d-407d-89a7-68b60ceb1222): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:17:59.300867 containerd[1563]: time="2025-05-27T03:17:59.300833892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:17:59.570897 containerd[1563]: time="2025-05-27T03:17:59.570832436Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:17:59.594600 containerd[1563]: time="2025-05-27T03:17:59.594536236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:17:59.594600 containerd[1563]: time="2025-05-27T03:17:59.594547368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:17:59.594928 kubelet[2697]: E0527 03:17:59.594862 2697 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:17:59.595015 kubelet[2697]: E0527 03:17:59.594934 2697 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:17:59.595150 kubelet[2697]: E0527 03:17:59.595108 2697 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pfwwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74749cccf6-qp8mf_calico-system(376188e3-cf9d-407d-89a7-68b60ceb1222): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:17:59.596454 kubelet[2697]: E0527 03:17:59.596288 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-74749cccf6-qp8mf" podUID="376188e3-cf9d-407d-89a7-68b60ceb1222"
May 27 03:18:00.268500 systemd[1]: Started
sshd@15-10.0.0.59:22-10.0.0.1:35424.service - OpenSSH per-connection server daemon (10.0.0.1:35424).
May 27 03:18:00.328326 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 35424 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:00.330133 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:00.335255 systemd-logind[1547]: New session 16 of user core.
May 27 03:18:00.342335 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 03:18:00.483119 sshd[5282]: Connection closed by 10.0.0.1 port 35424
May 27 03:18:00.484544 sshd-session[5280]: pam_unix(sshd:session): session closed for user core
May 27 03:18:00.489105 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:35424.service: Deactivated successfully.
May 27 03:18:00.491139 systemd[1]: session-16.scope: Deactivated successfully.
May 27 03:18:00.492059 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit.
May 27 03:18:00.493850 systemd-logind[1547]: Removed session 16.
May 27 03:18:01.421107 kubelet[2697]: E0527 03:18:01.420674 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:18:05.500571 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:36620.service - OpenSSH per-connection server daemon (10.0.0.1:36620).
May 27 03:18:05.566134 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 36620 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:05.568232 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:05.573911 systemd-logind[1547]: New session 17 of user core.
May 27 03:18:05.580224 systemd[1]: Started session-17.scope - Session 17 of User core.
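The recurring 403 records in this log all reference the same URL-encoded token scope in the ghcr.io token URL. As an editor's aside for readers tracing these failures: the scope parameter can be made readable with Python's standard library. The URL below is copied verbatim from the log records above; nothing else is assumed.

```python
from urllib.parse import parse_qs, unquote, urlparse

# Token URL exactly as it appears in the failed-pull records above.
url = ("https://ghcr.io/token"
       "?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io")

# parse_qs percent-decodes query values, so %3A -> ':' and %2F -> '/'.
params = parse_qs(urlparse(url).query)
scope = params["scope"][0]
print(scope)  # repository:flatcar/calico/goldmane:pull
print(params["service"][0])  # ghcr.io

# The same decoding applied to the bare scope string:
print(unquote("repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull"))
```

The decoded scope shows the registry was asked for anonymous `pull` access to `flatcar/calico/goldmane`, which ghcr.io answered with 403 Forbidden.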
May 27 03:18:05.721633 sshd[5299]: Connection closed by 10.0.0.1 port 36620
May 27 03:18:05.722010 sshd-session[5297]: pam_unix(sshd:session): session closed for user core
May 27 03:18:05.727679 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:36620.service: Deactivated successfully.
May 27 03:18:05.729919 systemd[1]: session-17.scope: Deactivated successfully.
May 27 03:18:05.730771 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit.
May 27 03:18:05.732453 systemd-logind[1547]: Removed session 17.
May 27 03:18:06.420877 kubelet[2697]: E0527 03:18:06.420829 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:18:07.420863 kubelet[2697]: E0527 03:18:07.420762 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:18:08.172658 containerd[1563]: time="2025-05-27T03:18:08.172609449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5\" id:\"62427791fcb9fb1ca5e40e9dd8bc5b871da15ebc6d5b23a70514d5d69e611432\" pid:5324 exited_at:{seconds:1748315888 nanos:172162029}"
May 27 03:18:08.420420 kubelet[2697]: E0527 03:18:08.420381 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:18:08.421572 kubelet[2697]: E0527 03:18:08.421527 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5ffpl" podUID="25f7f085-7b43-4984-9a62-d6377e95fb7b"
May 27 03:18:10.745179 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:36624.service - OpenSSH per-connection server daemon (10.0.0.1:36624).
May 27 03:18:10.803899 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 36624 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:10.805571 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:10.810319 systemd-logind[1547]: New session 18 of user core.
May 27 03:18:10.820217 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 03:18:10.948616 sshd[5338]: Connection closed by 10.0.0.1 port 36624
May 27 03:18:10.948957 sshd-session[5335]: pam_unix(sshd:session): session closed for user core
May 27 03:18:10.953263 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:36624.service: Deactivated successfully.
May 27 03:18:10.955282 systemd[1]: session-18.scope: Deactivated successfully.
May 27 03:18:10.956027 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit.
May 27 03:18:10.957576 systemd-logind[1547]: Removed session 18.
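The repeated kubelet `dns.go:153` "Nameserver limits exceeded" records come from the glibc resolver honoring at most three `nameserver` entries (`MAXNS` in `<resolv.h>`), so kubelet keeps the first three from the node's resolv.conf and omits the rest, which is why the applied line reads `1.1.1.1 1.0.0.1 8.8.8.8`. A minimal sketch of that truncation; the helper name and the fourth address are illustrative, not kubelet's actual code:

```python
# glibc consults at most 3 nameservers (MAXNS in resolv.h); kubelet warns
# and drops the extras when a node's resolv.conf lists more than that.
MAX_NAMESERVERS = 3

def applied_nameservers(configured):
    """Return (applied, omitted): the first MAX_NAMESERVERS entries, and the rest."""
    return configured[:MAX_NAMESERVERS], configured[MAX_NAMESERVERS:]

# A hypothetical 4-entry resolv.conf reproducing the warning seen above.
applied, omitted = applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"])
print("applied:", " ".join(applied))   # applied: 1.1.1.1 1.0.0.1 8.8.8.8
print("omitted:", " ".join(omitted))   # omitted: 8.8.4.4
```

The applied list matches the "applied nameserver line" in the log, so the warning is cosmetic unless the omitted servers were the only reachable ones.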
May 27 03:18:14.422320 kubelet[2697]: E0527 03:18:14.422258 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-74749cccf6-qp8mf" podUID="376188e3-cf9d-407d-89a7-68b60ceb1222"
May 27 03:18:15.961476 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:54280.service - OpenSSH per-connection server daemon (10.0.0.1:54280).
May 27 03:18:16.015705 sshd[5359]: Accepted publickey for core from 10.0.0.1 port 54280 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:16.017652 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:16.023446 systemd-logind[1547]: New session 19 of user core.
May 27 03:18:16.033369 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 03:18:16.174345 sshd[5361]: Connection closed by 10.0.0.1 port 54280
May 27 03:18:16.174694 sshd-session[5359]: pam_unix(sshd:session): session closed for user core
May 27 03:18:16.186897 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:54280.service: Deactivated successfully.
May 27 03:18:16.189026 systemd[1]: session-19.scope: Deactivated successfully.
May 27 03:18:16.189955 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit.
May 27 03:18:16.193150 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:54282.service - OpenSSH per-connection server daemon (10.0.0.1:54282).
May 27 03:18:16.194198 systemd-logind[1547]: Removed session 19.
May 27 03:18:16.245495 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 54282 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:16.246856 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:16.252383 systemd-logind[1547]: New session 20 of user core.
May 27 03:18:16.262277 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 03:18:16.752851 sshd[5376]: Connection closed by 10.0.0.1 port 54282
May 27 03:18:16.753231 sshd-session[5374]: pam_unix(sshd:session): session closed for user core
May 27 03:18:16.765547 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:54282.service: Deactivated successfully.
May 27 03:18:16.767890 systemd[1]: session-20.scope: Deactivated successfully.
May 27 03:18:16.768829 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit.
May 27 03:18:16.772237 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:54288.service - OpenSSH per-connection server daemon (10.0.0.1:54288).
May 27 03:18:16.773139 systemd-logind[1547]: Removed session 20.
May 27 03:18:16.834680 sshd[5388]: Accepted publickey for core from 10.0.0.1 port 54288 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:16.836186 sshd-session[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:16.841460 systemd-logind[1547]: New session 21 of user core.
May 27 03:18:16.847255 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 03:18:18.365317 sshd[5391]: Connection closed by 10.0.0.1 port 54288
May 27 03:18:18.365793 sshd-session[5388]: pam_unix(sshd:session): session closed for user core
May 27 03:18:18.378659 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:54288.service: Deactivated successfully.
May 27 03:18:18.381588 systemd[1]: session-21.scope: Deactivated successfully.
May 27 03:18:18.382409 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit.
May 27 03:18:18.386702 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:54298.service - OpenSSH per-connection server daemon (10.0.0.1:54298).
May 27 03:18:18.387485 systemd-logind[1547]: Removed session 21.
May 27 03:18:18.444373 sshd[5427]: Accepted publickey for core from 10.0.0.1 port 54298 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:18.446255 sshd-session[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:18.451587 systemd-logind[1547]: New session 22 of user core.
May 27 03:18:18.466408 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 03:18:19.336492 sshd[5429]: Connection closed by 10.0.0.1 port 54298
May 27 03:18:19.336921 sshd-session[5427]: pam_unix(sshd:session): session closed for user core
May 27 03:18:19.349855 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:54298.service: Deactivated successfully.
May 27 03:18:19.351708 systemd[1]: session-22.scope: Deactivated successfully.
May 27 03:18:19.353250 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit.
May 27 03:18:19.356559 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:54306.service - OpenSSH per-connection server daemon (10.0.0.1:54306).
May 27 03:18:19.357517 systemd-logind[1547]: Removed session 22.
May 27 03:18:19.405150 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 54306 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:19.406386 sshd-session[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:19.410684 systemd-logind[1547]: New session 23 of user core.
May 27 03:18:19.421078 containerd[1563]: time="2025-05-27T03:18:19.421031796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 27 03:18:19.422316 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 03:18:19.744176 containerd[1563]: time="2025-05-27T03:18:19.744025438Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:18:19.792886 sshd[5444]: Connection closed by 10.0.0.1 port 54306
May 27 03:18:19.793223 sshd-session[5442]: pam_unix(sshd:session): session closed for user core
May 27 03:18:19.797616 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:54306.service: Deactivated successfully.
May 27 03:18:19.799521 systemd[1]: session-23.scope: Deactivated successfully.
May 27 03:18:19.800349 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit.
May 27 03:18:19.801608 systemd-logind[1547]: Removed session 23.
May 27 03:18:19.850350 containerd[1563]: time="2025-05-27T03:18:19.850268666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:18:19.850592 containerd[1563]: time="2025-05-27T03:18:19.850287922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 27 03:18:19.851317 kubelet[2697]: E0527 03:18:19.850644 2697 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:18:19.851317 kubelet[2697]: E0527 03:18:19.850714 2697 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 27 03:18:19.851317 kubelet[2697]: E0527 03:18:19.850876 2697 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-5ffpl_calico-system(25f7f085-7b43-4984-9a62-d6377e95fb7b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:18:19.852542 kubelet[2697]: E0527 03:18:19.852502 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5ffpl" podUID="25f7f085-7b43-4984-9a62-d6377e95fb7b"
May 27 03:18:24.815488 systemd[1]: Started sshd@23-10.0.0.59:22-10.0.0.1:57234.service - OpenSSH per-connection server daemon (10.0.0.1:57234).
May 27 03:18:24.880017 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 57234 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:24.882234 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:24.888715 systemd-logind[1547]: New session 24 of user core.
May 27 03:18:24.902371 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 03:18:25.024748 sshd[5462]: Connection closed by 10.0.0.1 port 57234
May 27 03:18:25.025129 sshd-session[5460]: pam_unix(sshd:session): session closed for user core
May 27 03:18:25.029960 systemd[1]: sshd@23-10.0.0.59:22-10.0.0.1:57234.service: Deactivated successfully.
May 27 03:18:25.032175 systemd[1]: session-24.scope: Deactivated successfully.
May 27 03:18:25.033325 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit.
May 27 03:18:25.035224 systemd-logind[1547]: Removed session 24.
May 27 03:18:28.293489 containerd[1563]: time="2025-05-27T03:18:28.293444657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f865ac333d74402caffad820616d84daafec6faa6a4b2a5a8e801e121cfaa7fa\" id:\"e8c9dcb96428e83a7d6dabd9c1c74b92b4f1dacaa9ddf2ecc16e189c6aee2f9b\" pid:5486 exited_at:{seconds:1748315908 nanos:293075138}"
May 27 03:18:29.421679 containerd[1563]: time="2025-05-27T03:18:29.421401625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 27 03:18:29.731681 containerd[1563]: time="2025-05-27T03:18:29.731519589Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:18:29.733150 containerd[1563]: time="2025-05-27T03:18:29.733035175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:18:29.733290 containerd[1563]: time="2025-05-27T03:18:29.733102312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86"
May 27 03:18:29.733450 kubelet[2697]: E0527 03:18:29.733392 2697 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:18:29.733891 kubelet[2697]: E0527 03:18:29.733466 2697 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 27 03:18:29.733891 kubelet[2697]: E0527 03:18:29.733604 2697 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed13e36f64f442f99b436eb56f8c067c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pfwwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74749cccf6-qp8mf_calico-system(376188e3-cf9d-407d-89a7-68b60ceb1222): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:18:29.735842 containerd[1563]: time="2025-05-27T03:18:29.735802128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 27 03:18:29.979360 containerd[1563]: time="2025-05-27T03:18:29.979303166Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 27 03:18:29.982240 containerd[1563]: time="2025-05-27T03:18:29.982109694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 27 03:18:29.982240 containerd[1563]: time="2025-05-27T03:18:29.982201578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 27 03:18:29.982477 kubelet[2697]: E0527 03:18:29.982403 2697 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:18:29.982660 kubelet[2697]: E0527 03:18:29.982475 2697 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 27 03:18:29.982660 kubelet[2697]: E0527 03:18:29.982623 2697 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pfwwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74749cccf6-qp8mf_calico-system(376188e3-cf9d-407d-89a7-68b60ceb1222): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 27 03:18:29.983900 kubelet[2697]: E0527 03:18:29.983862 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-74749cccf6-qp8mf" podUID="376188e3-cf9d-407d-89a7-68b60ceb1222"
May 27 03:18:30.041384 systemd[1]: Started sshd@24-10.0.0.59:22-10.0.0.1:57236.service - OpenSSH per-connection server daemon (10.0.0.1:57236).
May 27 03:18:30.090952 sshd[5501]: Accepted publickey for core from 10.0.0.1 port 57236 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:30.093530 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:30.101355 systemd-logind[1547]: New session 25 of user core.
May 27 03:18:30.113418 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 03:18:30.251643 sshd[5503]: Connection closed by 10.0.0.1 port 57236
May 27 03:18:30.251950 sshd-session[5501]: pam_unix(sshd:session): session closed for user core
May 27 03:18:30.256496 systemd[1]: sshd@24-10.0.0.59:22-10.0.0.1:57236.service: Deactivated successfully.
May 27 03:18:30.258833 systemd[1]: session-25.scope: Deactivated successfully.
May 27 03:18:30.260001 systemd-logind[1547]: Session 25 logged out. Waiting for processes to exit.
May 27 03:18:30.261520 systemd-logind[1547]: Removed session 25.
May 27 03:18:34.333058 containerd[1563]: time="2025-05-27T03:18:34.333009458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5\" id:\"11058ec246f17c2278a1de3393619e2677b4c0f4ebdc36f2ee5838f674040d99\" pid:5530 exited_at:{seconds:1748315914 nanos:332610505}"
May 27 03:18:35.279386 systemd[1]: Started sshd@25-10.0.0.59:22-10.0.0.1:36116.service - OpenSSH per-connection server daemon (10.0.0.1:36116).
May 27 03:18:35.366924 sshd[5541]: Accepted publickey for core from 10.0.0.1 port 36116 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:35.369502 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:35.375235 systemd-logind[1547]: New session 26 of user core.
May 27 03:18:35.387423 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 03:18:35.421550 kubelet[2697]: E0527 03:18:35.421398 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5ffpl" podUID="25f7f085-7b43-4984-9a62-d6377e95fb7b"
May 27 03:18:35.530868 sshd[5543]: Connection closed by 10.0.0.1 port 36116
May 27 03:18:35.532164 sshd-session[5541]: pam_unix(sshd:session): session closed for user core
May 27 03:18:35.538794 systemd[1]: sshd@25-10.0.0.59:22-10.0.0.1:36116.service: Deactivated successfully.
May 27 03:18:35.541124 systemd[1]: session-26.scope: Deactivated successfully.
May 27 03:18:35.542200 systemd-logind[1547]: Session 26 logged out. Waiting for processes to exit.
May 27 03:18:35.543630 systemd-logind[1547]: Removed session 26.
May 27 03:18:37.420778 kubelet[2697]: E0527 03:18:37.420735 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 03:18:38.103671 containerd[1563]: time="2025-05-27T03:18:38.103619773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e961ac4c6786aad50744c33dfa4f24161f5a68976f8423dc61673030b0cb1b5\" id:\"3f9b38be68c2b5c2ad4a5abd70a5f7e0f30292ffa3d10c999b3167bbb0b0bcab\" pid:5567 exited_at:{seconds:1748315918 nanos:102932195}"
May 27 03:18:40.545485 systemd[1]: Started sshd@26-10.0.0.59:22-10.0.0.1:36122.service - OpenSSH per-connection server daemon (10.0.0.1:36122).
May 27 03:18:40.599998 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 36122 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:40.602010 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:40.607377 systemd-logind[1547]: New session 27 of user core.
May 27 03:18:40.614296 systemd[1]: Started session-27.scope - Session 27 of User core.
May 27 03:18:40.787537 sshd[5580]: Connection closed by 10.0.0.1 port 36122
May 27 03:18:40.788358 sshd-session[5578]: pam_unix(sshd:session): session closed for user core
May 27 03:18:40.792468 systemd-logind[1547]: Session 27 logged out. Waiting for processes to exit.
May 27 03:18:40.794212 systemd[1]: sshd@26-10.0.0.59:22-10.0.0.1:36122.service: Deactivated successfully.
May 27 03:18:40.797469 systemd[1]: session-27.scope: Deactivated successfully.
May 27 03:18:40.800908 systemd-logind[1547]: Removed session 27.
May 27 03:18:42.421784 kubelet[2697]: E0527 03:18:42.421708 2697 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-74749cccf6-qp8mf" podUID="376188e3-cf9d-407d-89a7-68b60ceb1222"
May 27 03:18:45.808983 systemd[1]: Started sshd@27-10.0.0.59:22-10.0.0.1:42622.service - OpenSSH per-connection server daemon (10.0.0.1:42622).
May 27 03:18:45.863250 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 42622 ssh2: RSA SHA256:yrdvci6hXDWGDW7i9bmImWu+5ErcoHe0M1IyHhFSL9U
May 27 03:18:45.864885 sshd-session[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:18:45.871378 systemd-logind[1547]: New session 28 of user core.
May 27 03:18:45.881366 systemd[1]: Started session-28.scope - Session 28 of User core.
May 27 03:18:46.070655 sshd[5600]: Connection closed by 10.0.0.1 port 42622
May 27 03:18:46.071651 sshd-session[5597]: pam_unix(sshd:session): session closed for user core
May 27 03:18:46.076469 systemd[1]: sshd@27-10.0.0.59:22-10.0.0.1:42622.service: Deactivated successfully.
May 27 03:18:46.078963 systemd[1]: session-28.scope: Deactivated successfully.
May 27 03:18:46.079999 systemd-logind[1547]: Session 28 logged out. Waiting for processes to exit.
May 27 03:18:46.081743 systemd-logind[1547]: Removed session 28.