May 16 16:36:37.837287 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 14:52:24 -00 2025
May 16 16:36:37.837310 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137
May 16 16:36:37.837321 kernel: BIOS-provided physical RAM map:
May 16 16:36:37.837328 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 16:36:37.837334 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 16:36:37.837341 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 16:36:37.837348 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 16:36:37.837355 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 16:36:37.837364 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 16:36:37.837370 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 16:36:37.837377 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 16 16:36:37.837383 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 16:36:37.837390 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 16:36:37.837396 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 16:36:37.837406 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 16:36:37.837414 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 16:36:37.837420 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 16 16:36:37.837427 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 16 16:36:37.837434 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 16 16:36:37.837441 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 16 16:36:37.837448 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 16:36:37.837455 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 16:36:37.837462 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 16:36:37.837469 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 16:36:37.837476 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 16:36:37.837485 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 16:36:37.837492 kernel: NX (Execute Disable) protection: active
May 16 16:36:37.837499 kernel: APIC: Static calls initialized
May 16 16:36:37.837506 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
May 16 16:36:37.837513 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
May 16 16:36:37.837520 kernel: extended physical RAM map:
May 16 16:36:37.837527 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 16:36:37.837534 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 16:36:37.837541 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 16:36:37.837551 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 16:36:37.837558 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 16:36:37.837568 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 16:36:37.837575 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 16:36:37.837581 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
May 16 16:36:37.837589 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
May 16 16:36:37.837599 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
May 16 16:36:37.837606 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
May 16 16:36:37.837615 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
May 16 16:36:37.837623 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 16:36:37.837632 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 16:36:37.837640 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 16:36:37.837647 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 16:36:37.837654 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 16:36:37.837661 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
May 16 16:36:37.837668 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
May 16 16:36:37.837676 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
May 16 16:36:37.837685 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
May 16 16:36:37.837693 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 16:36:37.837700 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 16:36:37.837707 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 16:36:37.837714 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 16:36:37.837721 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 16:36:37.837729 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 16:36:37.837736 kernel: efi: EFI v2.7 by EDK II
May 16 16:36:37.837743 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
May 16 16:36:37.837751 kernel: random: crng init done
May 16 16:36:37.837758 kernel: efi: Remove mem149: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 16 16:36:37.837765 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 16 16:36:37.837775 kernel: secureboot: Secure boot disabled
May 16 16:36:37.837782 kernel: SMBIOS 2.8 present.
May 16 16:36:37.837789 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 16 16:36:37.837796 kernel: DMI: Memory slots populated: 1/1
May 16 16:36:37.837803 kernel: Hypervisor detected: KVM
May 16 16:36:37.837811 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 16:36:37.837818 kernel: kvm-clock: using sched offset of 4114539740 cycles
May 16 16:36:37.837825 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 16:36:37.837833 kernel: tsc: Detected 2794.748 MHz processor
May 16 16:36:37.837841 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 16:36:37.837848 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 16:36:37.837858 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 16 16:36:37.837865 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 16 16:36:37.837873 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 16:36:37.837883 kernel: Using GB pages for direct mapping
May 16 16:36:37.837890 kernel: ACPI: Early table checksum verification disabled
May 16 16:36:37.837898 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 16 16:36:37.837905 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 16 16:36:37.837926 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:36:37.837933 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:36:37.837943 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 16 16:36:37.837951 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:36:37.837958 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:36:37.837966 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:36:37.837973 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:36:37.837981 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 16 16:36:37.837988 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 16 16:36:37.837995 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 16 16:36:37.838005 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 16 16:36:37.838013 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 16 16:36:37.838020 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 16 16:36:37.838027 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 16 16:36:37.838035 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 16 16:36:37.838042 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 16 16:36:37.838049 kernel: No NUMA configuration found
May 16 16:36:37.838057 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 16 16:36:37.838064 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
May 16 16:36:37.838074 kernel: Zone ranges:
May 16 16:36:37.838081 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 16:36:37.838089 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 16 16:36:37.838096 kernel: Normal empty
May 16 16:36:37.838103 kernel: Device empty
May 16 16:36:37.838111 kernel: Movable zone start for each node
May 16 16:36:37.838118 kernel: Early memory node ranges
May 16 16:36:37.838125 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 16 16:36:37.838139 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 16 16:36:37.838146 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 16 16:36:37.838156 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 16 16:36:37.838164 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 16 16:36:37.838171 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 16 16:36:37.838178 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
May 16 16:36:37.838185 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
May 16 16:36:37.838193 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 16 16:36:37.838201 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 16:36:37.838208 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 16 16:36:37.838225 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 16 16:36:37.838233 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 16:36:37.838241 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 16 16:36:37.838248 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 16 16:36:37.838258 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 16 16:36:37.838266 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 16 16:36:37.838273 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 16 16:36:37.838281 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 16:36:37.838293 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 16:36:37.838309 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 16:36:37.838331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 16:36:37.838441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 16:36:37.838452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 16:36:37.838460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 16:36:37.838468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 16:36:37.838475 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 16:36:37.838483 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 16 16:36:37.838491 kernel: TSC deadline timer available
May 16 16:36:37.838501 kernel: CPU topo: Max. logical packages: 1
May 16 16:36:37.838509 kernel: CPU topo: Max. logical dies: 1
May 16 16:36:37.838516 kernel: CPU topo: Max. dies per package: 1
May 16 16:36:37.838524 kernel: CPU topo: Max. threads per core: 1
May 16 16:36:37.838531 kernel: CPU topo: Num. cores per package: 4
May 16 16:36:37.838539 kernel: CPU topo: Num. threads per package: 4
May 16 16:36:37.838547 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
May 16 16:36:37.838554 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 16:36:37.838562 kernel: kvm-guest: KVM setup pv remote TLB flush
May 16 16:36:37.838573 kernel: kvm-guest: setup PV sched yield
May 16 16:36:37.838583 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 16 16:36:37.838591 kernel: Booting paravirtualized kernel on KVM
May 16 16:36:37.838601 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 16:36:37.838609 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 16 16:36:37.838616 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
May 16 16:36:37.838624 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
May 16 16:36:37.838632 kernel: pcpu-alloc: [0] 0 1 2 3
May 16 16:36:37.838639 kernel: kvm-guest: PV spinlocks enabled
May 16 16:36:37.838649 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 16 16:36:37.838658 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137
May 16 16:36:37.838667 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 16:36:37.838674 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 16:36:37.838682 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 16:36:37.838690 kernel: Fallback order for Node 0: 0
May 16 16:36:37.838698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
May 16 16:36:37.838705 kernel: Policy zone: DMA32
May 16 16:36:37.838715 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 16:36:37.838723 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 16:36:37.838730 kernel: ftrace: allocating 40065 entries in 157 pages
May 16 16:36:37.838738 kernel: ftrace: allocated 157 pages with 5 groups
May 16 16:36:37.838746 kernel: Dynamic Preempt: voluntary
May 16 16:36:37.838754 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 16:36:37.838764 kernel: rcu: RCU event tracing is enabled.
May 16 16:36:37.838772 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 16:36:37.838780 kernel: Trampoline variant of Tasks RCU enabled.
May 16 16:36:37.838791 kernel: Rude variant of Tasks RCU enabled.
May 16 16:36:37.838799 kernel: Tracing variant of Tasks RCU enabled.
May 16 16:36:37.838807 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 16:36:37.838814 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 16:36:37.838822 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:36:37.838830 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:36:37.838838 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:36:37.838846 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 16 16:36:37.838853 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 16:36:37.838863 kernel: Console: colour dummy device 80x25
May 16 16:36:37.838871 kernel: printk: legacy console [ttyS0] enabled
May 16 16:36:37.838878 kernel: ACPI: Core revision 20240827
May 16 16:36:37.838886 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 16 16:36:37.838901 kernel: APIC: Switch to symmetric I/O mode setup
May 16 16:36:37.838935 kernel: x2apic enabled
May 16 16:36:37.838952 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 16:36:37.838960 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 16 16:36:37.838968 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 16 16:36:37.838979 kernel: kvm-guest: setup PV IPIs
May 16 16:36:37.838987 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 16:36:37.838994 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 16 16:36:37.839002 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 16 16:36:37.839010 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 16 16:36:37.839018 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 16 16:36:37.839025 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 16 16:36:37.839033 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 16:36:37.839041 kernel: Spectre V2 : Mitigation: Retpolines
May 16 16:36:37.839050 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 16 16:36:37.839058 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 16 16:36:37.839066 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 16 16:36:37.839073 kernel: RETBleed: Mitigation: untrained return thunk
May 16 16:36:37.839081 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 16:36:37.839089 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 16:36:37.839097 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 16 16:36:37.839105 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 16 16:36:37.839113 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 16 16:36:37.839122 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 16:36:37.839139 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 16:36:37.839159 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 16:36:37.839170 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 16:36:37.839178 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 16:36:37.839186 kernel: Freeing SMP alternatives memory: 32K
May 16 16:36:37.839194 kernel: pid_max: default: 32768 minimum: 301
May 16 16:36:37.839202 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 16 16:36:37.839212 kernel: landlock: Up and running.
May 16 16:36:37.839228 kernel: SELinux: Initializing.
May 16 16:36:37.839236 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 16:36:37.839244 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 16:36:37.839252 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 16 16:36:37.839260 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 16 16:36:37.839267 kernel: ... version: 0
May 16 16:36:37.839275 kernel: ... bit width: 48
May 16 16:36:37.839282 kernel: ... generic registers: 6
May 16 16:36:37.839292 kernel: ... value mask: 0000ffffffffffff
May 16 16:36:37.839300 kernel: ... max period: 00007fffffffffff
May 16 16:36:37.839307 kernel: ... fixed-purpose events: 0
May 16 16:36:37.839315 kernel: ... event mask: 000000000000003f
May 16 16:36:37.839323 kernel: signal: max sigframe size: 1776
May 16 16:36:37.839330 kernel: rcu: Hierarchical SRCU implementation.
May 16 16:36:37.839338 kernel: rcu: Max phase no-delay instances is 400.
May 16 16:36:37.839346 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 16 16:36:37.839353 kernel: smp: Bringing up secondary CPUs ...
May 16 16:36:37.839361 kernel: smpboot: x86: Booting SMP configuration:
May 16 16:36:37.839370 kernel: .... node #0, CPUs: #1 #2 #3
May 16 16:36:37.839378 kernel: smp: Brought up 1 node, 4 CPUs
May 16 16:36:37.839386 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 16 16:36:37.839394 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 137196K reserved, 0K cma-reserved)
May 16 16:36:37.839401 kernel: devtmpfs: initialized
May 16 16:36:37.839409 kernel: x86/mm: Memory block size: 128MB
May 16 16:36:37.839417 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 16 16:36:37.839424 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 16 16:36:37.839432 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 16 16:36:37.839442 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 16 16:36:37.839450 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
May 16 16:36:37.839457 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 16 16:36:37.839465 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 16:36:37.839473 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 16:36:37.839481 kernel: pinctrl core: initialized pinctrl subsystem
May 16 16:36:37.839488 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 16:36:37.839496 kernel: audit: initializing netlink subsys (disabled)
May 16 16:36:37.839505 kernel: audit: type=2000 audit(1747413395.879:1): state=initialized audit_enabled=0 res=1
May 16 16:36:37.839513 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 16:36:37.839521 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 16:36:37.839528 kernel: cpuidle: using governor menu
May 16 16:36:37.839536 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 16:36:37.839544 kernel: dca service started, version 1.12.1
May 16 16:36:37.839552 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
May 16 16:36:37.839559 kernel: PCI: Using configuration type 1 for base access
May 16 16:36:37.839567 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 16:36:37.839577 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 16:36:37.839584 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 16:36:37.839592 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 16:36:37.839599 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 16:36:37.839607 kernel: ACPI: Added _OSI(Module Device)
May 16 16:36:37.839615 kernel: ACPI: Added _OSI(Processor Device)
May 16 16:36:37.839623 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 16:36:37.839630 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 16:36:37.839638 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 16:36:37.839647 kernel: ACPI: Interpreter enabled
May 16 16:36:37.839655 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 16:36:37.839663 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 16:36:37.839670 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 16:36:37.839678 kernel: PCI: Using E820 reservations for host bridge windows
May 16 16:36:37.839685 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 16 16:36:37.839693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 16:36:37.839967 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 16:36:37.840099 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 16 16:36:37.850584 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 16 16:36:37.850620 kernel: PCI host bridge to bus 0000:00
May 16 16:36:37.850760 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 16:36:37.850868 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 16:36:37.851002 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 16:36:37.851112 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 16 16:36:37.851242 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 16 16:36:37.851368 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 16 16:36:37.851514 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 16:36:37.851679 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 16 16:36:37.851813 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 16 16:36:37.852758 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
May 16 16:36:37.854347 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
May 16 16:36:37.854474 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
May 16 16:36:37.854587 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 16:36:37.854716 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 16 16:36:37.854832 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
May 16 16:36:37.854965 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
May 16 16:36:37.855082 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
May 16 16:36:37.855253 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 16 16:36:37.855374 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
May 16 16:36:37.855489 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
May 16 16:36:37.855611 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
May 16 16:36:37.855750 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 16 16:36:37.855867 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
May 16 16:36:37.856011 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
May 16 16:36:37.856139 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
May 16 16:36:37.856255 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
May 16 16:36:37.856385 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 16 16:36:37.856506 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 16 16:36:37.856644 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 16 16:36:37.856767 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
May 16 16:36:37.856892 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
May 16 16:36:37.857159 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 16 16:36:37.857278 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
May 16 16:36:37.857289 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 16:36:37.857297 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 16:36:37.857305 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 16:36:37.857313 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 16:36:37.857321 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 16 16:36:37.857329 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 16 16:36:37.857341 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 16 16:36:37.857349 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 16 16:36:37.857357 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 16 16:36:37.857365 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 16 16:36:37.857373 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 16 16:36:37.857381 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 16 16:36:37.857389 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 16 16:36:37.857398 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 16 16:36:37.857406 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 16 16:36:37.857416 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 16 16:36:37.857424 kernel: iommu: Default domain type: Translated
May 16 16:36:37.857432 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 16:36:37.857440 kernel: efivars: Registered efivars operations
May 16 16:36:37.857448 kernel: PCI: Using ACPI for IRQ routing
May 16 16:36:37.857456 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 16:36:37.857465 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 16 16:36:37.857473 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 16 16:36:37.857480 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
May 16 16:36:37.857491 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
May 16 16:36:37.857499 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 16 16:36:37.857507 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 16 16:36:37.857515 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
May 16 16:36:37.857523 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 16 16:36:37.857641 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 16 16:36:37.857774 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 16 16:36:37.857932 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 16:36:37.857953 kernel: vgaarb: loaded
May 16 16:36:37.857963 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 16 16:36:37.857974 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 16 16:36:37.857984 kernel: clocksource: Switched to clocksource kvm-clock
May 16 16:36:37.857995 kernel: VFS: Disk quotas dquot_6.6.0
May 16 16:36:37.858006 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 16:36:37.858017 kernel: pnp: PnP ACPI init
May 16 16:36:37.858204 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 16 16:36:37.858224 kernel: pnp: PnP ACPI: found 6 devices
May 16 16:36:37.858233 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 16:36:37.858241 kernel: NET: Registered PF_INET protocol family
May 16 16:36:37.858249 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 16:36:37.858258 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 16:36:37.858266 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 16:36:37.858275 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 16:36:37.858283 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 16:36:37.858294 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 16:36:37.858302 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 16:36:37.858310 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 16:36:37.858318 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 16:36:37.858326 kernel: NET: Registered PF_XDP protocol family
May 16 16:36:37.858443 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
May 16 16:36:37.858559 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
May 16 16:36:37.858671 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 16:36:37.858791 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 16:36:37.858898 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 16:36:37.859083 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 16 16:36:37.859198 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 16 16:36:37.859302 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 16 16:36:37.859312 kernel: PCI: CLS 0 bytes, default 64
May 16 16:36:37.859321 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
May 16 16:36:37.859330 kernel: Initialise system trusted keyrings
May 16 16:36:37.859342 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 16:36:37.859351 kernel: Key type asymmetric registered
May 16 16:36:37.859359 kernel: Asymmetric key parser 'x509' registered
May 16 16:36:37.859367 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 16 16:36:37.859376 kernel: io scheduler mq-deadline registered
May 16 16:36:37.859384 kernel: io scheduler kyber registered
May 16 16:36:37.859392 kernel: io scheduler bfq registered
May 16 16:36:37.859403 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 16:36:37.859412 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 16 16:36:37.859420 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 16 16:36:37.859428 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 16 16:36:37.859436 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 16:36:37.859445 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 16:36:37.859453 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 16:36:37.859461 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 16:36:37.859469 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 16:36:37.859609 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 16:36:37.859623 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 16 16:36:37.859741 kernel: rtc_cmos 00:04: registered as rtc0
May 16 16:36:37.859856 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T16:36:37 UTC (1747413397)
May 16 16:36:37.859993 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 16 16:36:37.860005 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 16 16:36:37.860013 kernel: efifb: probing for efifb
May 16 16:36:37.860026 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 16 16:36:37.860034 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 16 16:36:37.860042 kernel: efifb: scrolling: redraw
May 16 16:36:37.860050 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 16 16:36:37.860059 kernel: Console: switching to colour frame buffer device 160x50
May 16 16:36:37.860067 kernel: fb0: EFI VGA frame buffer device
May 16 16:36:37.860075 kernel: pstore: Using crash dump compression: deflate
May 16 16:36:37.860084 kernel: pstore: Registered efi_pstore as persistent store backend
May 16 16:36:37.860092 kernel: NET: Registered PF_INET6 protocol family
May 16 16:36:37.860100 kernel: Segment Routing with IPv6
May 16 16:36:37.860111 kernel: In-situ OAM (IOAM) with IPv6
May 16 16:36:37.860119 kernel: NET: Registered PF_PACKET protocol family
May 16 16:36:37.860128 kernel: Key type dns_resolver registered
May 16 16:36:37.860144 kernel: IPI shorthand broadcast: enabled
May 16 16:36:37.860152 kernel: sched_clock: Marking stable (3460003056, 157117842)->(3647770544, -30649646)
May 16 16:36:37.860161 kernel: registered taskstats version 1
May 16 16:36:37.860169 kernel: Loading compiled-in X.509 certificates
May 16 16:36:37.860178 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 310304ddc2cf6c43796c9bf79d11c0543afdf71f'
May 16 16:36:37.860186 kernel: Demotion targets for Node 0: null
May 16 16:36:37.860197 kernel: Key
type .fscrypt registered May 16 16:36:37.860205 kernel: Key type fscrypt-provisioning registered May 16 16:36:37.860213 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 16:36:37.860222 kernel: ima: Allocated hash algorithm: sha1 May 16 16:36:37.860230 kernel: ima: No architecture policies found May 16 16:36:37.860238 kernel: clk: Disabling unused clocks May 16 16:36:37.860246 kernel: Warning: unable to open an initial console. May 16 16:36:37.860255 kernel: Freeing unused kernel image (initmem) memory: 54416K May 16 16:36:37.860266 kernel: Write protecting the kernel read-only data: 24576k May 16 16:36:37.860274 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 16 16:36:37.860282 kernel: Run /init as init process May 16 16:36:37.860290 kernel: with arguments: May 16 16:36:37.860298 kernel: /init May 16 16:36:37.860306 kernel: with environment: May 16 16:36:37.860314 kernel: HOME=/ May 16 16:36:37.860322 kernel: TERM=linux May 16 16:36:37.860330 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 16:36:37.860343 systemd[1]: Successfully made /usr/ read-only. May 16 16:36:37.860359 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 16:36:37.860369 systemd[1]: Detected virtualization kvm. May 16 16:36:37.860378 systemd[1]: Detected architecture x86-64. May 16 16:36:37.860387 systemd[1]: Running in initrd. May 16 16:36:37.860396 systemd[1]: No hostname configured, using default hostname. May 16 16:36:37.860405 systemd[1]: Hostname set to . May 16 16:36:37.860417 systemd[1]: Initializing machine ID from VM UUID. May 16 16:36:37.860426 systemd[1]: Queued start job for default target initrd.target. 
May 16 16:36:37.860435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:36:37.860444 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:36:37.860454 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 16:36:37.860463 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 16:36:37.860472 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 16:36:37.860482 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 16:36:37.860495 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 16:36:37.860505 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 16:36:37.860514 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:36:37.860526 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 16:36:37.860535 systemd[1]: Reached target paths.target - Path Units.
May 16 16:36:37.860544 systemd[1]: Reached target slices.target - Slice Units.
May 16 16:36:37.860553 systemd[1]: Reached target swap.target - Swaps.
May 16 16:36:37.860562 systemd[1]: Reached target timers.target - Timer Units.
May 16 16:36:37.860574 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 16:36:37.860583 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 16:36:37.860595 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 16:36:37.860604 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 16:36:37.860615 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:36:37.860625 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 16:36:37.860634 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:36:37.860643 systemd[1]: Reached target sockets.target - Socket Units.
May 16 16:36:37.860654 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 16:36:37.860663 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 16:36:37.860672 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 16:36:37.860682 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 16 16:36:37.860691 systemd[1]: Starting systemd-fsck-usr.service...
May 16 16:36:37.860701 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 16:36:37.860710 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 16:36:37.860719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:36:37.860728 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 16:36:37.860740 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:36:37.860750 systemd[1]: Finished systemd-fsck-usr.service.
May 16 16:36:37.860790 systemd-journald[220]: Collecting audit messages is disabled.
May 16 16:36:37.860820 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 16:36:37.860831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:36:37.860842 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 16:36:37.860852 systemd-journald[220]: Journal started
May 16 16:36:37.860881 systemd-journald[220]: Runtime Journal (/run/log/journal/5ba1edda81244ced8bbcf186e2fd56c7) is 6M, max 48.5M, 42.4M free.
May 16 16:36:37.863502 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 16:36:37.863838 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 16:36:37.867291 systemd-modules-load[222]: Inserted module 'overlay'
May 16 16:36:37.870359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 16:36:37.872063 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 16:36:37.886211 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 16:36:37.887815 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 16 16:36:37.892517 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 16:36:37.895198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:36:37.896998 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:36:37.899931 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 16:36:37.903245 systemd-modules-load[222]: Inserted module 'br_netfilter'
May 16 16:36:37.904339 kernel: Bridge firewalling registered
May 16 16:36:37.907211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 16:36:37.909961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 16:36:37.924476 dracut-cmdline[254]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e3be1f8a550c199f4f838f30cb661b44d98bde818b7f263cba125cc457a9c137
May 16 16:36:37.937617 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 16:36:37.939419 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 16:36:37.982345 systemd-resolved[290]: Positive Trust Anchors:
May 16 16:36:37.982363 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 16:36:37.982395 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 16:36:37.985048 systemd-resolved[290]: Defaulting to hostname 'linux'.
May 16 16:36:37.986286 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 16:36:37.992182 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 16:36:38.042961 kernel: SCSI subsystem initialized
May 16 16:36:38.051938 kernel: Loading iSCSI transport class v2.0-870.
May 16 16:36:38.062942 kernel: iscsi: registered transport (tcp)
May 16 16:36:38.084972 kernel: iscsi: registered transport (qla4xxx)
May 16 16:36:38.085040 kernel: QLogic iSCSI HBA Driver
May 16 16:36:38.106414 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 16:36:38.128340 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:36:38.132284 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 16:36:38.194809 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 16:36:38.196552 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 16:36:38.263937 kernel: raid6: avx2x4 gen() 30416 MB/s
May 16 16:36:38.280933 kernel: raid6: avx2x2 gen() 30792 MB/s
May 16 16:36:38.298002 kernel: raid6: avx2x1 gen() 25018 MB/s
May 16 16:36:38.298073 kernel: raid6: using algorithm avx2x2 gen() 30792 MB/s
May 16 16:36:38.316009 kernel: raid6: .... xor() 19819 MB/s, rmw enabled
May 16 16:36:38.316077 kernel: raid6: using avx2x2 recovery algorithm
May 16 16:36:38.336946 kernel: xor: automatically using best checksumming function avx
May 16 16:36:38.504962 kernel: Btrfs loaded, zoned=no, fsverity=no
May 16 16:36:38.514096 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 16 16:36:38.515889 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:36:38.546837 systemd-udevd[473]: Using default interface naming scheme 'v255'.
May 16 16:36:38.552641 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:36:38.553465 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 16 16:36:38.577979 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
May 16 16:36:38.609740 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 16:36:38.611493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 16:36:38.679014 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:36:38.680884 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 16 16:36:38.718945 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 16 16:36:38.762476 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 16:36:38.762630 kernel: cryptd: max_cpu_qlen set to 1000
May 16 16:36:38.762642 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 16:36:38.762653 kernel: GPT:9289727 != 19775487
May 16 16:36:38.762662 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 16:36:38.762673 kernel: GPT:9289727 != 19775487
May 16 16:36:38.762682 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 16:36:38.762696 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:36:38.762709 kernel: AES CTR mode by8 optimization enabled
May 16 16:36:38.762723 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 16 16:36:38.762733 kernel: libata version 3.00 loaded.
May 16 16:36:38.753652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:36:38.753772 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:36:38.756395 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:36:38.760851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:36:38.771299 kernel: ahci 0000:00:1f.2: version 3.0
May 16 16:36:38.800531 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 16 16:36:38.800549 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 16 16:36:38.800697 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 16 16:36:38.800839 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 16 16:36:38.800993 kernel: scsi host0: ahci
May 16 16:36:38.801186 kernel: scsi host1: ahci
May 16 16:36:38.801330 kernel: scsi host2: ahci
May 16 16:36:38.801473 kernel: scsi host3: ahci
May 16 16:36:38.802021 kernel: scsi host4: ahci
May 16 16:36:38.802174 kernel: scsi host5: ahci
May 16 16:36:38.802340 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
May 16 16:36:38.802353 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
May 16 16:36:38.802363 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
May 16 16:36:38.802374 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
May 16 16:36:38.802384 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
May 16 16:36:38.802394 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
May 16 16:36:38.772338 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:36:38.772459 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:36:38.777472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:36:38.810576 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:36:38.821606 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 16 16:36:38.831267 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 16 16:36:38.839630 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 16 16:36:38.839710 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 16 16:36:38.852713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 16:36:38.855338 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 16 16:36:38.878343 disk-uuid[637]: Primary Header is updated.
May 16 16:36:38.878343 disk-uuid[637]: Secondary Entries is updated.
May 16 16:36:38.878343 disk-uuid[637]: Secondary Header is updated.
May 16 16:36:38.882939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:36:38.886940 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:36:39.111435 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 16 16:36:39.111524 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 16 16:36:39.111535 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 16 16:36:39.112952 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 16 16:36:39.113960 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 16 16:36:39.114947 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 16 16:36:39.114963 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 16 16:36:39.115422 kernel: ata3.00: applying bridge limits
May 16 16:36:39.116943 kernel: ata3.00: configured for UDMA/100
May 16 16:36:39.116968 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 16 16:36:39.161941 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 16 16:36:39.175845 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 16 16:36:39.175871 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 16 16:36:39.473067 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 16 16:36:39.475729 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 16:36:39.478222 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:36:39.480518 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 16:36:39.483375 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 16 16:36:39.515260 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 16 16:36:39.888511 disk-uuid[638]: The operation has completed successfully.
May 16 16:36:39.889834 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:36:39.917883 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 16:36:39.918012 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 16 16:36:39.954642 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 16 16:36:39.967111 sh[667]: Success
May 16 16:36:39.984063 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 16:36:39.984104 kernel: device-mapper: uevent: version 1.0.3
May 16 16:36:39.985163 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 16 16:36:39.993940 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 16 16:36:40.024981 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 16 16:36:40.028783 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 16 16:36:40.045617 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 16 16:36:40.053601 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 16 16:36:40.053629 kernel: BTRFS: device fsid 85b2a34c-237f-4a0a-87d0-0a783de0f256 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (679)
May 16 16:36:40.056024 kernel: BTRFS info (device dm-0): first mount of filesystem 85b2a34c-237f-4a0a-87d0-0a783de0f256
May 16 16:36:40.056052 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 16 16:36:40.056067 kernel: BTRFS info (device dm-0): using free-space-tree
May 16 16:36:40.061242 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 16 16:36:40.063409 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 16 16:36:40.065651 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 16 16:36:40.068272 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 16 16:36:40.071067 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 16 16:36:40.099945 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (712)
May 16 16:36:40.099991 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:36:40.102514 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 16:36:40.102548 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:36:40.108989 kernel: BTRFS info (device vda6): last unmount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:36:40.110319 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 16 16:36:40.112399 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 16 16:36:40.187986 ignition[757]: Ignition 2.21.0
May 16 16:36:40.189448 ignition[757]: Stage: fetch-offline
May 16 16:36:40.189492 ignition[757]: no configs at "/usr/lib/ignition/base.d"
May 16 16:36:40.189504 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:36:40.189590 ignition[757]: parsed url from cmdline: ""
May 16 16:36:40.189594 ignition[757]: no config URL provided
May 16 16:36:40.189599 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
May 16 16:36:40.192791 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 16:36:40.189607 ignition[757]: no config at "/usr/lib/ignition/user.ign"
May 16 16:36:40.197111 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 16:36:40.189631 ignition[757]: op(1): [started] loading QEMU firmware config module
May 16 16:36:40.189636 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 16:36:40.199428 ignition[757]: op(1): [finished] loading QEMU firmware config module
May 16 16:36:40.242847 systemd-networkd[857]: lo: Link UP
May 16 16:36:40.242855 systemd-networkd[857]: lo: Gained carrier
May 16 16:36:40.243168 ignition[757]: parsing config with SHA512: 6099020da1e7427e3cafd2f926b5d2ef36f0f1f96e61e7ebc4dff79368e6f3b8acf91f2f37307c3e98157ed0f6bfdaf48da9d1b4f1d269973e5a5fe012acb8bb
May 16 16:36:40.244473 systemd-networkd[857]: Enumeration completed
May 16 16:36:40.244579 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 16:36:40.245101 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:36:40.247322 ignition[757]: fetch-offline: fetch-offline passed
May 16 16:36:40.245107 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 16:36:40.247389 ignition[757]: Ignition finished successfully
May 16 16:36:40.245527 systemd-networkd[857]: eth0: Link UP
May 16 16:36:40.245530 systemd-networkd[857]: eth0: Gained carrier
May 16 16:36:40.245538 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:36:40.246868 unknown[757]: fetched base config from "system"
May 16 16:36:40.246878 unknown[757]: fetched user config from "qemu"
May 16 16:36:40.258428 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 16:36:40.261763 systemd[1]: Reached target network.target - Network.
May 16 16:36:40.263604 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 16:36:40.264972 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 16:36:40.267718 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 16 16:36:40.308417 ignition[862]: Ignition 2.21.0
May 16 16:36:40.308430 ignition[862]: Stage: kargs
May 16 16:36:40.308568 ignition[862]: no configs at "/usr/lib/ignition/base.d"
May 16 16:36:40.308579 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:36:40.311218 ignition[862]: kargs: kargs passed
May 16 16:36:40.311320 ignition[862]: Ignition finished successfully
May 16 16:36:40.315779 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 16 16:36:40.316995 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 16 16:36:40.348564 ignition[870]: Ignition 2.21.0
May 16 16:36:40.348589 ignition[870]: Stage: disks
May 16 16:36:40.348822 ignition[870]: no configs at "/usr/lib/ignition/base.d"
May 16 16:36:40.348836 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:36:40.350696 ignition[870]: disks: disks passed
May 16 16:36:40.350756 ignition[870]: Ignition finished successfully
May 16 16:36:40.355376 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 16 16:36:40.355664 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 16 16:36:40.358418 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 16 16:36:40.360719 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 16:36:40.362812 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 16:36:40.364782 systemd[1]: Reached target basic.target - Basic System.
May 16 16:36:40.366262 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 16 16:36:40.407354 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 16 16:36:40.416527 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 16 16:36:40.420479 systemd[1]: Mounting sysroot.mount - /sysroot...
May 16 16:36:40.563942 kernel: EXT4-fs (vda9): mounted filesystem 07293137-138a-42a3-a962-d767034e11a7 r/w with ordered data mode. Quota mode: none.
May 16 16:36:40.564095 systemd[1]: Mounted sysroot.mount - /sysroot.
May 16 16:36:40.565441 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 16 16:36:40.568310 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 16:36:40.570046 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 16 16:36:40.570335 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 16 16:36:40.570372 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 16:36:40.570394 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 16:36:40.597648 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 16 16:36:40.600549 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 16 16:36:40.604051 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (888)
May 16 16:36:40.606704 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:36:40.606807 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 16:36:40.606835 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:36:40.611841 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 16:36:40.638358 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
May 16 16:36:40.642874 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory
May 16 16:36:40.648035 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
May 16 16:36:40.652653 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 16:36:40.741409 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 16 16:36:40.743548 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 16 16:36:40.745695 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 16 16:36:40.764959 kernel: BTRFS info (device vda6): last unmount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:36:40.777064 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 16 16:36:40.793078 ignition[1002]: INFO : Ignition 2.21.0
May 16 16:36:40.794334 ignition[1002]: INFO : Stage: mount
May 16 16:36:40.795076 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:36:40.795076 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:36:40.797312 ignition[1002]: INFO : mount: mount passed
May 16 16:36:40.797312 ignition[1002]: INFO : Ignition finished successfully
May 16 16:36:40.798720 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 16 16:36:40.801022 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 16 16:36:41.052803 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 16 16:36:41.054451 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 16:36:41.080324 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (1015)
May 16 16:36:41.080361 kernel: BTRFS info (device vda6): first mount of filesystem 97ba3731-2b30-4c65-8762-24a0a058313d
May 16 16:36:41.080373 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 16:36:41.081191 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:36:41.085185 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 16:36:41.114682 ignition[1032]: INFO : Ignition 2.21.0 May 16 16:36:41.114682 ignition[1032]: INFO : Stage: files May 16 16:36:41.116662 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 16:36:41.116662 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 16:36:41.116662 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping May 16 16:36:41.116662 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 16:36:41.116662 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 16:36:41.123027 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 16:36:41.123027 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 16:36:41.123027 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 16:36:41.123027 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 16 16:36:41.123027 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 16 16:36:41.118849 unknown[1032]: wrote ssh authorized keys file for user: core May 16 16:36:41.187110 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 16:36:41.504421 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 16 16:36:41.504421 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 16 16:36:41.508178 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
May 16 16:36:41.508178 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 16:36:41.511784 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 16:36:41.513625 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 16:36:41.515615 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 16:36:41.517381 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 16:36:41.519364 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 16:36:41.524526 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 16:36:41.526632 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 16:36:41.528670 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 16:36:41.531652 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 16:36:41.531652 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 16:36:41.531652 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 16 16:36:41.688116 systemd-networkd[857]: eth0: Gained IPv6LL May 16 16:36:42.574700 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 16 16:36:43.046656 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 16 16:36:43.046656 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 16 16:36:43.050249 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 16:36:43.257865 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 16:36:43.257865 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 16 16:36:43.257865 ignition[1032]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 16 16:36:43.262660 ignition[1032]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 16:36:43.262660 ignition[1032]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 16:36:43.262660 ignition[1032]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 16 16:36:43.262660 ignition[1032]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 16 16:36:43.286081 ignition[1032]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 16:36:43.289628 ignition[1032]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 
16:36:43.291893 ignition[1032]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 16 16:36:43.291893 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 16 16:36:43.295284 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 16 16:36:43.295284 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 16:36:43.295284 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 16:36:43.295284 ignition[1032]: INFO : files: files passed May 16 16:36:43.295284 ignition[1032]: INFO : Ignition finished successfully May 16 16:36:43.305441 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 16:36:43.309426 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 16:36:43.311542 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 16:36:43.337688 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory May 16 16:36:43.339799 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 16:36:43.339951 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 16 16:36:43.343796 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 16:36:43.343796 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 16:36:43.348743 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 16:36:43.344603 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 16 16:36:43.346170 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 16:36:43.351012 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 16:36:43.409650 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 16:36:43.409794 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 16:36:43.412964 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 16:36:43.414338 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 16:36:43.415465 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 16:36:43.416493 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 16:36:43.445856 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 16:36:43.449819 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 16:36:43.473755 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 16:36:43.476411 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 16:36:43.476560 systemd[1]: Stopped target timers.target - Timer Units. May 16 16:36:43.479152 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 16:36:43.479260 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 16:36:43.541718 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 16:36:43.543049 systemd[1]: Stopped target basic.target - Basic System. May 16 16:36:43.546316 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 16:36:43.548548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 16:36:43.549805 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
May 16 16:36:43.552427 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 16 16:36:43.554847 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 16:36:43.557317 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 16:36:43.562095 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 16:36:43.562283 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 16:36:43.566676 systemd[1]: Stopped target swap.target - Swaps. May 16 16:36:43.566815 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 16:36:43.567019 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 16:36:43.571600 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 16:36:43.571784 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 16:36:43.575815 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 16:36:43.576831 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 16:36:43.579316 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 16:36:43.579477 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 16:36:43.582423 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 16:36:43.582577 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 16:36:43.585716 systemd[1]: Stopped target paths.target - Path Units. May 16 16:36:43.585842 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 16:36:43.591015 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 16:36:43.591217 systemd[1]: Stopped target slices.target - Slice Units. May 16 16:36:43.593746 systemd[1]: Stopped target sockets.target - Socket Units. 
May 16 16:36:43.595412 systemd[1]: iscsid.socket: Deactivated successfully. May 16 16:36:43.595534 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 16:36:43.598769 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 16:36:43.598896 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 16:36:43.601361 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 16:36:43.601523 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 16:36:43.604412 systemd[1]: ignition-files.service: Deactivated successfully. May 16 16:36:43.604557 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 16:36:43.606696 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 16:36:43.607302 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 16:36:43.607466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 16:36:43.619232 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 16:36:43.620185 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 16:36:43.620312 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 16:36:43.621935 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 16:36:43.622225 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 16:36:43.631776 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 16:36:43.635015 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 16 16:36:43.678289 ignition[1088]: INFO : Ignition 2.21.0 May 16 16:36:43.678289 ignition[1088]: INFO : Stage: umount May 16 16:36:43.678289 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 16:36:43.678289 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 16:36:43.678289 ignition[1088]: INFO : umount: umount passed May 16 16:36:43.678289 ignition[1088]: INFO : Ignition finished successfully May 16 16:36:43.676411 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 16:36:43.676528 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 16:36:43.679333 systemd[1]: Stopped target network.target - Network. May 16 16:36:43.680871 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 16:36:43.681023 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 16:36:43.683885 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 16:36:43.683949 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 16:36:43.686450 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 16:36:43.686508 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 16:36:43.688274 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 16:36:43.688320 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 16:36:43.690520 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 16:36:43.692426 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 16:36:43.695566 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 16:36:43.700171 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 16:36:43.700335 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 16:36:43.705455 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
May 16 16:36:43.705728 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 16:36:43.705791 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 16:36:43.709618 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 16:36:43.765668 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 16:36:43.765848 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 16:36:43.769275 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 16:36:43.769482 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 16 16:36:43.769615 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 16:36:43.769662 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 16:36:43.775034 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 16:36:43.776185 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 16:36:43.776248 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 16:36:43.776604 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 16:36:43.776657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 16:36:43.782067 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 16:36:43.782130 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 16:36:43.785198 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 16:36:43.790189 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 16:36:43.803284 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 16:36:43.803437 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
May 16 16:36:43.809586 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 16:36:43.809762 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 16:36:43.812091 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 16:36:43.812134 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 16:36:43.814254 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 16:36:43.814288 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 16:36:43.816309 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 16:36:43.816357 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 16:36:43.820304 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 16:36:43.820356 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 16:36:43.823263 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 16:36:43.823314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 16:36:43.828948 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 16:36:43.829025 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 16 16:36:43.829082 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 16 16:36:43.834090 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 16:36:43.834155 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 16:36:43.838881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 16:36:43.838965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 16:36:43.861022 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 16 16:36:43.861139 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 16:36:44.534441 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 16:36:44.534583 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 16:36:44.537651 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 16:36:44.539706 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 16:36:44.540688 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 16:36:44.543578 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 16:36:44.567493 systemd[1]: Switching root. May 16 16:36:44.606214 systemd-journald[220]: Journal stopped May 16 16:36:46.407708 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). May 16 16:36:46.407794 kernel: SELinux: policy capability network_peer_controls=1 May 16 16:36:46.407811 kernel: SELinux: policy capability open_perms=1 May 16 16:36:46.407825 kernel: SELinux: policy capability extended_socket_class=1 May 16 16:36:46.407839 kernel: SELinux: policy capability always_check_network=0 May 16 16:36:46.407853 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 16:36:46.407867 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 16:36:46.407891 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 16:36:46.407930 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 16:36:46.407950 kernel: SELinux: policy capability userspace_initial_context=0 May 16 16:36:46.407965 kernel: audit: type=1403 audit(1747413405.591:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 16:36:46.407981 systemd[1]: Successfully loaded SELinux policy in 55.219ms. May 16 16:36:46.408024 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 21.025ms. 
May 16 16:36:46.408042 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 16:36:46.408059 systemd[1]: Detected virtualization kvm. May 16 16:36:46.408088 systemd[1]: Detected architecture x86-64. May 16 16:36:46.408110 systemd[1]: Detected first boot. May 16 16:36:46.408131 systemd[1]: Initializing machine ID from VM UUID. May 16 16:36:46.408155 zram_generator::config[1134]: No configuration found. May 16 16:36:46.408176 kernel: Guest personality initialized and is inactive May 16 16:36:46.408192 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 16 16:36:46.408215 kernel: Initialized host personality May 16 16:36:46.408230 kernel: NET: Registered PF_VSOCK protocol family May 16 16:36:46.408243 systemd[1]: Populated /etc with preset unit settings. May 16 16:36:46.408261 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 16 16:36:46.408275 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 16:36:46.408289 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 16:36:46.408304 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 16:36:46.408318 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 16:36:46.408332 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 16:36:46.408346 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 16:36:46.408365 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 16:36:46.408381 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
May 16 16:36:46.408395 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 16:36:46.408409 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 16:36:46.408423 systemd[1]: Created slice user.slice - User and Session Slice. May 16 16:36:46.408437 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 16:36:46.408451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 16:36:46.408464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 16:36:46.408478 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 16:36:46.408492 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 16:36:46.408509 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 16:36:46.408522 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 16:36:46.408536 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 16:36:46.408550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 16:36:46.408563 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 16:36:46.408577 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 16:36:46.408591 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 16:36:46.408605 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 16:36:46.408621 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 16:36:46.408635 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
May 16 16:36:46.408649 systemd[1]: Reached target slices.target - Slice Units. May 16 16:36:46.408664 systemd[1]: Reached target swap.target - Swaps. May 16 16:36:46.408677 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 16:36:46.408691 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 16:36:46.408737 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 16:36:46.408753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 16:36:46.408768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 16:36:46.408784 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 16:36:46.408798 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 16:36:46.408811 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 16:36:46.408825 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 16:36:46.408839 systemd[1]: Mounting media.mount - External Media Directory... May 16 16:36:46.408854 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 16:36:46.408868 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 16:36:46.408882 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 16:36:46.408896 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 16:36:46.408939 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 16:36:46.408955 systemd[1]: Reached target machines.target - Containers. May 16 16:36:46.408969 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
May 16 16:36:46.408984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 16:36:46.408997 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 16:36:46.409011 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 16:36:46.409025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 16:36:46.409039 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 16:36:46.409057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 16:36:46.409074 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 16:36:46.409090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 16:36:46.409106 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 16:36:46.409122 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 16:36:46.409138 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 16:36:46.409153 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 16:36:46.409169 systemd[1]: Stopped systemd-fsck-usr.service. May 16 16:36:46.409186 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 16:36:46.409205 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 16:36:46.409221 kernel: loop: module loaded May 16 16:36:46.409236 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 16 16:36:46.409251 kernel: fuse: init (API version 7.41) May 16 16:36:46.409267 kernel: ACPI: bus type drm_connector registered May 16 16:36:46.409282 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 16:36:46.409298 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 16:36:46.409314 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 16:36:46.409333 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 16:36:46.409348 systemd[1]: verity-setup.service: Deactivated successfully. May 16 16:36:46.409362 systemd[1]: Stopped verity-setup.service. May 16 16:36:46.409380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 16:36:46.409391 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 16:36:46.409406 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 16:36:46.409418 systemd[1]: Mounted media.mount - External Media Directory. May 16 16:36:46.409430 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 16:36:46.409444 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 16:36:46.409456 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 16:36:46.409468 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 16:36:46.409482 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 16:36:46.409528 systemd-journald[1205]: Collecting audit messages is disabled. May 16 16:36:46.409559 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 16:36:46.409572 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
May 16 16:36:46.409584 systemd-journald[1205]: Journal started May 16 16:36:46.409607 systemd-journald[1205]: Runtime Journal (/run/log/journal/5ba1edda81244ced8bbcf186e2fd56c7) is 6M, max 48.5M, 42.4M free. May 16 16:36:46.144669 systemd[1]: Queued start job for default target multi-user.target. May 16 16:36:46.164012 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 16 16:36:46.164493 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 16:36:46.412949 systemd[1]: Started systemd-journald.service - Journal Service. May 16 16:36:46.415112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 16:36:46.416746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 16:36:46.418390 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 16:36:46.418652 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 16:36:46.420160 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 16:36:46.420432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 16:36:46.422157 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 16:36:46.422392 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 16:36:46.423757 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 16:36:46.423984 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 16:36:46.425412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 16:36:46.426842 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 16:36:46.428432 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 16:36:46.430189 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
May 16 16:36:46.446492 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 16:36:46.449271 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 16:36:46.451594 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 16:36:46.452890 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 16:36:46.453005 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 16:36:46.455284 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 16:36:46.461534 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 16:36:46.462933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:36:46.465903 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 16:36:46.470367 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 16:36:46.472100 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 16:36:46.474394 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 16:36:46.475600 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 16:36:46.476888 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 16:36:46.481256 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 16:36:46.490663 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 16:36:46.494058 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 16:36:46.495671 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 16:36:46.498307 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:36:46.499730 systemd-journald[1205]: Time spent on flushing to /var/log/journal/5ba1edda81244ced8bbcf186e2fd56c7 is 14.929ms for 1065 entries.
May 16 16:36:46.499730 systemd-journald[1205]: System Journal (/var/log/journal/5ba1edda81244ced8bbcf186e2fd56c7) is 8M, max 195.6M, 187.6M free.
May 16 16:36:46.524572 systemd-journald[1205]: Received client request to flush runtime journal.
May 16 16:36:46.524615 kernel: loop0: detected capacity change from 0 to 221472
May 16 16:36:46.508522 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 16:36:46.511029 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 16:36:46.514370 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 16:36:46.527191 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 16:36:46.539231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 16:36:46.553078 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 16:36:46.554239 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 16:36:46.563569 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 16:36:46.566782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 16:36:46.578994 kernel: loop1: detected capacity change from 0 to 146240
May 16 16:36:46.599127 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 16 16:36:46.599147 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 16 16:36:46.605744 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:36:46.616945 kernel: loop2: detected capacity change from 0 to 113872
May 16 16:36:46.640956 kernel: loop3: detected capacity change from 0 to 221472
May 16 16:36:46.651952 kernel: loop4: detected capacity change from 0 to 146240
May 16 16:36:46.668948 kernel: loop5: detected capacity change from 0 to 113872
May 16 16:36:46.677767 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 16:36:46.678365 (sd-merge)[1277]: Merged extensions into '/usr'.
May 16 16:36:46.683986 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 16:36:46.684002 systemd[1]: Reloading...
May 16 16:36:46.789956 zram_generator::config[1301]: No configuration found.
May 16 16:36:46.914579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:36:46.919547 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 16:36:46.997266 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 16:36:46.997652 systemd[1]: Reloading finished in 313 ms.
May 16 16:36:47.027477 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 16:36:47.029227 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 16:36:47.048637 systemd[1]: Starting ensure-sysext.service...
May 16 16:36:47.050807 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 16:36:47.064282 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)...
May 16 16:36:47.064298 systemd[1]: Reloading...
May 16 16:36:47.076191 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 16 16:36:47.076225 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 16 16:36:47.076528 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 16:36:47.076771 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 16:36:47.077816 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 16:36:47.078160 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
May 16 16:36:47.078252 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
May 16 16:36:47.083328 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
May 16 16:36:47.083340 systemd-tmpfiles[1341]: Skipping /boot
May 16 16:36:47.099490 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
May 16 16:36:47.099504 systemd-tmpfiles[1341]: Skipping /boot
May 16 16:36:47.181944 zram_generator::config[1368]: No configuration found.
May 16 16:36:47.278496 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:36:47.359429 systemd[1]: Reloading finished in 294 ms.
May 16 16:36:47.381563 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 16:36:47.401015 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:36:47.410441 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:36:47.412947 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 16:36:47.415339 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 16:36:47.422653 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 16:36:47.426479 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:36:47.429586 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 16:36:47.435683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:36:47.435852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:36:47.441941 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:36:47.446501 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:36:47.449050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:36:47.450369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:36:47.450469 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:36:47.454094 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 16:36:47.455311 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:36:47.456533 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 16:36:47.458452 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:36:47.458662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:36:47.462706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:36:47.463598 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:36:47.482357 systemd-udevd[1412]: Using default interface naming scheme 'v255'.
May 16 16:36:47.499841 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 16:36:47.503102 augenrules[1440]: No rules
May 16 16:36:47.503642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:36:47.504033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:36:47.506267 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:36:47.506647 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:36:47.514363 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:36:47.514634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:36:47.516153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:36:47.519405 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:36:47.522798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:36:47.524014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:36:47.524123 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:36:47.546989 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 16:36:47.548116 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:36:47.549098 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:36:47.551541 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 16:36:47.564443 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 16:36:47.567033 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:36:47.567529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:36:47.569323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:36:47.570172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:36:47.572472 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:36:47.572671 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:36:47.585784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:36:47.588370 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:36:47.590109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 16:36:47.591106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 16:36:47.593261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 16:36:47.598663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 16:36:47.605067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 16:36:47.607217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 16:36:47.607268 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 16:36:47.610380 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 16:36:47.611563 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 16:36:47.611599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 16:36:47.627904 augenrules[1488]: /sbin/augenrules: No change
May 16 16:36:47.630792 systemd[1]: Finished ensure-sysext.service.
May 16 16:36:47.632106 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 16:36:47.633599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 16:36:47.633823 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 16:36:47.635295 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 16:36:47.635502 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 16:36:47.636884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 16:36:47.637111 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 16:36:47.638643 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 16:36:47.639254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 16:36:47.651117 augenrules[1517]: No rules
May 16 16:36:47.652218 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:36:47.656210 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:36:47.668044 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 16:36:47.675575 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 16:36:47.675641 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 16:36:47.679033 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 16:36:47.689982 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 16:36:47.701174 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 16:36:47.704461 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
May 16 16:36:47.704506 kernel: mousedev: PS/2 mouse device common for all mice
May 16 16:36:47.708934 kernel: ACPI: button: Power Button [PWRF]
May 16 16:36:47.716395 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 16:36:47.733199 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 16 16:36:47.734827 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 16 16:36:47.735117 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 16 16:36:47.796954 systemd-resolved[1410]: Positive Trust Anchors:
May 16 16:36:47.796970 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 16:36:47.797019 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 16:36:47.806047 systemd-resolved[1410]: Defaulting to hostname 'linux'.
May 16 16:36:47.809028 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 16:36:47.814139 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 16:36:47.886211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:36:47.893099 systemd-networkd[1494]: lo: Link UP
May 16 16:36:47.893121 systemd-networkd[1494]: lo: Gained carrier
May 16 16:36:47.896199 systemd-networkd[1494]: Enumeration completed
May 16 16:36:47.896368 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 16:36:47.896694 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:36:47.896698 systemd-networkd[1494]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 16:36:47.897456 systemd-networkd[1494]: eth0: Link UP
May 16 16:36:47.897719 systemd-networkd[1494]: eth0: Gained carrier
May 16 16:36:47.897743 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:36:47.898628 systemd[1]: Reached target network.target - Network.
May 16 16:36:47.905700 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 16:36:47.915960 systemd-networkd[1494]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 16:36:47.920175 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 16:36:47.927257 kernel: kvm_amd: TSC scaling supported
May 16 16:36:47.927291 kernel: kvm_amd: Nested Virtualization enabled
May 16 16:36:47.927304 kernel: kvm_amd: Nested Paging enabled
May 16 16:36:47.927316 kernel: kvm_amd: LBR virtualization supported
May 16 16:36:47.928347 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 16 16:36:47.928369 kernel: kvm_amd: Virtual GIF supported
May 16 16:36:47.932375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:36:47.950508 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:36:47.956747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:36:47.979157 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 16:36:47.984722 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 16:36:49.788211 systemd-timesyncd[1532]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 16:36:49.788885 systemd-timesyncd[1532]: Initial clock synchronization to Fri 2025-05-16 16:36:49.788128 UTC.
May 16 16:36:49.789345 systemd-resolved[1410]: Clock change detected. Flushing caches.
May 16 16:36:49.789583 systemd[1]: Reached target time-set.target - System Time Set.
May 16 16:36:49.810094 kernel: EDAC MC: Ver: 3.0.0
May 16 16:36:49.855388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:36:49.857011 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 16:36:49.858230 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 16:36:49.859529 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 16:36:49.860776 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 16 16:36:49.862086 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 16:36:49.863357 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 16:36:49.864630 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 16:36:49.865895 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 16:36:49.865937 systemd[1]: Reached target paths.target - Path Units.
May 16 16:36:49.866997 systemd[1]: Reached target timers.target - Timer Units.
May 16 16:36:49.869100 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 16:36:49.872060 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 16:36:49.875801 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 16:36:49.877203 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 16:36:49.878439 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 16:36:49.882002 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 16:36:49.883426 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 16:36:49.885165 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 16:36:49.886916 systemd[1]: Reached target sockets.target - Socket Units.
May 16 16:36:49.887878 systemd[1]: Reached target basic.target - Basic System.
May 16 16:36:49.888868 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 16:36:49.888921 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 16:36:49.889982 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 16:36:49.892424 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 16:36:49.895478 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 16:36:49.898424 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 16:36:49.901439 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 16:36:49.902511 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 16:36:49.903637 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 16 16:36:49.907400 jq[1572]: false
May 16 16:36:49.907405 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 16:36:49.910091 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 16:36:49.912456 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 16:36:49.914145 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Refreshing passwd entry cache
May 16 16:36:49.913787 oslogin_cache_refresh[1574]: Refreshing passwd entry cache
May 16 16:36:49.916163 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 16:36:49.921570 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 16:36:49.925269 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 16:36:49.925309 oslogin_cache_refresh[1574]: Failure getting users, quitting
May 16 16:36:49.925875 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Failure getting users, quitting
May 16 16:36:49.925875 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 16 16:36:49.925875 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Refreshing group entry cache
May 16 16:36:49.925327 oslogin_cache_refresh[1574]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 16 16:36:49.925976 extend-filesystems[1573]: Found loop3
May 16 16:36:49.925976 extend-filesystems[1573]: Found loop4
May 16 16:36:49.925378 oslogin_cache_refresh[1574]: Refreshing group entry cache
May 16 16:36:49.927665 extend-filesystems[1573]: Found loop5
May 16 16:36:49.927665 extend-filesystems[1573]: Found sr0
May 16 16:36:49.927665 extend-filesystems[1573]: Found vda
May 16 16:36:49.927665 extend-filesystems[1573]: Found vda1
May 16 16:36:49.927665 extend-filesystems[1573]: Found vda2
May 16 16:36:49.927665 extend-filesystems[1573]: Found vda3
May 16 16:36:49.927665 extend-filesystems[1573]: Found usr
May 16 16:36:49.927665 extend-filesystems[1573]: Found vda4
May 16 16:36:49.927665 extend-filesystems[1573]: Found vda6
May 16 16:36:49.927665 extend-filesystems[1573]: Found vda7
May 16 16:36:49.927665 extend-filesystems[1573]: Found vda9
May 16 16:36:49.927665 extend-filesystems[1573]: Checking size of /dev/vda9
May 16 16:36:49.933329 oslogin_cache_refresh[1574]: Failure getting groups, quitting
May 16 16:36:49.938863 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Failure getting groups, quitting
May 16 16:36:49.938863 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 16 16:36:49.932088 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 16:36:49.933340 oslogin_cache_refresh[1574]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 16 16:36:49.934427 systemd[1]: Starting update-engine.service - Update Engine...
May 16 16:36:49.937874 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 16:36:49.942238 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 16:36:49.943937 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 16:36:49.944186 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 16:36:49.944513 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 16 16:36:49.944748 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 16 16:36:49.946261 systemd[1]: motdgen.service: Deactivated successfully.
May 16 16:36:49.946510 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 16:36:49.947732 extend-filesystems[1573]: Resized partition /dev/vda9
May 16 16:36:49.950927 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 16:36:49.951163 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 16:36:49.951671 extend-filesystems[1595]: resize2fs 1.47.2 (1-Jan-2025)
May 16 16:36:49.956356 jq[1589]: true
May 16 16:36:49.959427 update_engine[1588]: I20250516 16:36:49.957685 1588 main.cc:92] Flatcar Update Engine starting
May 16 16:36:49.965350 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 16:36:49.981327 jq[1601]: true
May 16 16:36:49.978926 (ntainerd)[1606]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 16:36:50.055625 tar[1596]: linux-amd64/helm
May 16 16:36:50.113602 dbus-daemon[1570]: [system] SELinux support is enabled
May 16 16:36:50.113852 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 16:36:50.115416 systemd-logind[1582]: Watching system buttons on /dev/input/event2 (Power Button)
May 16 16:36:50.115441 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 16 16:36:50.116126 systemd-logind[1582]: New seat seat0.
May 16 16:36:50.117546 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 16:36:50.117577 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 16:36:50.119628 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 16:36:50.119650 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 16:36:50.124636 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 16:36:50.126174 update_engine[1588]: I20250516 16:36:50.126116 1588 update_check_scheduler.cc:74] Next update check in 3m20s
May 16 16:36:50.126751 systemd[1]: Started update-engine.service - Update Engine.
May 16 16:36:50.131446 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 16:36:50.140328 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 16:36:50.241596 extend-filesystems[1595]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 16:36:50.241596 extend-filesystems[1595]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 16:36:50.241596 extend-filesystems[1595]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 16:36:50.228951 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 16:36:50.271843 bash[1626]: Updated "/home/core/.ssh/authorized_keys"
May 16 16:36:50.271952 extend-filesystems[1573]: Resized filesystem in /dev/vda9
May 16 16:36:50.230013 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 16:36:50.268110 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 16:36:50.276251 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 16:36:50.295405 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 16:36:50.413394 sshd_keygen[1597]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 16:36:50.453891 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 16:36:50.474827 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 16:36:50.498953 systemd[1]: issuegen.service: Deactivated successfully.
May 16 16:36:50.499252 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 16:36:50.502356 containerd[1606]: time="2025-05-16T16:36:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 16:36:50.503859 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 16:36:50.506183 containerd[1606]: time="2025-05-16T16:36:50.506140255Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 16 16:36:50.517988 containerd[1606]: time="2025-05-16T16:36:50.517946781Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.296µs" May 16 16:36:50.517988 containerd[1606]: time="2025-05-16T16:36:50.517973060Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 16:36:50.517988 containerd[1606]: time="2025-05-16T16:36:50.517990613Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 16:36:50.518206 containerd[1606]: time="2025-05-16T16:36:50.518180529Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 16:36:50.518206 containerd[1606]: time="2025-05-16T16:36:50.518199305Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 16:36:50.518265 containerd[1606]: time="2025-05-16T16:36:50.518232557Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 16:36:50.518347 containerd[1606]: time="2025-05-16T16:36:50.518320372Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 16:36:50.518347 containerd[1606]: time="2025-05-16T16:36:50.518337033Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 16:36:50.518611 containerd[1606]: time="2025-05-16T16:36:50.518582914Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 16:36:50.518611 containerd[1606]: time="2025-05-16T16:36:50.518601108Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 16:36:50.518649 containerd[1606]: time="2025-05-16T16:36:50.518610516Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 16:36:50.518649 containerd[1606]: time="2025-05-16T16:36:50.518618882Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 16:36:50.518760 containerd[1606]: time="2025-05-16T16:36:50.518736071Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 16:36:50.519001 containerd[1606]: time="2025-05-16T16:36:50.518975320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 16:36:50.519032 containerd[1606]: time="2025-05-16T16:36:50.519007571Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 16:36:50.519032 containerd[1606]: time="2025-05-16T16:36:50.519017229Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 16:36:50.519070 containerd[1606]: time="2025-05-16T16:36:50.519060219Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 16:36:50.520614 containerd[1606]: time="2025-05-16T16:36:50.520574049Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 16:36:50.520713 containerd[1606]: time="2025-05-16T16:36:50.520665901Z" level=info msg="metadata content store policy set" policy=shared May 16 16:36:50.527140 containerd[1606]: time="2025-05-16T16:36:50.527098787Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 16:36:50.527182 containerd[1606]: time="2025-05-16T16:36:50.527147379Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 16:36:50.527182 containerd[1606]: time="2025-05-16T16:36:50.527161215Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 16:36:50.527182 containerd[1606]: time="2025-05-16T16:36:50.527172536Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 16:36:50.527266 containerd[1606]: time="2025-05-16T16:36:50.527184368Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 16:36:50.527266 containerd[1606]: time="2025-05-16T16:36:50.527252325Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 16:36:50.527266 containerd[1606]: time="2025-05-16T16:36:50.527264829Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 16:36:50.527335 containerd[1606]: time="2025-05-16T16:36:50.527306156Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 16:36:50.527335 containerd[1606]: time="2025-05-16T16:36:50.527319341Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 16:36:50.527335 containerd[1606]: time="2025-05-16T16:36:50.527329170Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 16 16:36:50.527399 containerd[1606]: time="2025-05-16T16:36:50.527340371Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 16 16:36:50.527399 containerd[1606]: time="2025-05-16T16:36:50.527352373Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 16 16:36:50.527501 containerd[1606]: time="2025-05-16T16:36:50.527469463Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 16 16:36:50.527501 containerd[1606]: time="2025-05-16T16:36:50.527493187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 16 16:36:50.527541 containerd[1606]: time="2025-05-16T16:36:50.527505380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 16 16:36:50.527541 containerd[1606]: time="2025-05-16T16:36:50.527518765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 16 16:36:50.527541 containerd[1606]: time="2025-05-16T16:36:50.527528954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 16 16:36:50.527541 containerd[1606]: time="2025-05-16T16:36:50.527538482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 16 16:36:50.527623 containerd[1606]: time="2025-05-16T16:36:50.527548822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 16 16:36:50.527623 containerd[1606]: time="2025-05-16T16:36:50.527559171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 16 
16:36:50.527623 containerd[1606]: time="2025-05-16T16:36:50.527582916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 16 16:36:50.527623 containerd[1606]: time="2025-05-16T16:36:50.527594577Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 16 16:36:50.527623 containerd[1606]: time="2025-05-16T16:36:50.527613653Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 16 16:36:50.527716 containerd[1606]: time="2025-05-16T16:36:50.527676742Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 16 16:36:50.527716 containerd[1606]: time="2025-05-16T16:36:50.527692040Z" level=info msg="Start snapshots syncer" May 16 16:36:50.527758 containerd[1606]: time="2025-05-16T16:36:50.527734189Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 16 16:36:50.528114 containerd[1606]: time="2025-05-16T16:36:50.528021939Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 16 16:36:50.528292 containerd[1606]: time="2025-05-16T16:36:50.528114052Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 16 16:36:50.566040 containerd[1606]: time="2025-05-16T16:36:50.565933971Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 16 16:36:50.566306 containerd[1606]: time="2025-05-16T16:36:50.566242380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 16 16:36:50.566557 containerd[1606]: time="2025-05-16T16:36:50.566520191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 16 16:36:50.566557 containerd[1606]: time="2025-05-16T16:36:50.566538836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 16 16:36:50.566557 containerd[1606]: time="2025-05-16T16:36:50.566552602Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 16 16:36:50.566650 containerd[1606]: time="2025-05-16T16:36:50.566568762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 16 16:36:50.566650 containerd[1606]: time="2025-05-16T16:36:50.566581496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 16 16:36:50.566650 containerd[1606]: time="2025-05-16T16:36:50.566608727Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 16 16:36:50.566729 containerd[1606]: time="2025-05-16T16:36:50.566662107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 16 16:36:50.566729 containerd[1606]: time="2025-05-16T16:36:50.566675642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 16 16:36:50.566729 containerd[1606]: time="2025-05-16T16:36:50.566685381Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 16:36:50.568749 containerd[1606]: time="2025-05-16T16:36:50.568709327Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 16:36:50.568870 containerd[1606]: time="2025-05-16T16:36:50.568741688Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 16:36:50.568870 containerd[1606]: time="2025-05-16T16:36:50.568855401Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 16:36:50.568870 containerd[1606]: time="2025-05-16T16:36:50.568869187Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 16:36:50.568944 containerd[1606]: time="2025-05-16T16:36:50.568877693Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 16:36:50.568944 containerd[1606]: time="2025-05-16T16:36:50.568887331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 16:36:50.568944 containerd[1606]: time="2025-05-16T16:36:50.568902149Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 16:36:50.568944 containerd[1606]: time="2025-05-16T16:36:50.568928368Z" level=info msg="runtime interface created" May 16 16:36:50.568944 containerd[1606]: time="2025-05-16T16:36:50.568933517Z" level=info msg="created NRI interface" May 16 16:36:50.568944 containerd[1606]: time="2025-05-16T16:36:50.568941032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 16:36:50.569060 containerd[1606]: time="2025-05-16T16:36:50.568983802Z" level=info msg="Connect containerd service" May 16 16:36:50.569060 containerd[1606]: time="2025-05-16T16:36:50.569021653Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 16:36:50.571550 
containerd[1606]: time="2025-05-16T16:36:50.571498719Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 16:36:50.572564 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 16:36:50.575932 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 16:36:50.578547 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 16:36:50.580867 systemd[1]: Reached target getty.target - Login Prompts. May 16 16:36:50.621262 tar[1596]: linux-amd64/LICENSE May 16 16:36:50.621544 tar[1596]: linux-amd64/README.md May 16 16:36:50.667474 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 16:36:50.736588 containerd[1606]: time="2025-05-16T16:36:50.736527316Z" level=info msg="Start subscribing containerd event" May 16 16:36:50.736714 containerd[1606]: time="2025-05-16T16:36:50.736617665Z" level=info msg="Start recovering state" May 16 16:36:50.736955 containerd[1606]: time="2025-05-16T16:36:50.736752087Z" level=info msg="Start event monitor" May 16 16:36:50.736955 containerd[1606]: time="2025-05-16T16:36:50.736773557Z" level=info msg="Start cni network conf syncer for default" May 16 16:36:50.736955 containerd[1606]: time="2025-05-16T16:36:50.736795939Z" level=info msg="Start streaming server" May 16 16:36:50.736955 containerd[1606]: time="2025-05-16T16:36:50.736809505Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 16:36:50.736955 containerd[1606]: time="2025-05-16T16:36:50.736816869Z" level=info msg="runtime interface starting up..." May 16 16:36:50.736955 containerd[1606]: time="2025-05-16T16:36:50.736825254Z" level=info msg="starting plugins..." 
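The "no network config found in /etc/cni/net.d" error above is expected on a node that has not yet been joined to a cluster: containerd's CRI plugin looks for a CNI config at startup, and on Kubernetes nodes that file is normally dropped in later by the CNI plugin's DaemonSet. For illustration only, a minimal conflist of the shape the CRI plugin accepts (the network name and subnet here are assumptions, not taken from this log):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Note the `confDir` in the CRI config blob earlier in this log is `/etc/cni/net.d` with `maxConfNum: 1`, so only the lexically first conflist file in that directory would be used.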
May 16 16:36:50.736955 containerd[1606]: time="2025-05-16T16:36:50.736843178Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 16:36:50.736955 containerd[1606]: time="2025-05-16T16:36:50.736857725Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 16:36:50.736955 containerd[1606]: time="2025-05-16T16:36:50.736954296Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 16:36:50.737134 containerd[1606]: time="2025-05-16T16:36:50.737057059Z" level=info msg="containerd successfully booted in 0.237137s" May 16 16:36:50.737235 systemd[1]: Started containerd.service - containerd container runtime. May 16 16:36:51.490580 systemd-networkd[1494]: eth0: Gained IPv6LL May 16 16:36:51.494635 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 16:36:51.496592 systemd[1]: Reached target network-online.target - Network is Online. May 16 16:36:51.499431 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 16:36:51.501943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:36:51.504168 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 16:36:51.529625 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 16:36:51.538775 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 16:36:51.539080 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 16 16:36:51.540737 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 16:36:52.793897 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 16:36:52.796419 systemd[1]: Started sshd@0-10.0.0.37:22-10.0.0.1:37074.service - OpenSSH per-connection server daemon (10.0.0.1:37074). 
May 16 16:36:52.852819 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 37074 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:36:52.854742 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:36:52.861258 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 16:36:52.863560 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 16:36:52.871298 systemd-logind[1582]: New session 1 of user core. May 16 16:36:52.885650 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 16:36:52.890597 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 16:36:52.939778 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 16:36:52.942182 systemd-logind[1582]: New session c1 of user core. May 16 16:36:52.969480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:36:52.971124 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 16:36:52.973991 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 16:36:53.085558 systemd[1701]: Queued start job for default target default.target. May 16 16:36:53.104511 systemd[1701]: Created slice app.slice - User Application Slice. May 16 16:36:53.104535 systemd[1701]: Reached target paths.target - Paths. May 16 16:36:53.104575 systemd[1701]: Reached target timers.target - Timers. May 16 16:36:53.106141 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 16:36:53.120472 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 16:36:53.120598 systemd[1701]: Reached target sockets.target - Sockets. May 16 16:36:53.120639 systemd[1701]: Reached target basic.target - Basic System. 
May 16 16:36:53.120678 systemd[1701]: Reached target default.target - Main User Target. May 16 16:36:53.120712 systemd[1701]: Startup finished in 169ms. May 16 16:36:53.121188 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 16:36:53.123794 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 16:36:53.125394 systemd[1]: Startup finished in 3.538s (kernel) + 7.938s (initrd) + 5.784s (userspace) = 17.261s. May 16 16:36:53.188688 systemd[1]: Started sshd@1-10.0.0.37:22-10.0.0.1:37090.service - OpenSSH per-connection server daemon (10.0.0.1:37090). May 16 16:36:53.240113 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 37090 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:36:53.241870 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:36:53.247340 systemd-logind[1582]: New session 2 of user core. May 16 16:36:53.253441 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 16:36:53.308776 sshd[1729]: Connection closed by 10.0.0.1 port 37090 May 16 16:36:53.309222 sshd-session[1727]: pam_unix(sshd:session): session closed for user core May 16 16:36:53.317877 systemd[1]: sshd@1-10.0.0.37:22-10.0.0.1:37090.service: Deactivated successfully. May 16 16:36:53.319801 systemd[1]: session-2.scope: Deactivated successfully. May 16 16:36:53.320500 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit. May 16 16:36:53.323677 systemd[1]: Started sshd@2-10.0.0.37:22-10.0.0.1:37092.service - OpenSSH per-connection server daemon (10.0.0.1:37092). May 16 16:36:53.324573 systemd-logind[1582]: Removed session 2. 
May 16 16:36:53.364031 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 37092 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:36:53.365687 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:36:53.370128 systemd-logind[1582]: New session 3 of user core. May 16 16:36:53.379414 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 16:36:53.428669 sshd[1738]: Connection closed by 10.0.0.1 port 37092 May 16 16:36:53.429050 sshd-session[1736]: pam_unix(sshd:session): session closed for user core May 16 16:36:53.440074 systemd[1]: sshd@2-10.0.0.37:22-10.0.0.1:37092.service: Deactivated successfully. May 16 16:36:53.442734 systemd[1]: session-3.scope: Deactivated successfully. May 16 16:36:53.443603 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit. May 16 16:36:53.447459 systemd[1]: Started sshd@3-10.0.0.37:22-10.0.0.1:37098.service - OpenSSH per-connection server daemon (10.0.0.1:37098). May 16 16:36:53.448059 systemd-logind[1582]: Removed session 3. May 16 16:36:53.492608 kubelet[1710]: E0516 16:36:53.492562 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 16:36:53.494792 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 37098 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:36:53.496568 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:36:53.496809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 16:36:53.496990 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 16 16:36:53.497388 systemd[1]: kubelet.service: Consumed 1.811s CPU time, 266.6M memory peak. May 16 16:36:53.501677 systemd-logind[1582]: New session 4 of user core. May 16 16:36:53.514396 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 16:36:53.567500 sshd[1747]: Connection closed by 10.0.0.1 port 37098 May 16 16:36:53.567868 sshd-session[1744]: pam_unix(sshd:session): session closed for user core May 16 16:36:53.583664 systemd[1]: sshd@3-10.0.0.37:22-10.0.0.1:37098.service: Deactivated successfully. May 16 16:36:53.585372 systemd[1]: session-4.scope: Deactivated successfully. May 16 16:36:53.586014 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit. May 16 16:36:53.588648 systemd[1]: Started sshd@4-10.0.0.37:22-10.0.0.1:51884.service - OpenSSH per-connection server daemon (10.0.0.1:51884). May 16 16:36:53.589157 systemd-logind[1582]: Removed session 4. May 16 16:36:53.635098 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 51884 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:36:53.636621 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:36:53.640850 systemd-logind[1582]: New session 5 of user core. May 16 16:36:53.650397 systemd[1]: Started session-5.scope - Session 5 of User core. May 16 16:36:53.707709 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 16:36:53.708026 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 16:36:53.726404 sudo[1756]: pam_unix(sudo:session): session closed for user root May 16 16:36:53.727751 sshd[1755]: Connection closed by 10.0.0.1 port 51884 May 16 16:36:53.728119 sshd-session[1753]: pam_unix(sshd:session): session closed for user core May 16 16:36:53.741197 systemd[1]: sshd@4-10.0.0.37:22-10.0.0.1:51884.service: Deactivated successfully. May 16 16:36:53.742974 systemd[1]: session-5.scope: Deactivated successfully. 
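The kubelet exit above is the usual first-boot failure mode: `/var/lib/kubelet/config.yaml` does not exist until the node is initialized or joined (kubeadm writes it during `init`/`join`), so systemd keeps restarting the unit until that happens. For illustration, a minimal KubeletConfiguration of the kind kubeadm generates (these field values are assumptions, not taken from this log):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, matching SystemdCgroup=true in the containerd
# runc options shown in the CRI config earlier in this log
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```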
May 16 16:36:53.743682 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit. May 16 16:36:53.746665 systemd[1]: Started sshd@5-10.0.0.37:22-10.0.0.1:51888.service - OpenSSH per-connection server daemon (10.0.0.1:51888). May 16 16:36:53.747236 systemd-logind[1582]: Removed session 5. May 16 16:36:53.961013 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 51888 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:36:53.962964 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:36:53.967797 systemd-logind[1582]: New session 6 of user core. May 16 16:36:53.977423 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 16:36:54.031514 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 16:36:54.031830 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 16:36:54.039098 sudo[1766]: pam_unix(sudo:session): session closed for user root May 16 16:36:54.046172 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 16:36:54.046509 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 16:36:54.057417 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 16:36:54.114929 augenrules[1788]: No rules May 16 16:36:54.116854 systemd[1]: audit-rules.service: Deactivated successfully. May 16 16:36:54.117135 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 16:36:54.118366 sudo[1765]: pam_unix(sudo:session): session closed for user root May 16 16:36:54.119945 sshd[1764]: Connection closed by 10.0.0.1 port 51888 May 16 16:36:54.120341 sshd-session[1762]: pam_unix(sshd:session): session closed for user core May 16 16:36:54.132659 systemd[1]: sshd@5-10.0.0.37:22-10.0.0.1:51888.service: Deactivated successfully. 
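The `augenrules: No rules` message above follows directly from the preceding sudo commands, which deleted the two files under `/etc/audit/rules.d/` before restarting audit-rules. For reference, files in that directory contain one auditctl-style rule per line; a hypothetical single-rule example (not from this log):

```
# /etc/audit/rules.d/example.rules — watch /etc/passwd for writes and
# attribute changes, tagging matching events with the key "identity"
-w /etc/passwd -p wa -k identity
```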
May 16 16:36:54.134456 systemd[1]: session-6.scope: Deactivated successfully. May 16 16:36:54.135258 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit. May 16 16:36:54.142525 systemd[1]: Started sshd@6-10.0.0.37:22-10.0.0.1:51898.service - OpenSSH per-connection server daemon (10.0.0.1:51898). May 16 16:36:54.143555 systemd-logind[1582]: Removed session 6. May 16 16:36:54.190146 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 51898 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:36:54.191721 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:36:54.196028 systemd-logind[1582]: New session 7 of user core. May 16 16:36:54.205419 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 16:36:54.259242 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 16:36:54.259625 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 16:36:54.772713 systemd[1]: Starting docker.service - Docker Application Container Engine... May 16 16:36:54.791608 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 16:36:55.387488 dockerd[1820]: time="2025-05-16T16:36:55.387410854Z" level=info msg="Starting up" May 16 16:36:55.388799 dockerd[1820]: time="2025-05-16T16:36:55.388762098Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 16 16:36:56.708046 dockerd[1820]: time="2025-05-16T16:36:56.707908289Z" level=info msg="Loading containers: start." May 16 16:36:56.738310 kernel: Initializing XFRM netlink socket May 16 16:36:57.144214 systemd-networkd[1494]: docker0: Link UP May 16 16:36:57.233331 dockerd[1820]: time="2025-05-16T16:36:57.233263977Z" level=info msg="Loading containers: done." 
May 16 16:36:57.292440 dockerd[1820]: time="2025-05-16T16:36:57.292394669Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 16:36:57.292597 dockerd[1820]: time="2025-05-16T16:36:57.292506659Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 16 16:36:57.292649 dockerd[1820]: time="2025-05-16T16:36:57.292635130Z" level=info msg="Initializing buildkit" May 16 16:36:57.461127 dockerd[1820]: time="2025-05-16T16:36:57.461004433Z" level=info msg="Completed buildkit initialization" May 16 16:36:57.467663 dockerd[1820]: time="2025-05-16T16:36:57.467625121Z" level=info msg="Daemon has completed initialization" May 16 16:36:57.467769 dockerd[1820]: time="2025-05-16T16:36:57.467708397Z" level=info msg="API listen on /run/docker.sock" May 16 16:36:57.467884 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 16:36:58.692226 containerd[1606]: time="2025-05-16T16:36:58.692151132Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 16 16:36:59.325595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083423853.mount: Deactivated successfully. 
May 16 16:37:00.477126 containerd[1606]: time="2025-05-16T16:37:00.477052437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:00.477777 containerd[1606]: time="2025-05-16T16:37:00.477697597Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 16 16:37:00.478951 containerd[1606]: time="2025-05-16T16:37:00.478919579Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:00.481131 containerd[1606]: time="2025-05-16T16:37:00.481080161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:00.481976 containerd[1606]: time="2025-05-16T16:37:00.481940836Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 1.789733168s" May 16 16:37:00.482029 containerd[1606]: time="2025-05-16T16:37:00.481981482Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 16 16:37:00.482672 containerd[1606]: time="2025-05-16T16:37:00.482643955Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 16 16:37:01.657156 containerd[1606]: time="2025-05-16T16:37:01.657093406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" 
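The pull messages report both bytes read and wall time, so the effective registry throughput can be derived. Using the figures from the kube-apiserver pull above (28078845 bytes in ~1.79 s), the rate works out to roughly 15-16 MB/s:

```python
# Effective throughput for the kube-apiserver image pull, using the two
# numbers reported in the log lines above.
bytes_read = 28_078_845   # "active requests=0, bytes read=28078845"
duration_s = 1.789733168  # "... in 1.789733168s"

rate_mb_s = bytes_read / duration_s / 1e6
print(f"~{rate_mb_s:.1f} MB/s")
```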
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:01.657864 containerd[1606]: time="2025-05-16T16:37:01.657803248Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522"
May 16 16:37:01.659074 containerd[1606]: time="2025-05-16T16:37:01.659042502Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:01.661470 containerd[1606]: time="2025-05-16T16:37:01.661449317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:01.662299 containerd[1606]: time="2025-05-16T16:37:01.662246772Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.179574084s"
May 16 16:37:01.662354 containerd[1606]: time="2025-05-16T16:37:01.662322454Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 16 16:37:01.662941 containerd[1606]: time="2025-05-16T16:37:01.662908143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 16 16:37:02.947582 containerd[1606]: time="2025-05-16T16:37:02.947521019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:02.948311 containerd[1606]: time="2025-05-16T16:37:02.948256058Z"
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311"
May 16 16:37:02.949512 containerd[1606]: time="2025-05-16T16:37:02.949448314Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:02.952400 containerd[1606]: time="2025-05-16T16:37:02.952355067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:02.953302 containerd[1606]: time="2025-05-16T16:37:02.953262339Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.290311876s"
May 16 16:37:02.953365 containerd[1606]: time="2025-05-16T16:37:02.953305570Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 16 16:37:02.953811 containerd[1606]: time="2025-05-16T16:37:02.953760543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 16 16:37:03.591621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 16:37:03.593016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:37:03.829430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:37:03.833536 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:37:03.898140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508096701.mount: Deactivated successfully.
May 16 16:37:03.913670 kubelet[2106]: E0516 16:37:03.913615 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:37:03.919142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:37:03.919337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:37:03.919671 systemd[1]: kubelet.service: Consumed 265ms CPU time, 110.7M memory peak.
May 16 16:37:04.503444 containerd[1606]: time="2025-05-16T16:37:04.503391047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:04.504758 containerd[1606]: time="2025-05-16T16:37:04.504727774Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623"
May 16 16:37:04.506165 containerd[1606]: time="2025-05-16T16:37:04.506104507Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:04.508198 containerd[1606]: time="2025-05-16T16:37:04.508167316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:04.508713 containerd[1606]: time="2025-05-16T16:37:04.508667674Z" level=info
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.554878587s"
May 16 16:37:04.508749 containerd[1606]: time="2025-05-16T16:37:04.508713881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 16 16:37:04.509193 containerd[1606]: time="2025-05-16T16:37:04.509173784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 16 16:37:05.078884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764525620.mount: Deactivated successfully.
May 16 16:37:05.709272 containerd[1606]: time="2025-05-16T16:37:05.709214317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:05.709894 containerd[1606]: time="2025-05-16T16:37:05.709832306Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 16 16:37:05.711042 containerd[1606]: time="2025-05-16T16:37:05.710994015Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:05.713405 containerd[1606]: time="2025-05-16T16:37:05.713361125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:05.714339 containerd[1606]: time="2025-05-16T16:37:05.714307470Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.205107647s"
May 16 16:37:05.714377 containerd[1606]: time="2025-05-16T16:37:05.714340602Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 16 16:37:05.714844 containerd[1606]: time="2025-05-16T16:37:05.714809011Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 16:37:06.195395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4048002174.mount: Deactivated successfully.
May 16 16:37:06.200518 containerd[1606]: time="2025-05-16T16:37:06.200473028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:37:06.201207 containerd[1606]: time="2025-05-16T16:37:06.201179212Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 16 16:37:06.202435 containerd[1606]: time="2025-05-16T16:37:06.202402667Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:37:06.204243 containerd[1606]: time="2025-05-16T16:37:06.204194548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:37:06.204803 containerd[1606]: time="2025-05-16T16:37:06.204762233Z" level=info msg="Pulled
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 489.915572ms"
May 16 16:37:06.204803 containerd[1606]: time="2025-05-16T16:37:06.204798381Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 16 16:37:06.205345 containerd[1606]: time="2025-05-16T16:37:06.205257612Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 16 16:37:06.729365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095459864.mount: Deactivated successfully.
May 16 16:37:08.264583 containerd[1606]: time="2025-05-16T16:37:08.264504330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:08.265309 containerd[1606]: time="2025-05-16T16:37:08.265247344Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 16 16:37:08.267089 containerd[1606]: time="2025-05-16T16:37:08.267033154Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:08.269909 containerd[1606]: time="2025-05-16T16:37:08.269856900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:08.270843 containerd[1606]: time="2025-05-16T16:37:08.270806712Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.065498285s"
May 16 16:37:08.270883 containerd[1606]: time="2025-05-16T16:37:08.270843501Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 16 16:37:10.746780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:37:10.746969 systemd[1]: kubelet.service: Consumed 265ms CPU time, 110.7M memory peak.
May 16 16:37:10.749407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:37:10.776035 systemd[1]: Reload requested from client PID 2260 ('systemctl') (unit session-7.scope)...
May 16 16:37:10.776054 systemd[1]: Reloading...
May 16 16:37:10.862392 zram_generator::config[2304]: No configuration found.
May 16 16:37:10.966960 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:37:11.084344 systemd[1]: Reloading finished in 307 ms.
May 16 16:37:11.148914 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 16 16:37:11.149022 systemd[1]: kubelet.service: Failed with result 'signal'.
May 16 16:37:11.149362 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:37:11.149408 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98.3M memory peak.
May 16 16:37:11.153618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:37:11.366705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:37:11.383598 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 16:37:11.424763 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 16:37:11.424763 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 16 16:37:11.424763 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 16:37:11.425183 kubelet[2351]: I0516 16:37:11.424812 2351 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 16:37:11.634002 kubelet[2351]: I0516 16:37:11.633861 2351 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 16 16:37:11.634002 kubelet[2351]: I0516 16:37:11.633899 2351 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 16:37:11.634420 kubelet[2351]: I0516 16:37:11.634394 2351 server.go:934] "Client rotation is on, will bootstrap in background"
May 16 16:37:11.660887 kubelet[2351]: E0516 16:37:11.660823 2351 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError"
May 16 16:37:11.662942 kubelet[2351]: I0516
16:37:11.662900 2351 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 16:37:11.669192 kubelet[2351]: I0516 16:37:11.669170 2351 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 16 16:37:11.675136 kubelet[2351]: I0516 16:37:11.675115 2351 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 16:37:11.675260 kubelet[2351]: I0516 16:37:11.675247 2351 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 16 16:37:11.675445 kubelet[2351]: I0516 16:37:11.675416 2351 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 16:37:11.675619 kubelet[2351]: I0516 16:37:11.675445 2351 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":
{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 16:37:11.675736 kubelet[2351]: I0516 16:37:11.675643 2351 topology_manager.go:138] "Creating topology manager with none policy"
May 16 16:37:11.675736 kubelet[2351]: I0516 16:37:11.675652 2351 container_manager_linux.go:300] "Creating device plugin manager"
May 16 16:37:11.675799 kubelet[2351]: I0516 16:37:11.675788 2351 state_mem.go:36] "Initialized new in-memory state store"
May 16 16:37:11.677644 kubelet[2351]: I0516 16:37:11.677622 2351 kubelet.go:408] "Attempting to sync node with API server"
May 16 16:37:11.677685 kubelet[2351]: I0516 16:37:11.677655 2351 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 16:37:11.677712 kubelet[2351]: I0516 16:37:11.677708 2351 kubelet.go:314] "Adding apiserver pod source"
May 16 16:37:11.677756 kubelet[2351]: I0516 16:37:11.677744 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 16:37:11.681368 kubelet[2351]: W0516 16:37:11.681273 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
May 16 16:37:11.681368 kubelet[2351]: E0516 16:37:11.681372 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get
\"https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError"
May 16 16:37:11.681552 kubelet[2351]: W0516 16:37:11.681444 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
May 16 16:37:11.681552 kubelet[2351]: E0516 16:37:11.681477 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError"
May 16 16:37:11.684311 kubelet[2351]: I0516 16:37:11.684266 2351 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 16 16:37:11.685029 kubelet[2351]: I0516 16:37:11.684989 2351 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 16:37:11.685128 kubelet[2351]: W0516 16:37:11.685106 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 16 16:37:11.687054 kubelet[2351]: I0516 16:37:11.687021 2351 server.go:1274] "Started kubelet"
May 16 16:37:11.687339 kubelet[2351]: I0516 16:37:11.687099 2351 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 16 16:37:11.688094 kubelet[2351]: I0516 16:37:11.687423 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 16:37:11.688094 kubelet[2351]: I0516 16:37:11.687794 2351 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 16:37:11.688234 kubelet[2351]: I0516 16:37:11.688147 2351 server.go:449] "Adding debug handlers to kubelet server"
May 16 16:37:11.688829 kubelet[2351]: I0516 16:37:11.688791 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 16:37:11.691160 kubelet[2351]: I0516 16:37:11.691132 2351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 16:37:11.691681 kubelet[2351]: E0516 16:37:11.691652 2351 kubelet.go:1478] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 16:37:11.693253 kubelet[2351]: E0516 16:37:11.692551 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:37:11.693253 kubelet[2351]: I0516 16:37:11.692591 2351 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 16 16:37:11.693253 kubelet[2351]: I0516 16:37:11.692780 2351 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 16 16:37:11.693253 kubelet[2351]: I0516 16:37:11.692839 2351 reconciler.go:26] "Reconciler: start to sync state"
May 16 16:37:11.693253 kubelet[2351]: E0516 16:37:11.691822 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.37:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.37:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18400f4207730bf5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 16:37:11.686990837 +0000 UTC m=+0.299456205,LastTimestamp:2025-05-16 16:37:11.686990837 +0000 UTC m=+0.299456205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 16:37:11.693475 kubelet[2351]: I0516 16:37:11.693409 2351 factory.go:221] Registration of the systemd container factory successfully
May 16 16:37:11.693509 kubelet[2351]: I0516 16:37:11.693477 2351 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 16:37:11.693885 kubelet[2351]: E0516 16:37:11.693845 2351
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="200ms"
May 16 16:37:11.694177 kubelet[2351]: W0516 16:37:11.694113 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
May 16 16:37:11.694222 kubelet[2351]: E0516 16:37:11.694171 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError"
May 16 16:37:11.694917 kubelet[2351]: I0516 16:37:11.694886 2351 factory.go:221] Registration of the containerd container factory successfully
May 16 16:37:11.708105 kubelet[2351]: I0516 16:37:11.707927 2351 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 16 16:37:11.708105 kubelet[2351]: I0516 16:37:11.708088 2351 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 16 16:37:11.708105 kubelet[2351]: I0516 16:37:11.708102 2351 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 16 16:37:11.708255 kubelet[2351]: I0516 16:37:11.708124 2351 state_mem.go:36] "Initialized new in-memory state store"
May 16 16:37:11.709266 kubelet[2351]: I0516 16:37:11.709225 2351 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6"
May 16 16:37:11.709266 kubelet[2351]: I0516 16:37:11.709262 2351 status_manager.go:217] "Starting to sync pod status with apiserver"
May 16 16:37:11.709586 kubelet[2351]: I0516 16:37:11.709322 2351 kubelet.go:2321] "Starting kubelet main sync loop"
May 16 16:37:11.709586 kubelet[2351]: E0516 16:37:11.709361 2351 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 16:37:11.715359 kubelet[2351]: W0516 16:37:11.715294 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
May 16 16:37:11.715458 kubelet[2351]: E0516 16:37:11.715366 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError"
May 16 16:37:11.793076 kubelet[2351]: E0516 16:37:11.793026 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:37:11.810384 kubelet[2351]: E0516 16:37:11.810335 2351 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 16:37:11.893475 kubelet[2351]: E0516 16:37:11.893331 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 16:37:11.894793 kubelet[2351]: E0516 16:37:11.894754 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect:
connection refused" interval="400ms" May 16 16:37:11.994061 kubelet[2351]: E0516 16:37:11.994033 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:12.011414 kubelet[2351]: E0516 16:37:12.011341 2351 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 16:37:12.082921 kubelet[2351]: I0516 16:37:12.082866 2351 policy_none.go:49] "None policy: Start" May 16 16:37:12.083727 kubelet[2351]: I0516 16:37:12.083707 2351 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 16:37:12.083806 kubelet[2351]: I0516 16:37:12.083739 2351 state_mem.go:35] "Initializing new in-memory state store" May 16 16:37:12.090096 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 16:37:12.094938 kubelet[2351]: E0516 16:37:12.094901 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:12.105151 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 16:37:12.111105 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 16 16:37:12.132390 kubelet[2351]: I0516 16:37:12.132368 2351 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 16 16:37:12.132727 kubelet[2351]: I0516 16:37:12.132691 2351 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 16:37:12.132727 kubelet[2351]: I0516 16:37:12.132708 2351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 16:37:12.132933 kubelet[2351]: I0516 16:37:12.132916 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 16:37:12.134396 kubelet[2351]: E0516 16:37:12.134356 2351 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 16 16:37:12.234253 kubelet[2351]: I0516 16:37:12.234166 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 16:37:12.234807 kubelet[2351]: E0516 16:37:12.234773 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost"
May 16 16:37:12.295506 kubelet[2351]: E0516 16:37:12.295451 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="800ms"
May 16 16:37:12.419879 systemd[1]: Created slice kubepods-burstable-podf993af515bacf4ef3bf64bc036c78a26.slice - libcontainer container kubepods-burstable-podf993af515bacf4ef3bf64bc036c78a26.slice.
May 16 16:37:12.435982 kubelet[2351]: I0516 16:37:12.435959 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 16 16:37:12.436361 kubelet[2351]: E0516 16:37:12.436302 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost"
May 16 16:37:12.446301 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice.
May 16 16:37:12.450508 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice.
May 16 16:37:12.497482 kubelet[2351]: I0516 16:37:12.497374 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost"
May 16 16:37:12.497482 kubelet[2351]: I0516 16:37:12.497414 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 16 16:37:12.497482 kubelet[2351]: I0516 16:37:12.497439 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f993af515bacf4ef3bf64bc036c78a26-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f993af515bacf4ef3bf64bc036c78a26\") " pod="kube-system/kube-apiserver-localhost"
May 16 16:37:12.497482 kubelet[2351]: I0516
16:37:12.497461 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f993af515bacf4ef3bf64bc036c78a26-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f993af515bacf4ef3bf64bc036c78a26\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:12.497685 kubelet[2351]: I0516 16:37:12.497507 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:12.497685 kubelet[2351]: I0516 16:37:12.497537 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:12.497685 kubelet[2351]: I0516 16:37:12.497561 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:12.497685 kubelet[2351]: I0516 16:37:12.497582 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " 
pod="kube-system/kube-controller-manager-localhost" May 16 16:37:12.497685 kubelet[2351]: I0516 16:37:12.497603 2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f993af515bacf4ef3bf64bc036c78a26-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f993af515bacf4ef3bf64bc036c78a26\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:12.548835 kubelet[2351]: W0516 16:37:12.548786 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused May 16 16:37:12.548887 kubelet[2351]: E0516 16:37:12.548843 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:12.743557 kubelet[2351]: E0516 16:37:12.743520 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:12.744293 containerd[1606]: time="2025-05-16T16:37:12.744234975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f993af515bacf4ef3bf64bc036c78a26,Namespace:kube-system,Attempt:0,}" May 16 16:37:12.749487 kubelet[2351]: E0516 16:37:12.749398 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:12.749775 containerd[1606]: time="2025-05-16T16:37:12.749713310Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 16 16:37:12.753422 kubelet[2351]: E0516 16:37:12.753378 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:12.753870 containerd[1606]: time="2025-05-16T16:37:12.753839891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 16 16:37:12.793794 containerd[1606]: time="2025-05-16T16:37:12.793672545Z" level=info msg="connecting to shim 3dcf08bcd9785fe57c5b002ba3590b8431f277e4890d5cda8a5e82a9570294b5" address="unix:///run/containerd/s/c7b9ac75c6a856c23fff09025fb5503ba1f6b2c9b057cc554bc0d0927768980b" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:12.794078 kubelet[2351]: W0516 16:37:12.794008 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused May 16 16:37:12.794124 kubelet[2351]: E0516 16:37:12.794094 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:12.797315 containerd[1606]: time="2025-05-16T16:37:12.796843764Z" level=info msg="connecting to shim d3c8add2bf350976184c6c1407c7ea385033f6a1455112f021067932e620435f" address="unix:///run/containerd/s/d502649286faabbefa313bfbfc739095937fbb25d472911fd9f6c7db47bd4138" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:12.805836 containerd[1606]: 
time="2025-05-16T16:37:12.805751482Z" level=info msg="connecting to shim eca5b2caa385919c3501265715f698771fd68141c254503b32d1d8577c8430a3" address="unix:///run/containerd/s/68918294fac9d53945f0aeb7fffef625809746becaff6704156901f70c459133" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:12.826138 kubelet[2351]: W0516 16:37:12.825039 2351 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused May 16 16:37:12.826138 kubelet[2351]: E0516 16:37:12.825133 2351 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" May 16 16:37:12.831404 systemd[1]: Started cri-containerd-3dcf08bcd9785fe57c5b002ba3590b8431f277e4890d5cda8a5e82a9570294b5.scope - libcontainer container 3dcf08bcd9785fe57c5b002ba3590b8431f277e4890d5cda8a5e82a9570294b5. May 16 16:37:12.835529 systemd[1]: Started cri-containerd-eca5b2caa385919c3501265715f698771fd68141c254503b32d1d8577c8430a3.scope - libcontainer container eca5b2caa385919c3501265715f698771fd68141c254503b32d1d8577c8430a3. 
May 16 16:37:12.838532 kubelet[2351]: I0516 16:37:12.838514 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 16:37:12.839379 kubelet[2351]: E0516 16:37:12.839355 2351 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" May 16 16:37:12.843616 systemd[1]: Started cri-containerd-d3c8add2bf350976184c6c1407c7ea385033f6a1455112f021067932e620435f.scope - libcontainer container d3c8add2bf350976184c6c1407c7ea385033f6a1455112f021067932e620435f. May 16 16:37:12.892229 containerd[1606]: time="2025-05-16T16:37:12.892087323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"3dcf08bcd9785fe57c5b002ba3590b8431f277e4890d5cda8a5e82a9570294b5\"" May 16 16:37:12.893347 kubelet[2351]: E0516 16:37:12.893323 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:12.895851 containerd[1606]: time="2025-05-16T16:37:12.895782925Z" level=info msg="CreateContainer within sandbox \"3dcf08bcd9785fe57c5b002ba3590b8431f277e4890d5cda8a5e82a9570294b5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 16:37:12.902297 containerd[1606]: time="2025-05-16T16:37:12.902256838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"eca5b2caa385919c3501265715f698771fd68141c254503b32d1d8577c8430a3\"" May 16 16:37:12.903199 kubelet[2351]: E0516 16:37:12.903172 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 16 16:37:12.903787 containerd[1606]: time="2025-05-16T16:37:12.903743697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f993af515bacf4ef3bf64bc036c78a26,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3c8add2bf350976184c6c1407c7ea385033f6a1455112f021067932e620435f\"" May 16 16:37:12.904319 kubelet[2351]: E0516 16:37:12.904267 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:12.905930 containerd[1606]: time="2025-05-16T16:37:12.905893079Z" level=info msg="CreateContainer within sandbox \"eca5b2caa385919c3501265715f698771fd68141c254503b32d1d8577c8430a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 16:37:12.906362 containerd[1606]: time="2025-05-16T16:37:12.906338194Z" level=info msg="CreateContainer within sandbox \"d3c8add2bf350976184c6c1407c7ea385033f6a1455112f021067932e620435f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 16:37:12.907833 containerd[1606]: time="2025-05-16T16:37:12.907802981Z" level=info msg="Container db8f25a3a47fba1587f64c3c4d94e6fa4b82f609af1618224c2bc1019c27a801: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:12.916070 containerd[1606]: time="2025-05-16T16:37:12.915972916Z" level=info msg="CreateContainer within sandbox \"3dcf08bcd9785fe57c5b002ba3590b8431f277e4890d5cda8a5e82a9570294b5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"db8f25a3a47fba1587f64c3c4d94e6fa4b82f609af1618224c2bc1019c27a801\"" May 16 16:37:12.916708 containerd[1606]: time="2025-05-16T16:37:12.916670374Z" level=info msg="StartContainer for \"db8f25a3a47fba1587f64c3c4d94e6fa4b82f609af1618224c2bc1019c27a801\"" May 16 16:37:12.917736 containerd[1606]: time="2025-05-16T16:37:12.917703161Z" level=info msg="connecting to shim 
db8f25a3a47fba1587f64c3c4d94e6fa4b82f609af1618224c2bc1019c27a801" address="unix:///run/containerd/s/c7b9ac75c6a856c23fff09025fb5503ba1f6b2c9b057cc554bc0d0927768980b" protocol=ttrpc version=3 May 16 16:37:12.920306 containerd[1606]: time="2025-05-16T16:37:12.920143789Z" level=info msg="Container 25e882cc5ada1e3889277df25edc69f37a4cbd999887f74deb4473c4dd7ebf59: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:12.923270 containerd[1606]: time="2025-05-16T16:37:12.923232222Z" level=info msg="Container af571f5cc4e3b0d9d814bd0fe30a0548a86e56e2215a80adc11f5fd45df89b3c: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:12.930167 containerd[1606]: time="2025-05-16T16:37:12.930128908Z" level=info msg="CreateContainer within sandbox \"d3c8add2bf350976184c6c1407c7ea385033f6a1455112f021067932e620435f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"25e882cc5ada1e3889277df25edc69f37a4cbd999887f74deb4473c4dd7ebf59\"" May 16 16:37:12.930807 containerd[1606]: time="2025-05-16T16:37:12.930639596Z" level=info msg="StartContainer for \"25e882cc5ada1e3889277df25edc69f37a4cbd999887f74deb4473c4dd7ebf59\"" May 16 16:37:12.932486 containerd[1606]: time="2025-05-16T16:37:12.932466173Z" level=info msg="connecting to shim 25e882cc5ada1e3889277df25edc69f37a4cbd999887f74deb4473c4dd7ebf59" address="unix:///run/containerd/s/d502649286faabbefa313bfbfc739095937fbb25d472911fd9f6c7db47bd4138" protocol=ttrpc version=3 May 16 16:37:12.933353 containerd[1606]: time="2025-05-16T16:37:12.933109349Z" level=info msg="CreateContainer within sandbox \"eca5b2caa385919c3501265715f698771fd68141c254503b32d1d8577c8430a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"af571f5cc4e3b0d9d814bd0fe30a0548a86e56e2215a80adc11f5fd45df89b3c\"" May 16 16:37:12.933728 containerd[1606]: time="2025-05-16T16:37:12.933709665Z" level=info msg="StartContainer for \"af571f5cc4e3b0d9d814bd0fe30a0548a86e56e2215a80adc11f5fd45df89b3c\"" May 16 16:37:12.934932 
containerd[1606]: time="2025-05-16T16:37:12.934913493Z" level=info msg="connecting to shim af571f5cc4e3b0d9d814bd0fe30a0548a86e56e2215a80adc11f5fd45df89b3c" address="unix:///run/containerd/s/68918294fac9d53945f0aeb7fffef625809746becaff6704156901f70c459133" protocol=ttrpc version=3 May 16 16:37:12.938457 systemd[1]: Started cri-containerd-db8f25a3a47fba1587f64c3c4d94e6fa4b82f609af1618224c2bc1019c27a801.scope - libcontainer container db8f25a3a47fba1587f64c3c4d94e6fa4b82f609af1618224c2bc1019c27a801. May 16 16:37:12.954431 systemd[1]: Started cri-containerd-25e882cc5ada1e3889277df25edc69f37a4cbd999887f74deb4473c4dd7ebf59.scope - libcontainer container 25e882cc5ada1e3889277df25edc69f37a4cbd999887f74deb4473c4dd7ebf59. May 16 16:37:12.959419 systemd[1]: Started cri-containerd-af571f5cc4e3b0d9d814bd0fe30a0548a86e56e2215a80adc11f5fd45df89b3c.scope - libcontainer container af571f5cc4e3b0d9d814bd0fe30a0548a86e56e2215a80adc11f5fd45df89b3c. May 16 16:37:13.005611 containerd[1606]: time="2025-05-16T16:37:13.005493180Z" level=info msg="StartContainer for \"db8f25a3a47fba1587f64c3c4d94e6fa4b82f609af1618224c2bc1019c27a801\" returns successfully" May 16 16:37:13.016525 containerd[1606]: time="2025-05-16T16:37:13.016439052Z" level=info msg="StartContainer for \"25e882cc5ada1e3889277df25edc69f37a4cbd999887f74deb4473c4dd7ebf59\" returns successfully" May 16 16:37:13.029323 containerd[1606]: time="2025-05-16T16:37:13.029249490Z" level=info msg="StartContainer for \"af571f5cc4e3b0d9d814bd0fe30a0548a86e56e2215a80adc11f5fd45df89b3c\" returns successfully" May 16 16:37:13.641667 kubelet[2351]: I0516 16:37:13.641626 2351 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 16:37:13.724562 kubelet[2351]: E0516 16:37:13.724444 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:13.727293 kubelet[2351]: E0516 16:37:13.727152 2351 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:13.736465 kubelet[2351]: E0516 16:37:13.736424 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:14.065783 kubelet[2351]: E0516 16:37:14.065365 2351 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 16:37:14.168759 kubelet[2351]: I0516 16:37:14.168713 2351 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 16:37:14.168759 kubelet[2351]: E0516 16:37:14.168745 2351 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 16:37:14.177901 kubelet[2351]: E0516 16:37:14.177856 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:14.278795 kubelet[2351]: E0516 16:37:14.278749 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:14.379413 kubelet[2351]: E0516 16:37:14.379274 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:14.480386 kubelet[2351]: E0516 16:37:14.480343 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:14.581076 kubelet[2351]: E0516 16:37:14.581022 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:14.682176 kubelet[2351]: E0516 16:37:14.682041 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 
16:37:14.737027 kubelet[2351]: E0516 16:37:14.736995 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:14.783243 kubelet[2351]: E0516 16:37:14.783191 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:14.883826 kubelet[2351]: E0516 16:37:14.883756 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:14.984603 kubelet[2351]: E0516 16:37:14.984435 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:15.085048 kubelet[2351]: E0516 16:37:15.084989 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:15.185161 kubelet[2351]: E0516 16:37:15.185104 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:15.285764 kubelet[2351]: E0516 16:37:15.285637 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:15.386228 kubelet[2351]: E0516 16:37:15.386174 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:15.486962 kubelet[2351]: E0516 16:37:15.486907 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:15.587623 kubelet[2351]: E0516 16:37:15.587498 2351 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:15.680086 kubelet[2351]: I0516 16:37:15.680062 2351 apiserver.go:52] "Watching apiserver" May 16 16:37:15.693965 kubelet[2351]: I0516 16:37:15.693919 2351 desired_state_of_world_populator.go:155] 
"Finished populating initial desired state of world" May 16 16:37:15.744814 kubelet[2351]: E0516 16:37:15.744786 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:16.308845 systemd[1]: Reload requested from client PID 2624 ('systemctl') (unit session-7.scope)... May 16 16:37:16.308863 systemd[1]: Reloading... May 16 16:37:16.387318 zram_generator::config[2667]: No configuration found. May 16 16:37:16.739471 kubelet[2351]: E0516 16:37:16.739374 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:16.878693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 16:37:17.005173 systemd[1]: Reloading finished in 695 ms. May 16 16:37:17.032889 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:37:17.061909 systemd[1]: kubelet.service: Deactivated successfully. May 16 16:37:17.062315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:37:17.062379 systemd[1]: kubelet.service: Consumed 730ms CPU time, 128.3M memory peak. May 16 16:37:17.064500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:37:17.271200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:37:17.275694 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 16:37:17.319992 kubelet[2712]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:37:17.319992 kubelet[2712]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 16:37:17.319992 kubelet[2712]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:37:17.320605 kubelet[2712]: I0516 16:37:17.320094 2712 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 16:37:17.329266 kubelet[2712]: I0516 16:37:17.329217 2712 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 16:37:17.329266 kubelet[2712]: I0516 16:37:17.329248 2712 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 16:37:17.329628 kubelet[2712]: I0516 16:37:17.329610 2712 server.go:934] "Client rotation is on, will bootstrap in background" May 16 16:37:17.331984 kubelet[2712]: I0516 16:37:17.331948 2712 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 16:37:17.335106 kubelet[2712]: I0516 16:37:17.335072 2712 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 16:37:17.338940 kubelet[2712]: I0516 16:37:17.338909 2712 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 16:37:17.343840 kubelet[2712]: I0516 16:37:17.343817 2712 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 16:37:17.343934 kubelet[2712]: I0516 16:37:17.343906 2712 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 16:37:17.344074 kubelet[2712]: I0516 16:37:17.344037 2712 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 16:37:17.344208 kubelet[2712]: I0516 16:37:17.344062 2712 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 16 16:37:17.344311 kubelet[2712]: I0516 16:37:17.344213 2712 topology_manager.go:138] "Creating topology manager with none policy" May 16 16:37:17.344311 kubelet[2712]: I0516 16:37:17.344221 2712 container_manager_linux.go:300] "Creating device plugin manager" May 16 16:37:17.344311 kubelet[2712]: I0516 16:37:17.344244 2712 state_mem.go:36] "Initialized new in-memory state store" May 16 16:37:17.344380 kubelet[2712]: I0516 16:37:17.344366 2712 kubelet.go:408] "Attempting to sync node with API server" May 16 16:37:17.344380 kubelet[2712]: I0516 16:37:17.344377 2712 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 16:37:17.344427 kubelet[2712]: I0516 16:37:17.344407 2712 kubelet.go:314] "Adding apiserver pod source" May 16 16:37:17.344427 kubelet[2712]: I0516 16:37:17.344417 2712 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 16:37:17.346780 kubelet[2712]: I0516 16:37:17.344788 2712 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 16:37:17.346892 kubelet[2712]: I0516 16:37:17.346870 2712 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 16:37:17.347734 kubelet[2712]: I0516 16:37:17.347714 2712 server.go:1274] "Started kubelet" May 16 16:37:17.350376 kubelet[2712]: I0516 16:37:17.350362 2712 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 16:37:17.350651 kubelet[2712]: I0516 16:37:17.350625 2712 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 16:37:17.351750 kubelet[2712]: I0516 16:37:17.350805 2712 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 16:37:17.351962 kubelet[2712]: I0516 16:37:17.351938 2712 server.go:449] "Adding debug handlers to kubelet server" May 16 16:37:17.353388 
kubelet[2712]: I0516 16:37:17.353327 2712 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 16:37:17.355702 kubelet[2712]: I0516 16:37:17.355672 2712 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 16:37:17.360563 kubelet[2712]: I0516 16:37:17.360540 2712 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 16:37:17.360734 kubelet[2712]: E0516 16:37:17.360714 2712 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:37:17.361797 kubelet[2712]: I0516 16:37:17.361774 2712 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 16:37:17.364353 kubelet[2712]: E0516 16:37:17.364322 2712 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 16:37:17.366921 kubelet[2712]: I0516 16:37:17.366317 2712 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 16:37:17.367052 kubelet[2712]: I0516 16:37:17.367033 2712 reconciler.go:26] "Reconciler: start to sync state" May 16 16:37:17.367269 kubelet[2712]: I0516 16:37:17.367246 2712 factory.go:221] Registration of the containerd container factory successfully May 16 16:37:17.367269 kubelet[2712]: I0516 16:37:17.367264 2712 factory.go:221] Registration of the systemd container factory successfully May 16 16:37:17.368984 kubelet[2712]: I0516 16:37:17.368955 2712 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 16:37:17.370181 kubelet[2712]: I0516 16:37:17.370159 2712 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 16:37:17.370252 kubelet[2712]: I0516 16:37:17.370243 2712 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 16:37:17.370343 kubelet[2712]: I0516 16:37:17.370333 2712 kubelet.go:2321] "Starting kubelet main sync loop" May 16 16:37:17.370429 kubelet[2712]: E0516 16:37:17.370415 2712 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 16:37:17.402262 kubelet[2712]: I0516 16:37:17.402228 2712 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 16:37:17.402262 kubelet[2712]: I0516 16:37:17.402246 2712 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 16:37:17.402262 kubelet[2712]: I0516 16:37:17.402265 2712 state_mem.go:36] "Initialized new in-memory state store" May 16 16:37:17.402510 kubelet[2712]: I0516 16:37:17.402491 2712 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 16:37:17.402580 kubelet[2712]: I0516 16:37:17.402505 2712 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 16:37:17.402580 kubelet[2712]: I0516 16:37:17.402532 2712 policy_none.go:49] "None policy: Start" May 16 16:37:17.403326 kubelet[2712]: I0516 16:37:17.403182 2712 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 16:37:17.403326 kubelet[2712]: I0516 16:37:17.403214 2712 state_mem.go:35] "Initializing new in-memory state store" May 16 16:37:17.403492 kubelet[2712]: I0516 16:37:17.403439 2712 state_mem.go:75] "Updated machine memory state" May 16 16:37:17.408335 kubelet[2712]: I0516 16:37:17.408319 2712 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 16:37:17.408829 kubelet[2712]: I0516 16:37:17.408808 2712 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 16:37:17.408974 kubelet[2712]: I0516 16:37:17.408900 2712 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 16:37:17.409184 kubelet[2712]: I0516 16:37:17.409152 2712 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 16:37:17.477188 kubelet[2712]: E0516 16:37:17.477136 2712 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 16:37:17.515358 kubelet[2712]: I0516 16:37:17.515326 2712 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 16:37:17.521850 kubelet[2712]: I0516 16:37:17.521748 2712 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 16 16:37:17.521850 kubelet[2712]: I0516 16:37:17.521842 2712 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 16:37:17.568406 kubelet[2712]: I0516 16:37:17.568361 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:17.568406 kubelet[2712]: I0516 16:37:17.568395 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:17.568406 kubelet[2712]: I0516 16:37:17.568420 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " 
pod="kube-system/kube-scheduler-localhost" May 16 16:37:17.568622 kubelet[2712]: I0516 16:37:17.568436 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f993af515bacf4ef3bf64bc036c78a26-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f993af515bacf4ef3bf64bc036c78a26\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:17.568622 kubelet[2712]: I0516 16:37:17.568491 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f993af515bacf4ef3bf64bc036c78a26-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f993af515bacf4ef3bf64bc036c78a26\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:17.568622 kubelet[2712]: I0516 16:37:17.568547 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:17.568622 kubelet[2712]: I0516 16:37:17.568566 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:17.568622 kubelet[2712]: I0516 16:37:17.568584 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:37:17.568730 kubelet[2712]: I0516 16:37:17.568599 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f993af515bacf4ef3bf64bc036c78a26-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f993af515bacf4ef3bf64bc036c78a26\") " pod="kube-system/kube-apiserver-localhost" May 16 16:37:17.776630 kubelet[2712]: E0516 16:37:17.776499 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:17.777693 kubelet[2712]: E0516 16:37:17.777673 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:17.777828 kubelet[2712]: E0516 16:37:17.777803 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:18.347518 kubelet[2712]: I0516 16:37:18.347444 2712 apiserver.go:52] "Watching apiserver" May 16 16:37:18.367041 kubelet[2712]: I0516 16:37:18.366967 2712 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 16:37:18.381772 kubelet[2712]: E0516 16:37:18.381738 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:18.382396 kubelet[2712]: E0516 16:37:18.382315 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:18.396309 kubelet[2712]: E0516 16:37:18.396143 2712 kubelet.go:1915] 
"Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 16:37:18.396436 kubelet[2712]: E0516 16:37:18.396359 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:18.403111 kubelet[2712]: I0516 16:37:18.403041 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.403014307 podStartE2EDuration="1.403014307s" podCreationTimestamp="2025-05-16 16:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:37:18.402707502 +0000 UTC m=+1.121600323" watchObservedRunningTime="2025-05-16 16:37:18.403014307 +0000 UTC m=+1.121907118" May 16 16:37:18.410174 kubelet[2712]: I0516 16:37:18.410100 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.410078488 podStartE2EDuration="1.410078488s" podCreationTimestamp="2025-05-16 16:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:37:18.41006906 +0000 UTC m=+1.128961871" watchObservedRunningTime="2025-05-16 16:37:18.410078488 +0000 UTC m=+1.128971299" May 16 16:37:18.416439 kubelet[2712]: I0516 16:37:18.416384 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.416372994 podStartE2EDuration="3.416372994s" podCreationTimestamp="2025-05-16 16:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:37:18.416194129 +0000 UTC m=+1.135086940" 
watchObservedRunningTime="2025-05-16 16:37:18.416372994 +0000 UTC m=+1.135265805" May 16 16:37:19.382558 kubelet[2712]: E0516 16:37:19.382426 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:19.382558 kubelet[2712]: E0516 16:37:19.382459 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:22.341505 kubelet[2712]: I0516 16:37:22.341436 2712 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 16:37:22.341997 containerd[1606]: time="2025-05-16T16:37:22.341805343Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 16:37:22.342338 kubelet[2712]: I0516 16:37:22.342005 2712 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 16:37:23.250824 systemd[1]: Created slice kubepods-besteffort-podc38c8505_a4cf_436b_aa6d_94194afd2a4f.slice - libcontainer container kubepods-besteffort-podc38c8505_a4cf_436b_aa6d_94194afd2a4f.slice. 
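The repeated "Nameserver limits exceeded" errors in the entries above come from the kubelet capping resolv.conf at three nameservers (matching the glibc resolver's MAXNS limit); extra servers are dropped and the "applied nameserver line" records the survivors. A minimal sketch of that truncation follows; `applied_nameservers` is an illustrative helper, not kubelet source:

```python
# Illustrative sketch of the kubelet's nameserver cap (not kubelet code).
# Both glibc's resolver (MAXNS) and the kubelet honor at most 3 nameservers;
# extras are dropped, producing the dns.go:153 events logged above.

MAX_NAMESERVERS = 3  # kubelet validation limit, mirroring glibc MAXNS

def applied_nameservers(resolv_conf: str) -> list[str]:
    """Return the nameservers the resolver would actually use."""
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAX_NAMESERVERS]

conf = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
# Matches the "applied nameserver line" in the log: 1.1.1.1 1.0.0.1 8.8.8.8
print(" ".join(applied_nameservers(conf)))
```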
May 16 16:37:23.300116 kubelet[2712]: I0516 16:37:23.300073 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c38c8505-a4cf-436b-aa6d-94194afd2a4f-kube-proxy\") pod \"kube-proxy-pkpn7\" (UID: \"c38c8505-a4cf-436b-aa6d-94194afd2a4f\") " pod="kube-system/kube-proxy-pkpn7" May 16 16:37:23.300116 kubelet[2712]: I0516 16:37:23.300114 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c38c8505-a4cf-436b-aa6d-94194afd2a4f-xtables-lock\") pod \"kube-proxy-pkpn7\" (UID: \"c38c8505-a4cf-436b-aa6d-94194afd2a4f\") " pod="kube-system/kube-proxy-pkpn7" May 16 16:37:23.300116 kubelet[2712]: I0516 16:37:23.300129 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c38c8505-a4cf-436b-aa6d-94194afd2a4f-lib-modules\") pod \"kube-proxy-pkpn7\" (UID: \"c38c8505-a4cf-436b-aa6d-94194afd2a4f\") " pod="kube-system/kube-proxy-pkpn7" May 16 16:37:23.300347 kubelet[2712]: I0516 16:37:23.300150 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfgw\" (UniqueName: \"kubernetes.io/projected/c38c8505-a4cf-436b-aa6d-94194afd2a4f-kube-api-access-hpfgw\") pod \"kube-proxy-pkpn7\" (UID: \"c38c8505-a4cf-436b-aa6d-94194afd2a4f\") " pod="kube-system/kube-proxy-pkpn7" May 16 16:37:23.463770 systemd[1]: Created slice kubepods-besteffort-podcbf9a8ff_6d32_4dc7_847d_dc6994dd45cb.slice - libcontainer container kubepods-besteffort-podcbf9a8ff_6d32_4dc7_847d_dc6994dd45cb.slice. 
May 16 16:37:23.501298 kubelet[2712]: I0516 16:37:23.501168 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cbf9a8ff-6d32-4dc7-847d-dc6994dd45cb-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-52gnf\" (UID: \"cbf9a8ff-6d32-4dc7-847d-dc6994dd45cb\") " pod="tigera-operator/tigera-operator-7c5755cdcb-52gnf" May 16 16:37:23.501298 kubelet[2712]: I0516 16:37:23.501208 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrx5b\" (UniqueName: \"kubernetes.io/projected/cbf9a8ff-6d32-4dc7-847d-dc6994dd45cb-kube-api-access-zrx5b\") pod \"tigera-operator-7c5755cdcb-52gnf\" (UID: \"cbf9a8ff-6d32-4dc7-847d-dc6994dd45cb\") " pod="tigera-operator/tigera-operator-7c5755cdcb-52gnf" May 16 16:37:23.561557 kubelet[2712]: E0516 16:37:23.561496 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:23.562473 containerd[1606]: time="2025-05-16T16:37:23.562416642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pkpn7,Uid:c38c8505-a4cf-436b-aa6d-94194afd2a4f,Namespace:kube-system,Attempt:0,}" May 16 16:37:23.584256 containerd[1606]: time="2025-05-16T16:37:23.584199681Z" level=info msg="connecting to shim a89be62dedce4293c6fc785b524f11675bc007a34da98550ce91eebea3ff1e4a" address="unix:///run/containerd/s/c591ac3d851fc6096197f66c0d0b5155b7cc11ee941a7032f053f29ca77a3d58" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:23.620433 systemd[1]: Started cri-containerd-a89be62dedce4293c6fc785b524f11675bc007a34da98550ce91eebea3ff1e4a.scope - libcontainer container a89be62dedce4293c6fc785b524f11675bc007a34da98550ce91eebea3ff1e4a. 
May 16 16:37:23.647462 containerd[1606]: time="2025-05-16T16:37:23.647403515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pkpn7,Uid:c38c8505-a4cf-436b-aa6d-94194afd2a4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a89be62dedce4293c6fc785b524f11675bc007a34da98550ce91eebea3ff1e4a\"" May 16 16:37:23.648253 kubelet[2712]: E0516 16:37:23.648224 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:23.650539 containerd[1606]: time="2025-05-16T16:37:23.650489956Z" level=info msg="CreateContainer within sandbox \"a89be62dedce4293c6fc785b524f11675bc007a34da98550ce91eebea3ff1e4a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 16:37:23.661359 containerd[1606]: time="2025-05-16T16:37:23.661318269Z" level=info msg="Container ebe1e4b12a8c42ba11bcfc2feef5eaf19795c9085ded66b3ee89145f5ec7f925: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:23.674137 containerd[1606]: time="2025-05-16T16:37:23.674091204Z" level=info msg="CreateContainer within sandbox \"a89be62dedce4293c6fc785b524f11675bc007a34da98550ce91eebea3ff1e4a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ebe1e4b12a8c42ba11bcfc2feef5eaf19795c9085ded66b3ee89145f5ec7f925\"" May 16 16:37:23.675836 containerd[1606]: time="2025-05-16T16:37:23.674721537Z" level=info msg="StartContainer for \"ebe1e4b12a8c42ba11bcfc2feef5eaf19795c9085ded66b3ee89145f5ec7f925\"" May 16 16:37:23.676310 containerd[1606]: time="2025-05-16T16:37:23.676265229Z" level=info msg="connecting to shim ebe1e4b12a8c42ba11bcfc2feef5eaf19795c9085ded66b3ee89145f5ec7f925" address="unix:///run/containerd/s/c591ac3d851fc6096197f66c0d0b5155b7cc11ee941a7032f053f29ca77a3d58" protocol=ttrpc version=3 May 16 16:37:23.699484 systemd[1]: Started cri-containerd-ebe1e4b12a8c42ba11bcfc2feef5eaf19795c9085ded66b3ee89145f5ec7f925.scope - libcontainer 
container ebe1e4b12a8c42ba11bcfc2feef5eaf19795c9085ded66b3ee89145f5ec7f925. May 16 16:37:23.742765 containerd[1606]: time="2025-05-16T16:37:23.742722286Z" level=info msg="StartContainer for \"ebe1e4b12a8c42ba11bcfc2feef5eaf19795c9085ded66b3ee89145f5ec7f925\" returns successfully" May 16 16:37:23.767692 containerd[1606]: time="2025-05-16T16:37:23.767554242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-52gnf,Uid:cbf9a8ff-6d32-4dc7-847d-dc6994dd45cb,Namespace:tigera-operator,Attempt:0,}" May 16 16:37:23.787797 containerd[1606]: time="2025-05-16T16:37:23.787749295Z" level=info msg="connecting to shim 8208767f20a7921818c2fc6fee5a22e745732e0230510d0e5b5d08ae12a257b9" address="unix:///run/containerd/s/f4910f2251e18db2d5438f203574bd7889ec1e0a51800dd6455193dedb8cd1dd" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:23.815420 systemd[1]: Started cri-containerd-8208767f20a7921818c2fc6fee5a22e745732e0230510d0e5b5d08ae12a257b9.scope - libcontainer container 8208767f20a7921818c2fc6fee5a22e745732e0230510d0e5b5d08ae12a257b9. 
May 16 16:37:23.825213 kubelet[2712]: E0516 16:37:23.824878 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:23.859991 containerd[1606]: time="2025-05-16T16:37:23.859952892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-52gnf,Uid:cbf9a8ff-6d32-4dc7-847d-dc6994dd45cb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8208767f20a7921818c2fc6fee5a22e745732e0230510d0e5b5d08ae12a257b9\"" May 16 16:37:23.861745 containerd[1606]: time="2025-05-16T16:37:23.861708912Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 16 16:37:24.393339 kubelet[2712]: E0516 16:37:24.392889 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:24.393339 kubelet[2712]: E0516 16:37:24.393027 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:24.411224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717351908.mount: Deactivated successfully. May 16 16:37:25.277693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231384549.mount: Deactivated successfully. 
May 16 16:37:25.877487 containerd[1606]: time="2025-05-16T16:37:25.877412773Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:25.878092 containerd[1606]: time="2025-05-16T16:37:25.878038023Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 16 16:37:25.879123 containerd[1606]: time="2025-05-16T16:37:25.879088679Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:25.881054 containerd[1606]: time="2025-05-16T16:37:25.881017520Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:25.881588 containerd[1606]: time="2025-05-16T16:37:25.881544942Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.019791253s" May 16 16:37:25.881623 containerd[1606]: time="2025-05-16T16:37:25.881588215Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 16 16:37:25.883803 containerd[1606]: time="2025-05-16T16:37:25.883578985Z" level=info msg="CreateContainer within sandbox \"8208767f20a7921818c2fc6fee5a22e745732e0230510d0e5b5d08ae12a257b9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 16:37:25.893159 containerd[1606]: time="2025-05-16T16:37:25.893107399Z" level=info msg="Container 
4113baa5b88287a0c8012779aae9fac9590cc5dba22c6dae1cf94b5c92f74d84: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:25.900198 containerd[1606]: time="2025-05-16T16:37:25.900103634Z" level=info msg="CreateContainer within sandbox \"8208767f20a7921818c2fc6fee5a22e745732e0230510d0e5b5d08ae12a257b9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4113baa5b88287a0c8012779aae9fac9590cc5dba22c6dae1cf94b5c92f74d84\"" May 16 16:37:25.900642 containerd[1606]: time="2025-05-16T16:37:25.900614113Z" level=info msg="StartContainer for \"4113baa5b88287a0c8012779aae9fac9590cc5dba22c6dae1cf94b5c92f74d84\"" May 16 16:37:25.901575 containerd[1606]: time="2025-05-16T16:37:25.901540380Z" level=info msg="connecting to shim 4113baa5b88287a0c8012779aae9fac9590cc5dba22c6dae1cf94b5c92f74d84" address="unix:///run/containerd/s/f4910f2251e18db2d5438f203574bd7889ec1e0a51800dd6455193dedb8cd1dd" protocol=ttrpc version=3 May 16 16:37:25.956416 systemd[1]: Started cri-containerd-4113baa5b88287a0c8012779aae9fac9590cc5dba22c6dae1cf94b5c92f74d84.scope - libcontainer container 4113baa5b88287a0c8012779aae9fac9590cc5dba22c6dae1cf94b5c92f74d84. 
May 16 16:37:25.986734 containerd[1606]: time="2025-05-16T16:37:25.986688849Z" level=info msg="StartContainer for \"4113baa5b88287a0c8012779aae9fac9590cc5dba22c6dae1cf94b5c92f74d84\" returns successfully" May 16 16:37:26.310912 kubelet[2712]: E0516 16:37:26.310790 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:26.322555 kubelet[2712]: I0516 16:37:26.322474 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pkpn7" podStartSLOduration=3.322453836 podStartE2EDuration="3.322453836s" podCreationTimestamp="2025-05-16 16:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:37:24.406313546 +0000 UTC m=+7.125206357" watchObservedRunningTime="2025-05-16 16:37:26.322453836 +0000 UTC m=+9.041346647" May 16 16:37:26.397713 kubelet[2712]: E0516 16:37:26.397659 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:29.269936 kubelet[2712]: E0516 16:37:29.269903 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:29.280744 kubelet[2712]: I0516 16:37:29.280692 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-52gnf" podStartSLOduration=4.259413254 podStartE2EDuration="6.280677478s" podCreationTimestamp="2025-05-16 16:37:23 +0000 UTC" firstStartedPulling="2025-05-16 16:37:23.861108918 +0000 UTC m=+6.580001729" lastFinishedPulling="2025-05-16 16:37:25.882373142 +0000 UTC m=+8.601265953" observedRunningTime="2025-05-16 16:37:26.411414931 +0000 UTC 
m=+9.130307742" watchObservedRunningTime="2025-05-16 16:37:29.280677478 +0000 UTC m=+11.999570279" May 16 16:37:31.056179 sudo[1800]: pam_unix(sudo:session): session closed for user root May 16 16:37:31.063306 sshd[1799]: Connection closed by 10.0.0.1 port 51898 May 16 16:37:31.060882 sshd-session[1797]: pam_unix(sshd:session): session closed for user core May 16 16:37:31.069318 systemd[1]: sshd@6-10.0.0.37:22-10.0.0.1:51898.service: Deactivated successfully. May 16 16:37:31.074022 systemd[1]: session-7.scope: Deactivated successfully. May 16 16:37:31.074250 systemd[1]: session-7.scope: Consumed 4.711s CPU time, 226.6M memory peak. May 16 16:37:31.075912 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit. May 16 16:37:31.080691 systemd-logind[1582]: Removed session 7. May 16 16:37:33.440445 systemd[1]: Created slice kubepods-besteffort-pod5b78a628_03ba_4b5e_9474_21db4077325d.slice - libcontainer container kubepods-besteffort-pod5b78a628_03ba_4b5e_9474_21db4077325d.slice. 
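The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling). A quick check against the tigera-operator timestamps from the log, using only the standard library (this reconstructs the arithmetic, not the kubelet's implementation):

```python
from datetime import datetime

def ts(s: str) -> datetime:
    # Log timestamps look like "2025-05-16 16:37:26.411414931 +0000 UTC";
    # truncate nanoseconds to microseconds for datetime.fromisoformat.
    date, time_, _, _ = s.split()
    return datetime.fromisoformat(f"{date}T{time_[:15]}+00:00")

creation       = ts("2025-05-16 16:37:23 +0000 UTC")
first_pull     = ts("2025-05-16 16:37:23.861108918 +0000 UTC")
last_pull      = ts("2025-05-16 16:37:25.882373142 +0000 UTC")
watch_observed = ts("2025-05-16 16:37:29.280677478 +0000 UTC")

e2e = (watch_observed - creation).total_seconds()
slo = e2e - (last_pull - first_pull).total_seconds()

# Agrees (to microsecond rounding) with the logged
# podStartE2EDuration="6.280677478s" and podStartSLOduration=4.259413254
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
```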
May 16 16:37:33.468670 kubelet[2712]: I0516 16:37:33.468604 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9dv\" (UniqueName: \"kubernetes.io/projected/5b78a628-03ba-4b5e-9474-21db4077325d-kube-api-access-4d9dv\") pod \"calico-typha-77898f4f66-pjq27\" (UID: \"5b78a628-03ba-4b5e-9474-21db4077325d\") " pod="calico-system/calico-typha-77898f4f66-pjq27" May 16 16:37:33.468670 kubelet[2712]: I0516 16:37:33.468654 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b78a628-03ba-4b5e-9474-21db4077325d-tigera-ca-bundle\") pod \"calico-typha-77898f4f66-pjq27\" (UID: \"5b78a628-03ba-4b5e-9474-21db4077325d\") " pod="calico-system/calico-typha-77898f4f66-pjq27" May 16 16:37:33.468670 kubelet[2712]: I0516 16:37:33.468674 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b78a628-03ba-4b5e-9474-21db4077325d-typha-certs\") pod \"calico-typha-77898f4f66-pjq27\" (UID: \"5b78a628-03ba-4b5e-9474-21db4077325d\") " pod="calico-system/calico-typha-77898f4f66-pjq27" May 16 16:37:33.746609 kubelet[2712]: E0516 16:37:33.746225 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:33.747266 containerd[1606]: time="2025-05-16T16:37:33.747170297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77898f4f66-pjq27,Uid:5b78a628-03ba-4b5e-9474-21db4077325d,Namespace:calico-system,Attempt:0,}" May 16 16:37:33.884835 systemd[1]: Created slice kubepods-besteffort-pod43c99158_22f0_4c8b_b55f_94b88138ab68.slice - libcontainer container kubepods-besteffort-pod43c99158_22f0_4c8b_b55f_94b88138ab68.slice. 
May 16 16:37:33.918976 containerd[1606]: time="2025-05-16T16:37:33.918903081Z" level=info msg="connecting to shim 5f891435a6403e03e3b86f7ed748fd1687507afbe91945b4642b0acab2eab0ba" address="unix:///run/containerd/s/b719c471e0750a2c6be53b776a69d119dc24340b86820c023896ecd9bb7d44a3" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:33.948840 systemd[1]: Started cri-containerd-5f891435a6403e03e3b86f7ed748fd1687507afbe91945b4642b0acab2eab0ba.scope - libcontainer container 5f891435a6403e03e3b86f7ed748fd1687507afbe91945b4642b0acab2eab0ba. May 16 16:37:33.974541 kubelet[2712]: I0516 16:37:33.974393 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/43c99158-22f0-4c8b-b55f-94b88138ab68-var-run-calico\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:33.974709 kubelet[2712]: I0516 16:37:33.974579 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43c99158-22f0-4c8b-b55f-94b88138ab68-xtables-lock\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:33.974709 kubelet[2712]: I0516 16:37:33.974663 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/43c99158-22f0-4c8b-b55f-94b88138ab68-cni-log-dir\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:33.974773 kubelet[2712]: I0516 16:37:33.974736 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/43c99158-22f0-4c8b-b55f-94b88138ab68-var-lib-calico\") pod \"calico-node-brmm8\" (UID: 
\"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:33.974773 kubelet[2712]: I0516 16:37:33.974764 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/43c99158-22f0-4c8b-b55f-94b88138ab68-policysync\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:33.974874 kubelet[2712]: I0516 16:37:33.974839 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/43c99158-22f0-4c8b-b55f-94b88138ab68-cni-bin-dir\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:33.974969 kubelet[2712]: I0516 16:37:33.974932 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/43c99158-22f0-4c8b-b55f-94b88138ab68-flexvol-driver-host\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:33.975118 kubelet[2712]: I0516 16:37:33.975038 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/43c99158-22f0-4c8b-b55f-94b88138ab68-node-certs\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:33.975267 kubelet[2712]: I0516 16:37:33.975198 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/43c99158-22f0-4c8b-b55f-94b88138ab68-cni-net-dir\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 
16 16:37:33.975344 kubelet[2712]: I0516 16:37:33.975313 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjgr\" (UniqueName: \"kubernetes.io/projected/43c99158-22f0-4c8b-b55f-94b88138ab68-kube-api-access-7vjgr\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:33.975447 kubelet[2712]: I0516 16:37:33.975400 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43c99158-22f0-4c8b-b55f-94b88138ab68-tigera-ca-bundle\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:33.975598 kubelet[2712]: I0516 16:37:33.975570 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43c99158-22f0-4c8b-b55f-94b88138ab68-lib-modules\") pod \"calico-node-brmm8\" (UID: \"43c99158-22f0-4c8b-b55f-94b88138ab68\") " pod="calico-system/calico-node-brmm8" May 16 16:37:34.071306 containerd[1606]: time="2025-05-16T16:37:34.071010996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77898f4f66-pjq27,Uid:5b78a628-03ba-4b5e-9474-21db4077325d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f891435a6403e03e3b86f7ed748fd1687507afbe91945b4642b0acab2eab0ba\"" May 16 16:37:34.073707 kubelet[2712]: E0516 16:37:34.073651 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:34.075857 containerd[1606]: time="2025-05-16T16:37:34.075780417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 16 16:37:34.081845 kubelet[2712]: E0516 16:37:34.081690 2712 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cf5pq" podUID="4a53a0ed-691f-460a-8f34-788759fa4d73" May 16 16:37:34.082491 kubelet[2712]: E0516 16:37:34.082445 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.082553 kubelet[2712]: W0516 16:37:34.082490 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.082553 kubelet[2712]: E0516 16:37:34.082521 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.084716 kubelet[2712]: E0516 16:37:34.084687 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.084716 kubelet[2712]: W0516 16:37:34.084710 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.084794 kubelet[2712]: E0516 16:37:34.084728 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.097032 kubelet[2712]: E0516 16:37:34.096978 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.097032 kubelet[2712]: W0516 16:37:34.097003 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.097032 kubelet[2712]: E0516 16:37:34.097025 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.166569 kubelet[2712]: E0516 16:37:34.166517 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.166569 kubelet[2712]: W0516 16:37:34.166548 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.166569 kubelet[2712]: E0516 16:37:34.166576 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.166923 kubelet[2712]: E0516 16:37:34.166895 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.167002 kubelet[2712]: W0516 16:37:34.166916 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.167002 kubelet[2712]: E0516 16:37:34.166962 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.167465 kubelet[2712]: E0516 16:37:34.167335 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.167465 kubelet[2712]: W0516 16:37:34.167353 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.167465 kubelet[2712]: E0516 16:37:34.167366 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.167697 kubelet[2712]: E0516 16:37:34.167653 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.167697 kubelet[2712]: W0516 16:37:34.167668 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.167697 kubelet[2712]: E0516 16:37:34.167708 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.168026 kubelet[2712]: E0516 16:37:34.168012 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.168062 kubelet[2712]: W0516 16:37:34.168025 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.168062 kubelet[2712]: E0516 16:37:34.168038 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.168335 kubelet[2712]: E0516 16:37:34.168306 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.168397 kubelet[2712]: W0516 16:37:34.168335 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.168397 kubelet[2712]: E0516 16:37:34.168369 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.168666 kubelet[2712]: E0516 16:37:34.168645 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.168666 kubelet[2712]: W0516 16:37:34.168660 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.168765 kubelet[2712]: E0516 16:37:34.168672 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.168854 kubelet[2712]: E0516 16:37:34.168838 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.168854 kubelet[2712]: W0516 16:37:34.168849 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.168907 kubelet[2712]: E0516 16:37:34.168858 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.169015 kubelet[2712]: E0516 16:37:34.169000 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.169015 kubelet[2712]: W0516 16:37:34.169010 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.169077 kubelet[2712]: E0516 16:37:34.169018 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.169175 kubelet[2712]: E0516 16:37:34.169160 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.169175 kubelet[2712]: W0516 16:37:34.169170 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.169227 kubelet[2712]: E0516 16:37:34.169179 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.169364 kubelet[2712]: E0516 16:37:34.169347 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.169364 kubelet[2712]: W0516 16:37:34.169360 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.169414 kubelet[2712]: E0516 16:37:34.169369 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.169561 kubelet[2712]: E0516 16:37:34.169537 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.169561 kubelet[2712]: W0516 16:37:34.169550 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.169561 kubelet[2712]: E0516 16:37:34.169559 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.169721 kubelet[2712]: E0516 16:37:34.169704 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.169721 kubelet[2712]: W0516 16:37:34.169718 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.169767 kubelet[2712]: E0516 16:37:34.169727 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.169895 kubelet[2712]: E0516 16:37:34.169880 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.169895 kubelet[2712]: W0516 16:37:34.169890 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.169948 kubelet[2712]: E0516 16:37:34.169898 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.170039 kubelet[2712]: E0516 16:37:34.170025 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.170039 kubelet[2712]: W0516 16:37:34.170035 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.170097 kubelet[2712]: E0516 16:37:34.170042 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.170181 kubelet[2712]: E0516 16:37:34.170167 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.170181 kubelet[2712]: W0516 16:37:34.170177 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.170238 kubelet[2712]: E0516 16:37:34.170185 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.170410 kubelet[2712]: E0516 16:37:34.170378 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.170410 kubelet[2712]: W0516 16:37:34.170394 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.170410 kubelet[2712]: E0516 16:37:34.170404 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.170601 kubelet[2712]: E0516 16:37:34.170586 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.170601 kubelet[2712]: W0516 16:37:34.170597 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.170662 kubelet[2712]: E0516 16:37:34.170606 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.170782 kubelet[2712]: E0516 16:37:34.170768 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.170782 kubelet[2712]: W0516 16:37:34.170778 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.170832 kubelet[2712]: E0516 16:37:34.170789 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.170963 kubelet[2712]: E0516 16:37:34.170948 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.170963 kubelet[2712]: W0516 16:37:34.170959 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.171011 kubelet[2712]: E0516 16:37:34.170966 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.177271 kubelet[2712]: E0516 16:37:34.177249 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.177271 kubelet[2712]: W0516 16:37:34.177262 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.177271 kubelet[2712]: E0516 16:37:34.177272 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.177385 kubelet[2712]: I0516 16:37:34.177323 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a53a0ed-691f-460a-8f34-788759fa4d73-kubelet-dir\") pod \"csi-node-driver-cf5pq\" (UID: \"4a53a0ed-691f-460a-8f34-788759fa4d73\") " pod="calico-system/csi-node-driver-cf5pq" May 16 16:37:34.177560 kubelet[2712]: E0516 16:37:34.177540 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.177560 kubelet[2712]: W0516 16:37:34.177554 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.177609 kubelet[2712]: E0516 16:37:34.177572 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.177609 kubelet[2712]: I0516 16:37:34.177592 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4a53a0ed-691f-460a-8f34-788759fa4d73-registration-dir\") pod \"csi-node-driver-cf5pq\" (UID: \"4a53a0ed-691f-460a-8f34-788759fa4d73\") " pod="calico-system/csi-node-driver-cf5pq" May 16 16:37:34.177809 kubelet[2712]: E0516 16:37:34.177791 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.177809 kubelet[2712]: W0516 16:37:34.177805 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.177862 kubelet[2712]: E0516 16:37:34.177821 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.177950 kubelet[2712]: I0516 16:37:34.177924 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m4kq\" (UniqueName: \"kubernetes.io/projected/4a53a0ed-691f-460a-8f34-788759fa4d73-kube-api-access-8m4kq\") pod \"csi-node-driver-cf5pq\" (UID: \"4a53a0ed-691f-460a-8f34-788759fa4d73\") " pod="calico-system/csi-node-driver-cf5pq" May 16 16:37:34.178044 kubelet[2712]: E0516 16:37:34.178030 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.178065 kubelet[2712]: W0516 16:37:34.178043 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.178065 kubelet[2712]: E0516 16:37:34.178060 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.178258 kubelet[2712]: E0516 16:37:34.178240 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.178258 kubelet[2712]: W0516 16:37:34.178252 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.178328 kubelet[2712]: E0516 16:37:34.178267 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.178552 kubelet[2712]: E0516 16:37:34.178536 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.178552 kubelet[2712]: W0516 16:37:34.178549 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.178609 kubelet[2712]: E0516 16:37:34.178565 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.178732 kubelet[2712]: E0516 16:37:34.178718 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.178732 kubelet[2712]: W0516 16:37:34.178728 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.178786 kubelet[2712]: E0516 16:37:34.178744 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.178942 kubelet[2712]: E0516 16:37:34.178927 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.178942 kubelet[2712]: W0516 16:37:34.178938 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.178990 kubelet[2712]: E0516 16:37:34.178953 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.179140 kubelet[2712]: E0516 16:37:34.179125 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.179140 kubelet[2712]: W0516 16:37:34.179138 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.179203 kubelet[2712]: E0516 16:37:34.179174 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.179367 kubelet[2712]: E0516 16:37:34.179352 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.179367 kubelet[2712]: W0516 16:37:34.179363 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.179433 kubelet[2712]: E0516 16:37:34.179371 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.179433 kubelet[2712]: I0516 16:37:34.179395 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4a53a0ed-691f-460a-8f34-788759fa4d73-varrun\") pod \"csi-node-driver-cf5pq\" (UID: \"4a53a0ed-691f-460a-8f34-788759fa4d73\") " pod="calico-system/csi-node-driver-cf5pq" May 16 16:37:34.179616 kubelet[2712]: E0516 16:37:34.179599 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.179660 kubelet[2712]: W0516 16:37:34.179616 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.179660 kubelet[2712]: E0516 16:37:34.179633 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.179660 kubelet[2712]: I0516 16:37:34.179655 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4a53a0ed-691f-460a-8f34-788759fa4d73-socket-dir\") pod \"csi-node-driver-cf5pq\" (UID: \"4a53a0ed-691f-460a-8f34-788759fa4d73\") " pod="calico-system/csi-node-driver-cf5pq" May 16 16:37:34.179860 kubelet[2712]: E0516 16:37:34.179846 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.179888 kubelet[2712]: W0516 16:37:34.179859 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.179888 kubelet[2712]: E0516 16:37:34.179876 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.180049 kubelet[2712]: E0516 16:37:34.180037 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.180049 kubelet[2712]: W0516 16:37:34.180047 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.180103 kubelet[2712]: E0516 16:37:34.180060 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.180299 kubelet[2712]: E0516 16:37:34.180266 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.180327 kubelet[2712]: W0516 16:37:34.180304 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.180327 kubelet[2712]: E0516 16:37:34.180318 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.180512 kubelet[2712]: E0516 16:37:34.180499 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.180512 kubelet[2712]: W0516 16:37:34.180510 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.180620 kubelet[2712]: E0516 16:37:34.180520 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.195199 containerd[1606]: time="2025-05-16T16:37:34.195156524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-brmm8,Uid:43c99158-22f0-4c8b-b55f-94b88138ab68,Namespace:calico-system,Attempt:0,}" May 16 16:37:34.220213 containerd[1606]: time="2025-05-16T16:37:34.220148496Z" level=info msg="connecting to shim 24a3e5070a518eedb3b4da92173045b82ba7779f67fb6f7e494b2e184b87c28a" address="unix:///run/containerd/s/0230b55fedd642a7d6e0353c1db6469180f64c1da9b12f7d949070579d049312" namespace=k8s.io protocol=ttrpc version=3 May 16 16:37:34.248438 systemd[1]: Started cri-containerd-24a3e5070a518eedb3b4da92173045b82ba7779f67fb6f7e494b2e184b87c28a.scope - libcontainer container 24a3e5070a518eedb3b4da92173045b82ba7779f67fb6f7e494b2e184b87c28a. May 16 16:37:34.279305 containerd[1606]: time="2025-05-16T16:37:34.279240472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-brmm8,Uid:43c99158-22f0-4c8b-b55f-94b88138ab68,Namespace:calico-system,Attempt:0,} returns sandbox id \"24a3e5070a518eedb3b4da92173045b82ba7779f67fb6f7e494b2e184b87c28a\"" May 16 16:37:34.280405 kubelet[2712]: E0516 16:37:34.280339 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.280405 kubelet[2712]: W0516 16:37:34.280363 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.280405 kubelet[2712]: E0516 16:37:34.280388 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.280673 kubelet[2712]: E0516 16:37:34.280662 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.280704 kubelet[2712]: W0516 16:37:34.280675 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.280704 kubelet[2712]: E0516 16:37:34.280687 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.281300 kubelet[2712]: E0516 16:37:34.280898 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.281300 kubelet[2712]: W0516 16:37:34.280913 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.281300 kubelet[2712]: E0516 16:37:34.280925 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.281300 kubelet[2712]: E0516 16:37:34.281191 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.281300 kubelet[2712]: W0516 16:37:34.281214 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.281300 kubelet[2712]: E0516 16:37:34.281239 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.281470 kubelet[2712]: E0516 16:37:34.281448 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.281470 kubelet[2712]: W0516 16:37:34.281461 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.281578 kubelet[2712]: E0516 16:37:34.281519 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.281831 kubelet[2712]: E0516 16:37:34.281810 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.281875 kubelet[2712]: W0516 16:37:34.281823 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.281875 kubelet[2712]: E0516 16:37:34.281863 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.282215 kubelet[2712]: E0516 16:37:34.282158 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.282215 kubelet[2712]: W0516 16:37:34.282202 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.282319 kubelet[2712]: E0516 16:37:34.282249 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.282584 kubelet[2712]: E0516 16:37:34.282548 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.282584 kubelet[2712]: W0516 16:37:34.282564 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.282656 kubelet[2712]: E0516 16:37:34.282630 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.282818 kubelet[2712]: E0516 16:37:34.282799 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.282818 kubelet[2712]: W0516 16:37:34.282811 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.282977 kubelet[2712]: E0516 16:37:34.282867 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.283023 kubelet[2712]: E0516 16:37:34.283015 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.283060 kubelet[2712]: W0516 16:37:34.283025 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.283116 kubelet[2712]: E0516 16:37:34.283086 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.283226 kubelet[2712]: E0516 16:37:34.283208 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.283226 kubelet[2712]: W0516 16:37:34.283221 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.283305 kubelet[2712]: E0516 16:37:34.283235 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.283500 kubelet[2712]: E0516 16:37:34.283475 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.283500 kubelet[2712]: W0516 16:37:34.283494 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.283578 kubelet[2712]: E0516 16:37:34.283536 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.283982 kubelet[2712]: E0516 16:37:34.283959 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.283982 kubelet[2712]: W0516 16:37:34.283977 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.284206 kubelet[2712]: E0516 16:37:34.284063 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.284508 kubelet[2712]: E0516 16:37:34.284488 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.284558 kubelet[2712]: W0516 16:37:34.284538 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.284590 kubelet[2712]: E0516 16:37:34.284559 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.284902 kubelet[2712]: E0516 16:37:34.284885 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.284902 kubelet[2712]: W0516 16:37:34.284898 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.284977 kubelet[2712]: E0516 16:37:34.284952 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.285100 kubelet[2712]: E0516 16:37:34.285079 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.285100 kubelet[2712]: W0516 16:37:34.285094 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.285174 kubelet[2712]: E0516 16:37:34.285124 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.285416 kubelet[2712]: E0516 16:37:34.285391 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.285416 kubelet[2712]: W0516 16:37:34.285408 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.285565 kubelet[2712]: E0516 16:37:34.285451 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.285660 kubelet[2712]: E0516 16:37:34.285645 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.285660 kubelet[2712]: W0516 16:37:34.285658 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.285735 kubelet[2712]: E0516 16:37:34.285692 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.285977 kubelet[2712]: E0516 16:37:34.285838 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.285977 kubelet[2712]: W0516 16:37:34.285889 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.285977 kubelet[2712]: E0516 16:37:34.285910 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.286266 kubelet[2712]: E0516 16:37:34.286093 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.286266 kubelet[2712]: W0516 16:37:34.286103 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.286266 kubelet[2712]: E0516 16:37:34.286121 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.286696 kubelet[2712]: E0516 16:37:34.286404 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.286696 kubelet[2712]: W0516 16:37:34.286414 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.286696 kubelet[2712]: E0516 16:37:34.286458 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.286696 kubelet[2712]: E0516 16:37:34.286602 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.286696 kubelet[2712]: W0516 16:37:34.286614 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.286696 kubelet[2712]: E0516 16:37:34.286631 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.286896 kubelet[2712]: E0516 16:37:34.286831 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.286896 kubelet[2712]: W0516 16:37:34.286841 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.286896 kubelet[2712]: E0516 16:37:34.286862 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.287131 kubelet[2712]: E0516 16:37:34.287115 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.287131 kubelet[2712]: W0516 16:37:34.287127 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.287202 kubelet[2712]: E0516 16:37:34.287145 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:34.287431 kubelet[2712]: E0516 16:37:34.287410 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.287431 kubelet[2712]: W0516 16:37:34.287426 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.287535 kubelet[2712]: E0516 16:37:34.287438 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:34.294981 kubelet[2712]: E0516 16:37:34.294915 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:34.294981 kubelet[2712]: W0516 16:37:34.294933 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:34.294981 kubelet[2712]: E0516 16:37:34.294945 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:35.371337 kubelet[2712]: E0516 16:37:35.371222 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cf5pq" podUID="4a53a0ed-691f-460a-8f34-788759fa4d73" May 16 16:37:35.677463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2244843179.mount: Deactivated successfully. May 16 16:37:35.865363 update_engine[1588]: I20250516 16:37:35.865249 1588 update_attempter.cc:509] Updating boot flags... 
May 16 16:37:37.370883 kubelet[2712]: E0516 16:37:37.370827 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cf5pq" podUID="4a53a0ed-691f-460a-8f34-788759fa4d73" May 16 16:37:37.591756 containerd[1606]: time="2025-05-16T16:37:37.591699893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:37.592401 containerd[1606]: time="2025-05-16T16:37:37.592376014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 16 16:37:37.593606 containerd[1606]: time="2025-05-16T16:37:37.593554689Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:37.595450 containerd[1606]: time="2025-05-16T16:37:37.595424072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:37.595935 containerd[1606]: time="2025-05-16T16:37:37.595896137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 3.520077767s" May 16 16:37:37.595968 containerd[1606]: time="2025-05-16T16:37:37.595934219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference 
\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 16 16:37:37.596893 containerd[1606]: time="2025-05-16T16:37:37.596859873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 16 16:37:37.603150 containerd[1606]: time="2025-05-16T16:37:37.603112997Z" level=info msg="CreateContainer within sandbox \"5f891435a6403e03e3b86f7ed748fd1687507afbe91945b4642b0acab2eab0ba\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 16 16:37:37.612671 containerd[1606]: time="2025-05-16T16:37:37.612633144Z" level=info msg="Container 266f53ba033822c47a33fd15409310164bfb9296b2914ed7a160cb12d682f615: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:37.620000 containerd[1606]: time="2025-05-16T16:37:37.619966715Z" level=info msg="CreateContainer within sandbox \"5f891435a6403e03e3b86f7ed748fd1687507afbe91945b4642b0acab2eab0ba\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"266f53ba033822c47a33fd15409310164bfb9296b2914ed7a160cb12d682f615\"" May 16 16:37:37.620415 containerd[1606]: time="2025-05-16T16:37:37.620396499Z" level=info msg="StartContainer for \"266f53ba033822c47a33fd15409310164bfb9296b2914ed7a160cb12d682f615\"" May 16 16:37:37.621599 containerd[1606]: time="2025-05-16T16:37:37.621479723Z" level=info msg="connecting to shim 266f53ba033822c47a33fd15409310164bfb9296b2914ed7a160cb12d682f615" address="unix:///run/containerd/s/b719c471e0750a2c6be53b776a69d119dc24340b86820c023896ecd9bb7d44a3" protocol=ttrpc version=3 May 16 16:37:37.643428 systemd[1]: Started cri-containerd-266f53ba033822c47a33fd15409310164bfb9296b2914ed7a160cb12d682f615.scope - libcontainer container 266f53ba033822c47a33fd15409310164bfb9296b2914ed7a160cb12d682f615. 
May 16 16:37:37.699453 containerd[1606]: time="2025-05-16T16:37:37.699405249Z" level=info msg="StartContainer for \"266f53ba033822c47a33fd15409310164bfb9296b2914ed7a160cb12d682f615\" returns successfully" May 16 16:37:38.425042 kubelet[2712]: E0516 16:37:38.425005 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:38.501155 kubelet[2712]: E0516 16:37:38.501108 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.501155 kubelet[2712]: W0516 16:37:38.501130 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.501155 kubelet[2712]: E0516 16:37:38.501150 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:38.501400 kubelet[2712]: E0516 16:37:38.501389 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.501400 kubelet[2712]: W0516 16:37:38.501398 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.501450 kubelet[2712]: E0516 16:37:38.501406 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:38.501583 kubelet[2712]: E0516 16:37:38.501557 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.501583 kubelet[2712]: W0516 16:37:38.501569 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.501583 kubelet[2712]: E0516 16:37:38.501577 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:38.501905 kubelet[2712]: E0516 16:37:38.501880 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.501905 kubelet[2712]: W0516 16:37:38.501895 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.501905 kubelet[2712]: E0516 16:37:38.501906 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:38.502107 kubelet[2712]: E0516 16:37:38.502083 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.502107 kubelet[2712]: W0516 16:37:38.502095 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.502107 kubelet[2712]: E0516 16:37:38.502103 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:38.502275 kubelet[2712]: E0516 16:37:38.502253 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.502275 kubelet[2712]: W0516 16:37:38.502264 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.502338 kubelet[2712]: E0516 16:37:38.502273 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:38.502474 kubelet[2712]: E0516 16:37:38.502447 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.502474 kubelet[2712]: W0516 16:37:38.502458 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.502474 kubelet[2712]: E0516 16:37:38.502472 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:38.502644 kubelet[2712]: E0516 16:37:38.502627 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.502644 kubelet[2712]: W0516 16:37:38.502637 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.502644 kubelet[2712]: E0516 16:37:38.502645 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:38.502816 kubelet[2712]: E0516 16:37:38.502799 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.502816 kubelet[2712]: W0516 16:37:38.502809 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.502816 kubelet[2712]: E0516 16:37:38.502816 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:38.502981 kubelet[2712]: E0516 16:37:38.502965 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.502981 kubelet[2712]: W0516 16:37:38.502974 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.502981 kubelet[2712]: E0516 16:37:38.502981 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:38.503152 kubelet[2712]: E0516 16:37:38.503136 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.503152 kubelet[2712]: W0516 16:37:38.503146 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.503201 kubelet[2712]: E0516 16:37:38.503153 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:38.503341 kubelet[2712]: E0516 16:37:38.503324 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.503341 kubelet[2712]: W0516 16:37:38.503335 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.503396 kubelet[2712]: E0516 16:37:38.503342 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:38.503527 kubelet[2712]: E0516 16:37:38.503511 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.503527 kubelet[2712]: W0516 16:37:38.503520 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.503527 kubelet[2712]: E0516 16:37:38.503528 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:38.503699 kubelet[2712]: E0516 16:37:38.503683 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.503699 kubelet[2712]: W0516 16:37:38.503693 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.503699 kubelet[2712]: E0516 16:37:38.503700 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:38.503865 kubelet[2712]: E0516 16:37:38.503849 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.503865 kubelet[2712]: W0516 16:37:38.503858 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.503865 kubelet[2712]: E0516 16:37:38.503866 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:38.517447 kubelet[2712]: E0516 16:37:38.517412 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.517447 kubelet[2712]: W0516 16:37:38.517441 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.517529 kubelet[2712]: E0516 16:37:38.517477 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:38.517734 kubelet[2712]: E0516 16:37:38.517717 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.517734 kubelet[2712]: W0516 16:37:38.517731 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.517801 kubelet[2712]: E0516 16:37:38.517750 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:38.517989 kubelet[2712]: E0516 16:37:38.517973 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.517989 kubelet[2712]: W0516 16:37:38.517985 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.518061 kubelet[2712]: E0516 16:37:38.518002 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:37:38.518311 kubelet[2712]: E0516 16:37:38.518290 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.518311 kubelet[2712]: W0516 16:37:38.518305 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.518370 kubelet[2712]: E0516 16:37:38.518320 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:37:38.518511 kubelet[2712]: E0516 16:37:38.518495 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:37:38.518511 kubelet[2712]: W0516 16:37:38.518505 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:37:38.518571 kubelet[2712]: E0516 16:37:38.518517 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 16 16:37:38.518666 kubelet[2712]: E0516 16:37:38.518648 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.518666 kubelet[2712]: W0516 16:37:38.518659 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.518719 kubelet[2712]: E0516 16:37:38.518670 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.518865 kubelet[2712]: E0516 16:37:38.518847 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.518865 kubelet[2712]: W0516 16:37:38.518857 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.518918 kubelet[2712]: E0516 16:37:38.518889 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.519006 kubelet[2712]: E0516 16:37:38.518989 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.519006 kubelet[2712]: W0516 16:37:38.518999 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.519063 kubelet[2712]: E0516 16:37:38.519026 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.519162 kubelet[2712]: E0516 16:37:38.519145 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.519162 kubelet[2712]: W0516 16:37:38.519155 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.519222 kubelet[2712]: E0516 16:37:38.519165 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.519393 kubelet[2712]: E0516 16:37:38.519375 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.519393 kubelet[2712]: W0516 16:37:38.519390 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.519450 kubelet[2712]: E0516 16:37:38.519410 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.519650 kubelet[2712]: E0516 16:37:38.519626 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.519650 kubelet[2712]: W0516 16:37:38.519641 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.519699 kubelet[2712]: E0516 16:37:38.519658 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.519853 kubelet[2712]: E0516 16:37:38.519837 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.519853 kubelet[2712]: W0516 16:37:38.519849 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.519896 kubelet[2712]: E0516 16:37:38.519863 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.520038 kubelet[2712]: E0516 16:37:38.520024 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.520038 kubelet[2712]: W0516 16:37:38.520034 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.520087 kubelet[2712]: E0516 16:37:38.520045 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 16 16:37:38.520238 kubelet[2712]: E0516 16:37:38.520224 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.520238 kubelet[2712]: W0516 16:37:38.520234 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.520303 kubelet[2712]: E0516 16:37:38.520246 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.520431 kubelet[2712]: E0516 16:37:38.520416 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.520431 kubelet[2712]: W0516 16:37:38.520431 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.520491 kubelet[2712]: E0516 16:37:38.520439 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.520625 kubelet[2712]: E0516 16:37:38.520611 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.520625 kubelet[2712]: W0516 16:37:38.520621 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.520669 kubelet[2712]: E0516 16:37:38.520633 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.520883 kubelet[2712]: E0516 16:37:38.520859 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.520883 kubelet[2712]: W0516 16:37:38.520872 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.520931 kubelet[2712]: E0516 16:37:38.520884 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:38.521043 kubelet[2712]: E0516 16:37:38.521029 2712 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 16:37:38.521043 kubelet[2712]: W0516 16:37:38.521041 2712 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 16:37:38.521093 kubelet[2712]: E0516 16:37:38.521049 2712 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 16:37:39.209774 containerd[1606]: time="2025-05-16T16:37:39.209713360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:39.210530 containerd[1606]: time="2025-05-16T16:37:39.210488347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619"
May 16 16:37:39.211636 containerd[1606]: time="2025-05-16T16:37:39.211605222Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:39.213383 containerd[1606]: time="2025-05-16T16:37:39.213355386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:39.213900 containerd[1606]: time="2025-05-16T16:37:39.213857686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.616972143s"
May 16 16:37:39.213937 containerd[1606]: time="2025-05-16T16:37:39.213900958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\""
May 16 16:37:39.215926 containerd[1606]: time="2025-05-16T16:37:39.215892388Z" level=info msg="CreateContainer within sandbox \"24a3e5070a518eedb3b4da92173045b82ba7779f67fb6f7e494b2e184b87c28a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 16 16:37:39.224641 containerd[1606]: time="2025-05-16T16:37:39.224596857Z" level=info msg="Container 04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d: CDI devices from CRI Config.CDIDevices: []"
May 16 16:37:39.232164 containerd[1606]: time="2025-05-16T16:37:39.232122865Z" level=info msg="CreateContainer within sandbox \"24a3e5070a518eedb3b4da92173045b82ba7779f67fb6f7e494b2e184b87c28a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d\""
May 16 16:37:39.232681 containerd[1606]: time="2025-05-16T16:37:39.232649322Z" level=info msg="StartContainer for \"04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d\""
May 16 16:37:39.233927 containerd[1606]: time="2025-05-16T16:37:39.233884210Z" level=info msg="connecting to shim 04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d" address="unix:///run/containerd/s/0230b55fedd642a7d6e0353c1db6469180f64c1da9b12f7d949070579d049312" protocol=ttrpc version=3
May 16 16:37:39.260400 systemd[1]: Started cri-containerd-04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d.scope - libcontainer container 04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d.
May 16 16:37:39.302325 containerd[1606]: time="2025-05-16T16:37:39.302256463Z" level=info msg="StartContainer for \"04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d\" returns successfully"
May 16 16:37:39.312358 systemd[1]: cri-containerd-04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d.scope: Deactivated successfully.
May 16 16:37:39.314068 containerd[1606]: time="2025-05-16T16:37:39.314031896Z" level=info msg="received exit event container_id:\"04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d\" id:\"04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d\" pid:3414 exited_at:{seconds:1747413459 nanos:313619144}"
May 16 16:37:39.314220 containerd[1606]: time="2025-05-16T16:37:39.314183442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d\" id:\"04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d\" pid:3414 exited_at:{seconds:1747413459 nanos:313619144}"
May 16 16:37:39.336938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04e7a43abecdffaf6dc9be25d25522357da61cc9f6b5210bb2c9e211bac4539d-rootfs.mount: Deactivated successfully.
May 16 16:37:39.371052 kubelet[2712]: E0516 16:37:39.371005 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cf5pq" podUID="4a53a0ed-691f-460a-8f34-788759fa4d73"
May 16 16:37:39.428800 kubelet[2712]: I0516 16:37:39.428757 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 16 16:37:39.429332 kubelet[2712]: E0516 16:37:39.429149 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:39.547430 kubelet[2712]: I0516 16:37:39.547235 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77898f4f66-pjq27" podStartSLOduration=3.025745039 podStartE2EDuration="6.547199528s" podCreationTimestamp="2025-05-16 16:37:33 +0000 UTC" firstStartedPulling="2025-05-16 16:37:34.075260891 +0000 UTC m=+16.794153702" lastFinishedPulling="2025-05-16 16:37:37.59671538 +0000 UTC m=+20.315608191" observedRunningTime="2025-05-16 16:37:38.43454229 +0000 UTC m=+21.153435101" watchObservedRunningTime="2025-05-16 16:37:39.547199528 +0000 UTC m=+22.266092339"
May 16 16:37:40.433243 containerd[1606]: time="2025-05-16T16:37:40.433162237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\""
May 16 16:37:41.371348 kubelet[2712]: E0516 16:37:41.371262 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cf5pq" podUID="4a53a0ed-691f-460a-8f34-788759fa4d73"
May 16 16:37:43.371646 kubelet[2712]: E0516 16:37:43.371578 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cf5pq" podUID="4a53a0ed-691f-460a-8f34-788759fa4d73"
May 16 16:37:44.287863 containerd[1606]: time="2025-05-16T16:37:44.287805550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:44.289153 containerd[1606]: time="2025-05-16T16:37:44.289112498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568"
May 16 16:37:44.290288 containerd[1606]: time="2025-05-16T16:37:44.290258532Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:44.292255 containerd[1606]: time="2025-05-16T16:37:44.292206451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:37:44.292717 containerd[1606]: time="2025-05-16T16:37:44.292683371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.859472451s"
May 16 16:37:44.292717 containerd[1606]: time="2025-05-16T16:37:44.292709470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\""
May 16 16:37:44.294686 containerd[1606]: time="2025-05-16T16:37:44.294655686Z" 
level=info msg="CreateContainer within sandbox \"24a3e5070a518eedb3b4da92173045b82ba7779f67fb6f7e494b2e184b87c28a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 16 16:37:44.305066 containerd[1606]: time="2025-05-16T16:37:44.305008305Z" level=info msg="Container 7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60: CDI devices from CRI Config.CDIDevices: []"
May 16 16:37:44.317691 containerd[1606]: time="2025-05-16T16:37:44.317649347Z" level=info msg="CreateContainer within sandbox \"24a3e5070a518eedb3b4da92173045b82ba7779f67fb6f7e494b2e184b87c28a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60\""
May 16 16:37:44.318269 containerd[1606]: time="2025-05-16T16:37:44.318223721Z" level=info msg="StartContainer for \"7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60\""
May 16 16:37:44.319824 containerd[1606]: time="2025-05-16T16:37:44.319797763Z" level=info msg="connecting to shim 7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60" address="unix:///run/containerd/s/0230b55fedd642a7d6e0353c1db6469180f64c1da9b12f7d949070579d049312" protocol=ttrpc version=3
May 16 16:37:44.347439 systemd[1]: Started cri-containerd-7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60.scope - libcontainer container 7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60.
May 16 16:37:44.389676 containerd[1606]: time="2025-05-16T16:37:44.389626366Z" level=info msg="StartContainer for \"7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60\" returns successfully"
May 16 16:37:45.290421 systemd[1]: cri-containerd-7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60.scope: Deactivated successfully.
May 16 16:37:45.290826 systemd[1]: cri-containerd-7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60.scope: Consumed 533ms CPU time, 178.7M memory peak, 3.2M read from disk, 170.9M written to disk.
May 16 16:37:45.291338 containerd[1606]: time="2025-05-16T16:37:45.291302389Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60\" id:\"7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60\" pid:3473 exited_at:{seconds:1747413465 nanos:290968468}"
May 16 16:37:45.291675 containerd[1606]: time="2025-05-16T16:37:45.291346482Z" level=info msg="received exit event container_id:\"7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60\" id:\"7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60\" pid:3473 exited_at:{seconds:1747413465 nanos:290968468}"
May 16 16:37:45.296803 kubelet[2712]: I0516 16:37:45.296757 2712 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 16 16:37:45.318798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7712cf8aa8f1e01ecb499aad1f768e6e5fe7f18087732b3b606d331c524cdd60-rootfs.mount: Deactivated successfully.
May 16 16:37:45.335336 systemd[1]: Created slice kubepods-besteffort-pod6af1b68e_dfdc_4de3_8bba_b9af7ed32d69.slice - libcontainer container kubepods-besteffort-pod6af1b68e_dfdc_4de3_8bba_b9af7ed32d69.slice.
May 16 16:37:45.350488 systemd[1]: Created slice kubepods-besteffort-podb95434f7_5cb4_461e_b037_cf900bb483de.slice - libcontainer container kubepods-besteffort-podb95434f7_5cb4_461e_b037_cf900bb483de.slice.
May 16 16:37:45.360255 systemd[1]: Created slice kubepods-burstable-podd49f2241_0666_494a_ba02_67e70ef07b4a.slice - libcontainer container kubepods-burstable-podd49f2241_0666_494a_ba02_67e70ef07b4a.slice.
May 16 16:37:45.368458 systemd[1]: Created slice kubepods-besteffort-pod4935dd62_d692_4f3f_b085_7e695611704c.slice - libcontainer container kubepods-besteffort-pod4935dd62_d692_4f3f_b085_7e695611704c.slice.
May 16 16:37:45.377575 systemd[1]: Created slice kubepods-burstable-poda89ec786_83ae_4d2c_a9ed_aae32acf5fad.slice - libcontainer container kubepods-burstable-poda89ec786_83ae_4d2c_a9ed_aae32acf5fad.slice.
May 16 16:37:45.385884 systemd[1]: Created slice kubepods-besteffort-pod9a58fb16_fb0c_422a_a589_02591230be6e.slice - libcontainer container kubepods-besteffort-pod9a58fb16_fb0c_422a_a589_02591230be6e.slice.
May 16 16:37:45.393926 systemd[1]: Created slice kubepods-besteffort-pod92666134_ee96_4ab1_a528_86be174720fb.slice - libcontainer container kubepods-besteffort-pod92666134_ee96_4ab1_a528_86be174720fb.slice.
May 16 16:37:45.401761 systemd[1]: Created slice kubepods-besteffort-podbf47523c_1e81_4bbe_a80b_55b0036c2140.slice - libcontainer container kubepods-besteffort-podbf47523c_1e81_4bbe_a80b_55b0036c2140.slice.
May 16 16:37:45.407352 systemd[1]: Created slice kubepods-besteffort-pod4a53a0ed_691f_460a_8f34_788759fa4d73.slice - libcontainer container kubepods-besteffort-pod4a53a0ed_691f_460a_8f34_788759fa4d73.slice.
May 16 16:37:45.408057 kubelet[2712]: I0516 16:37:45.407864 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d74c8\" (UniqueName: \"kubernetes.io/projected/92666134-ee96-4ab1-a528-86be174720fb-kube-api-access-d74c8\") pod \"calico-apiserver-84d465b4cc-jdqkb\" (UID: \"92666134-ee96-4ab1-a528-86be174720fb\") " pod="calico-apiserver/calico-apiserver-84d465b4cc-jdqkb"
May 16 16:37:45.408057 kubelet[2712]: I0516 16:37:45.407893 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b29h4\" (UniqueName: \"kubernetes.io/projected/d49f2241-0666-494a-ba02-67e70ef07b4a-kube-api-access-b29h4\") pod \"coredns-7c65d6cfc9-mvzgl\" (UID: \"d49f2241-0666-494a-ba02-67e70ef07b4a\") " pod="kube-system/coredns-7c65d6cfc9-mvzgl"
May 16 16:37:45.408057 kubelet[2712]: I0516 16:37:45.407907 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bf47523c-1e81-4bbe-a80b-55b0036c2140-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-bv4q6\" (UID: \"bf47523c-1e81-4bbe-a80b-55b0036c2140\") " pod="calico-system/goldmane-8f77d7b6c-bv4q6"
May 16 16:37:45.408057 kubelet[2712]: I0516 16:37:45.407920 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6af1b68e-dfdc-4de3-8bba-b9af7ed32d69-tigera-ca-bundle\") pod \"calico-kube-controllers-5487f9d78-q6gxw\" (UID: \"6af1b68e-dfdc-4de3-8bba-b9af7ed32d69\") " pod="calico-system/calico-kube-controllers-5487f9d78-q6gxw"
May 16 16:37:45.408057 kubelet[2712]: I0516 16:37:45.407933 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxlw5\" (UniqueName: \"kubernetes.io/projected/6af1b68e-dfdc-4de3-8bba-b9af7ed32d69-kube-api-access-bxlw5\") pod \"calico-kube-controllers-5487f9d78-q6gxw\" (UID: \"6af1b68e-dfdc-4de3-8bba-b9af7ed32d69\") " pod="calico-system/calico-kube-controllers-5487f9d78-q6gxw"
May 16 16:37:45.408363 kubelet[2712]: I0516 16:37:45.407973 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d49f2241-0666-494a-ba02-67e70ef07b4a-config-volume\") pod \"coredns-7c65d6cfc9-mvzgl\" (UID: \"d49f2241-0666-494a-ba02-67e70ef07b4a\") " pod="kube-system/coredns-7c65d6cfc9-mvzgl"
May 16 16:37:45.408363 kubelet[2712]: I0516 16:37:45.408003 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29f2s\" (UniqueName: \"kubernetes.io/projected/4935dd62-d692-4f3f-b085-7e695611704c-kube-api-access-29f2s\") pod \"calico-apiserver-5f85b45fd-cdvx9\" (UID: \"4935dd62-d692-4f3f-b085-7e695611704c\") " pod="calico-apiserver/calico-apiserver-5f85b45fd-cdvx9"
May 16 16:37:45.408363 kubelet[2712]: I0516 16:37:45.408033 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf47523c-1e81-4bbe-a80b-55b0036c2140-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-bv4q6\" (UID: \"bf47523c-1e81-4bbe-a80b-55b0036c2140\") " pod="calico-system/goldmane-8f77d7b6c-bv4q6"
May 16 16:37:45.408363 kubelet[2712]: I0516 16:37:45.408046 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dpnt\" (UniqueName: \"kubernetes.io/projected/9a58fb16-fb0c-422a-a589-02591230be6e-kube-api-access-8dpnt\") pod \"calico-apiserver-5f85b45fd-5gtdb\" (UID: \"9a58fb16-fb0c-422a-a589-02591230be6e\") " pod="calico-apiserver/calico-apiserver-5f85b45fd-5gtdb"
May 16 16:37:45.408363 kubelet[2712]: I0516 16:37:45.408060 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b95434f7-5cb4-461e-b037-cf900bb483de-whisker-ca-bundle\") pod \"whisker-64587f7c68-brj6s\" (UID: \"b95434f7-5cb4-461e-b037-cf900bb483de\") " pod="calico-system/whisker-64587f7c68-brj6s"
May 16 16:37:45.408528 kubelet[2712]: I0516 16:37:45.408074 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lndjw\" (UniqueName: \"kubernetes.io/projected/b95434f7-5cb4-461e-b037-cf900bb483de-kube-api-access-lndjw\") pod \"whisker-64587f7c68-brj6s\" (UID: \"b95434f7-5cb4-461e-b037-cf900bb483de\") " pod="calico-system/whisker-64587f7c68-brj6s"
May 16 16:37:45.408528 kubelet[2712]: I0516 16:37:45.408088 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4935dd62-d692-4f3f-b085-7e695611704c-calico-apiserver-certs\") pod \"calico-apiserver-5f85b45fd-cdvx9\" (UID: \"4935dd62-d692-4f3f-b085-7e695611704c\") " pod="calico-apiserver/calico-apiserver-5f85b45fd-cdvx9"
May 16 16:37:45.408528 kubelet[2712]: I0516 16:37:45.408103 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a58fb16-fb0c-422a-a589-02591230be6e-calico-apiserver-certs\") pod \"calico-apiserver-5f85b45fd-5gtdb\" (UID: \"9a58fb16-fb0c-422a-a589-02591230be6e\") " pod="calico-apiserver/calico-apiserver-5f85b45fd-5gtdb"
May 16 16:37:45.408528 kubelet[2712]: I0516 16:37:45.408116 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b95434f7-5cb4-461e-b037-cf900bb483de-whisker-backend-key-pair\") pod \"whisker-64587f7c68-brj6s\" (UID: \"b95434f7-5cb4-461e-b037-cf900bb483de\") " pod="calico-system/whisker-64587f7c68-brj6s"
May 16 16:37:45.408528 kubelet[2712]: I0516 16:37:45.408132 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5s6\" (UniqueName: \"kubernetes.io/projected/a89ec786-83ae-4d2c-a9ed-aae32acf5fad-kube-api-access-hr5s6\") pod \"coredns-7c65d6cfc9-tj5l7\" (UID: \"a89ec786-83ae-4d2c-a9ed-aae32acf5fad\") " pod="kube-system/coredns-7c65d6cfc9-tj5l7"
May 16 16:37:45.408648 kubelet[2712]: I0516 16:37:45.408146 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hrrj\" (UniqueName: \"kubernetes.io/projected/bf47523c-1e81-4bbe-a80b-55b0036c2140-kube-api-access-7hrrj\") pod \"goldmane-8f77d7b6c-bv4q6\" (UID: \"bf47523c-1e81-4bbe-a80b-55b0036c2140\") " pod="calico-system/goldmane-8f77d7b6c-bv4q6"
May 16 16:37:45.408648 kubelet[2712]: I0516 16:37:45.408160 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a89ec786-83ae-4d2c-a9ed-aae32acf5fad-config-volume\") pod \"coredns-7c65d6cfc9-tj5l7\" (UID: \"a89ec786-83ae-4d2c-a9ed-aae32acf5fad\") " pod="kube-system/coredns-7c65d6cfc9-tj5l7"
May 16 16:37:45.408648 kubelet[2712]: I0516 16:37:45.408172 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/92666134-ee96-4ab1-a528-86be174720fb-calico-apiserver-certs\") pod \"calico-apiserver-84d465b4cc-jdqkb\" (UID: \"92666134-ee96-4ab1-a528-86be174720fb\") " pod="calico-apiserver/calico-apiserver-84d465b4cc-jdqkb"
May 16 16:37:45.408648 kubelet[2712]: I0516 16:37:45.408188 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf47523c-1e81-4bbe-a80b-55b0036c2140-config\") pod \"goldmane-8f77d7b6c-bv4q6\" (UID: \"bf47523c-1e81-4bbe-a80b-55b0036c2140\") " 
pod="calico-system/goldmane-8f77d7b6c-bv4q6"
May 16 16:37:45.409800 containerd[1606]: time="2025-05-16T16:37:45.409761260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cf5pq,Uid:4a53a0ed-691f-460a-8f34-788759fa4d73,Namespace:calico-system,Attempt:0,}"
May 16 16:37:45.645492 containerd[1606]: time="2025-05-16T16:37:45.645441395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5487f9d78-q6gxw,Uid:6af1b68e-dfdc-4de3-8bba-b9af7ed32d69,Namespace:calico-system,Attempt:0,}"
May 16 16:37:45.656193 containerd[1606]: time="2025-05-16T16:37:45.656131914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64587f7c68-brj6s,Uid:b95434f7-5cb4-461e-b037-cf900bb483de,Namespace:calico-system,Attempt:0,}"
May 16 16:37:45.664898 kubelet[2712]: E0516 16:37:45.664864 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:45.665631 containerd[1606]: time="2025-05-16T16:37:45.665549552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mvzgl,Uid:d49f2241-0666-494a-ba02-67e70ef07b4a,Namespace:kube-system,Attempt:0,}"
May 16 16:37:45.674538 containerd[1606]: time="2025-05-16T16:37:45.674484808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f85b45fd-cdvx9,Uid:4935dd62-d692-4f3f-b085-7e695611704c,Namespace:calico-apiserver,Attempt:0,}"
May 16 16:37:45.682651 kubelet[2712]: E0516 16:37:45.682630 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:37:45.682996 containerd[1606]: time="2025-05-16T16:37:45.682948864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tj5l7,Uid:a89ec786-83ae-4d2c-a9ed-aae32acf5fad,Namespace:kube-system,Attempt:0,}"
May 16 16:37:45.689505 containerd[1606]: time="2025-05-16T16:37:45.689453092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f85b45fd-5gtdb,Uid:9a58fb16-fb0c-422a-a589-02591230be6e,Namespace:calico-apiserver,Attempt:0,}"
May 16 16:37:45.698074 containerd[1606]: time="2025-05-16T16:37:45.698031223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d465b4cc-jdqkb,Uid:92666134-ee96-4ab1-a528-86be174720fb,Namespace:calico-apiserver,Attempt:0,}"
May 16 16:37:45.705562 containerd[1606]: time="2025-05-16T16:37:45.705517114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-bv4q6,Uid:bf47523c-1e81-4bbe-a80b-55b0036c2140,Namespace:calico-system,Attempt:0,}"
May 16 16:37:46.090301 containerd[1606]: time="2025-05-16T16:37:46.090234954Z" level=error msg="Failed to destroy network for sandbox \"f4e30256f15b989c5b272980a157add9c7ab9a9f7e38407918405f074603926f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.091503 containerd[1606]: time="2025-05-16T16:37:46.091451690Z" level=error msg="Failed to destroy network for sandbox \"849cf63c8aaff6847f5e4337eae2a56c4ff0ad9442805b09001a6d461b992560\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.093406 containerd[1606]: time="2025-05-16T16:37:46.093381591Z" level=error msg="Failed to destroy network for sandbox \"a6f61cd39d4fd6729e8e03d922f36bfea5a579fc6cbceea10ecd695b0ac242c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.094894 containerd[1606]: time="2025-05-16T16:37:46.094782745Z" 
level=error msg="Failed to destroy network for sandbox \"e2985e094364d2bc3507c66b00d5406a413ad1a6c44d4166a05057bf80b38d1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.096497 containerd[1606]: time="2025-05-16T16:37:46.096455631Z" level=error msg="Failed to destroy network for sandbox \"75a7a71384172bfc39328250a49e4c70a31e6e888eda9a79419a2582cb992bd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.121133 containerd[1606]: time="2025-05-16T16:37:46.121102316Z" level=error msg="Failed to destroy network for sandbox \"def6f4e6f2b143d4bedd6a9cc04061d64a1bbc0e1681f66f2ab7c1311653dc35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.140439 containerd[1606]: time="2025-05-16T16:37:46.140337260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5487f9d78-q6gxw,Uid:6af1b68e-dfdc-4de3-8bba-b9af7ed32d69,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f61cd39d4fd6729e8e03d922f36bfea5a579fc6cbceea10ecd695b0ac242c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.141200 containerd[1606]: time="2025-05-16T16:37:46.141150053Z" level=error msg="Failed to destroy network for sandbox \"029726a1fb339f3c05c1c94a3c709417e70caae0b13b3ade8b15227eeef6216a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.141503 containerd[1606]: time="2025-05-16T16:37:46.141465578Z" level=error msg="Failed to destroy network for sandbox \"e86d1fee5e4fa2bdfe85f00ceebfa80124a752489e2b8f5018790c1d2bfe9ec5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.142159 containerd[1606]: time="2025-05-16T16:37:46.141231427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64587f7c68-brj6s,Uid:b95434f7-5cb4-461e-b037-cf900bb483de,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"def6f4e6f2b143d4bedd6a9cc04061d64a1bbc0e1681f66f2ab7c1311653dc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.142294 containerd[1606]: time="2025-05-16T16:37:46.141340823Z" level=error msg="Failed to destroy network for sandbox \"28ce3f9da70d807236c0496a401248325da5662f243d4ee290483a7da281da06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.142460 containerd[1606]: time="2025-05-16T16:37:46.141346103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-bv4q6,Uid:bf47523c-1e81-4bbe-a80b-55b0036c2140,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a7a71384172bfc39328250a49e4c70a31e6e888eda9a79419a2582cb992bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.142512 containerd[1606]: time="2025-05-16T16:37:46.141355180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d465b4cc-jdqkb,Uid:92666134-ee96-4ab1-a528-86be174720fb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2985e094364d2bc3507c66b00d5406a413ad1a6c44d4166a05057bf80b38d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.142512 containerd[1606]: time="2025-05-16T16:37:46.141367082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f85b45fd-5gtdb,Uid:9a58fb16-fb0c-422a-a589-02591230be6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"849cf63c8aaff6847f5e4337eae2a56c4ff0ad9442805b09001a6d461b992560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.142608 containerd[1606]: time="2025-05-16T16:37:46.142571596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cf5pq,Uid:4a53a0ed-691f-460a-8f34-788759fa4d73,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"029726a1fb339f3c05c1c94a3c709417e70caae0b13b3ade8b15227eeef6216a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 16 16:37:46.143001 containerd[1606]: time="2025-05-16T16:37:46.142947735Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5f85b45fd-cdvx9,Uid:4935dd62-d692-4f3f-b085-7e695611704c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e30256f15b989c5b272980a157add9c7ab9a9f7e38407918405f074603926f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.143866 containerd[1606]: time="2025-05-16T16:37:46.143823687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tj5l7,Uid:a89ec786-83ae-4d2c-a9ed-aae32acf5fad,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e86d1fee5e4fa2bdfe85f00ceebfa80124a752489e2b8f5018790c1d2bfe9ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.144744 containerd[1606]: time="2025-05-16T16:37:46.144712093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mvzgl,Uid:d49f2241-0666-494a-ba02-67e70ef07b4a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28ce3f9da70d807236c0496a401248325da5662f243d4ee290483a7da281da06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.151458 kubelet[2712]: E0516 16:37:46.151115 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28ce3f9da70d807236c0496a401248325da5662f243d4ee290483a7da281da06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.151458 kubelet[2712]: E0516 16:37:46.151110 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"849cf63c8aaff6847f5e4337eae2a56c4ff0ad9442805b09001a6d461b992560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.151458 kubelet[2712]: E0516 16:37:46.151155 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e30256f15b989c5b272980a157add9c7ab9a9f7e38407918405f074603926f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.151458 kubelet[2712]: E0516 16:37:46.151171 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e86d1fee5e4fa2bdfe85f00ceebfa80124a752489e2b8f5018790c1d2bfe9ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.151619 kubelet[2712]: E0516 16:37:46.151194 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e30256f15b989c5b272980a157add9c7ab9a9f7e38407918405f074603926f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f85b45fd-cdvx9" May 16 16:37:46.151619 kubelet[2712]: E0516 16:37:46.151200 2712 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"849cf63c8aaff6847f5e4337eae2a56c4ff0ad9442805b09001a6d461b992560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f85b45fd-5gtdb" May 16 16:37:46.151619 kubelet[2712]: E0516 16:37:46.151215 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2985e094364d2bc3507c66b00d5406a413ad1a6c44d4166a05057bf80b38d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.151619 kubelet[2712]: E0516 16:37:46.151216 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e30256f15b989c5b272980a157add9c7ab9a9f7e38407918405f074603926f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f85b45fd-cdvx9" May 16 16:37:46.151718 kubelet[2712]: E0516 16:37:46.151224 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"849cf63c8aaff6847f5e4337eae2a56c4ff0ad9442805b09001a6d461b992560\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f85b45fd-5gtdb" May 16 16:37:46.151718 kubelet[2712]: E0516 16:37:46.151223 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"e86d1fee5e4fa2bdfe85f00ceebfa80124a752489e2b8f5018790c1d2bfe9ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tj5l7" May 16 16:37:46.151718 kubelet[2712]: E0516 16:37:46.151247 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029726a1fb339f3c05c1c94a3c709417e70caae0b13b3ade8b15227eeef6216a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.151718 kubelet[2712]: E0516 16:37:46.151261 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029726a1fb339f3c05c1c94a3c709417e70caae0b13b3ade8b15227eeef6216a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cf5pq" May 16 16:37:46.151832 kubelet[2712]: E0516 16:37:46.151154 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a7a71384172bfc39328250a49e4c70a31e6e888eda9a79419a2582cb992bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.151832 kubelet[2712]: E0516 16:37:46.151275 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029726a1fb339f3c05c1c94a3c709417e70caae0b13b3ade8b15227eeef6216a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cf5pq" May 16 16:37:46.151832 kubelet[2712]: E0516 16:37:46.151295 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a7a71384172bfc39328250a49e4c70a31e6e888eda9a79419a2582cb992bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-bv4q6" May 16 16:37:46.151900 kubelet[2712]: E0516 16:37:46.151275 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f85b45fd-5gtdb_calico-apiserver(9a58fb16-fb0c-422a-a589-02591230be6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f85b45fd-5gtdb_calico-apiserver(9a58fb16-fb0c-422a-a589-02591230be6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"849cf63c8aaff6847f5e4337eae2a56c4ff0ad9442805b09001a6d461b992560\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f85b45fd-5gtdb" podUID="9a58fb16-fb0c-422a-a589-02591230be6e" May 16 16:37:46.151900 kubelet[2712]: E0516 16:37:46.151309 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a7a71384172bfc39328250a49e4c70a31e6e888eda9a79419a2582cb992bd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-8f77d7b6c-bv4q6" May 16 16:37:46.151900 kubelet[2712]: E0516 16:37:46.151107 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f61cd39d4fd6729e8e03d922f36bfea5a579fc6cbceea10ecd695b0ac242c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.151992 kubelet[2712]: E0516 16:37:46.151317 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cf5pq_calico-system(4a53a0ed-691f-460a-8f34-788759fa4d73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cf5pq_calico-system(4a53a0ed-691f-460a-8f34-788759fa4d73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"029726a1fb339f3c05c1c94a3c709417e70caae0b13b3ade8b15227eeef6216a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cf5pq" podUID="4a53a0ed-691f-460a-8f34-788759fa4d73" May 16 16:37:46.151992 kubelet[2712]: E0516 16:37:46.151334 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f61cd39d4fd6729e8e03d922f36bfea5a579fc6cbceea10ecd695b0ac242c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5487f9d78-q6gxw" May 16 16:37:46.151992 kubelet[2712]: E0516 16:37:46.151266 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5f85b45fd-cdvx9_calico-apiserver(4935dd62-d692-4f3f-b085-7e695611704c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f85b45fd-cdvx9_calico-apiserver(4935dd62-d692-4f3f-b085-7e695611704c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4e30256f15b989c5b272980a157add9c7ab9a9f7e38407918405f074603926f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f85b45fd-cdvx9" podUID="4935dd62-d692-4f3f-b085-7e695611704c" May 16 16:37:46.152086 kubelet[2712]: E0516 16:37:46.151102 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"def6f4e6f2b143d4bedd6a9cc04061d64a1bbc0e1681f66f2ab7c1311653dc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:46.152086 kubelet[2712]: E0516 16:37:46.151349 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6f61cd39d4fd6729e8e03d922f36bfea5a579fc6cbceea10ecd695b0ac242c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5487f9d78-q6gxw" May 16 16:37:46.152086 kubelet[2712]: E0516 16:37:46.151363 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"def6f4e6f2b143d4bedd6a9cc04061d64a1bbc0e1681f66f2ab7c1311653dc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64587f7c68-brj6s" May 16 16:37:46.152086 kubelet[2712]: E0516 16:37:46.151193 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28ce3f9da70d807236c0496a401248325da5662f243d4ee290483a7da281da06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mvzgl" May 16 16:37:46.152178 kubelet[2712]: E0516 16:37:46.151373 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5487f9d78-q6gxw_calico-system(6af1b68e-dfdc-4de3-8bba-b9af7ed32d69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5487f9d78-q6gxw_calico-system(6af1b68e-dfdc-4de3-8bba-b9af7ed32d69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6f61cd39d4fd6729e8e03d922f36bfea5a579fc6cbceea10ecd695b0ac242c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5487f9d78-q6gxw" podUID="6af1b68e-dfdc-4de3-8bba-b9af7ed32d69" May 16 16:37:46.152178 kubelet[2712]: E0516 16:37:46.151254 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e86d1fee5e4fa2bdfe85f00ceebfa80124a752489e2b8f5018790c1d2bfe9ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tj5l7" May 16 16:37:46.152178 kubelet[2712]: 
E0516 16:37:46.151384 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"def6f4e6f2b143d4bedd6a9cc04061d64a1bbc0e1681f66f2ab7c1311653dc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64587f7c68-brj6s" May 16 16:37:46.152266 kubelet[2712]: E0516 16:37:46.151421 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tj5l7_kube-system(a89ec786-83ae-4d2c-a9ed-aae32acf5fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tj5l7_kube-system(a89ec786-83ae-4d2c-a9ed-aae32acf5fad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e86d1fee5e4fa2bdfe85f00ceebfa80124a752489e2b8f5018790c1d2bfe9ec5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tj5l7" podUID="a89ec786-83ae-4d2c-a9ed-aae32acf5fad" May 16 16:37:46.152266 kubelet[2712]: E0516 16:37:46.151391 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28ce3f9da70d807236c0496a401248325da5662f243d4ee290483a7da281da06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mvzgl" May 16 16:37:46.152266 kubelet[2712]: E0516 16:37:46.151459 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mvzgl_kube-system(d49f2241-0666-494a-ba02-67e70ef07b4a)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mvzgl_kube-system(d49f2241-0666-494a-ba02-67e70ef07b4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28ce3f9da70d807236c0496a401248325da5662f243d4ee290483a7da281da06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mvzgl" podUID="d49f2241-0666-494a-ba02-67e70ef07b4a" May 16 16:37:46.152389 kubelet[2712]: E0516 16:37:46.151233 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2985e094364d2bc3507c66b00d5406a413ad1a6c44d4166a05057bf80b38d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84d465b4cc-jdqkb" May 16 16:37:46.152389 kubelet[2712]: E0516 16:37:46.151487 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2985e094364d2bc3507c66b00d5406a413ad1a6c44d4166a05057bf80b38d1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84d465b4cc-jdqkb" May 16 16:37:46.152389 kubelet[2712]: E0516 16:37:46.151508 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84d465b4cc-jdqkb_calico-apiserver(92666134-ee96-4ab1-a528-86be174720fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84d465b4cc-jdqkb_calico-apiserver(92666134-ee96-4ab1-a528-86be174720fb)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"e2985e094364d2bc3507c66b00d5406a413ad1a6c44d4166a05057bf80b38d1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84d465b4cc-jdqkb" podUID="92666134-ee96-4ab1-a528-86be174720fb" May 16 16:37:46.152470 kubelet[2712]: E0516 16:37:46.151355 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-bv4q6_calico-system(bf47523c-1e81-4bbe-a80b-55b0036c2140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-bv4q6_calico-system(bf47523c-1e81-4bbe-a80b-55b0036c2140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75a7a71384172bfc39328250a49e4c70a31e6e888eda9a79419a2582cb992bd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-bv4q6" podUID="bf47523c-1e81-4bbe-a80b-55b0036c2140" May 16 16:37:46.152470 kubelet[2712]: E0516 16:37:46.151436 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64587f7c68-brj6s_calico-system(b95434f7-5cb4-461e-b037-cf900bb483de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64587f7c68-brj6s_calico-system(b95434f7-5cb4-461e-b037-cf900bb483de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"def6f4e6f2b143d4bedd6a9cc04061d64a1bbc0e1681f66f2ab7c1311653dc35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64587f7c68-brj6s" 
podUID="b95434f7-5cb4-461e-b037-cf900bb483de" May 16 16:37:46.451270 containerd[1606]: time="2025-05-16T16:37:46.451136327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 16 16:37:48.740902 kubelet[2712]: I0516 16:37:48.740848 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:37:48.741444 kubelet[2712]: E0516 16:37:48.741255 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:49.454575 kubelet[2712]: E0516 16:37:49.454529 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:56.068932 systemd[1]: Started sshd@7-10.0.0.37:22-10.0.0.1:40792.service - OpenSSH per-connection server daemon (10.0.0.1:40792). May 16 16:37:56.122842 sshd[3815]: Accepted publickey for core from 10.0.0.1 port 40792 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:37:56.125054 sshd-session[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:37:56.132395 systemd-logind[1582]: New session 8 of user core. May 16 16:37:56.139466 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 16:37:56.291946 sshd[3817]: Connection closed by 10.0.0.1 port 40792 May 16 16:37:56.290728 sshd-session[3815]: pam_unix(sshd:session): session closed for user core May 16 16:37:56.296018 systemd[1]: sshd@7-10.0.0.37:22-10.0.0.1:40792.service: Deactivated successfully. May 16 16:37:56.298252 systemd[1]: session-8.scope: Deactivated successfully. May 16 16:37:56.299982 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit. May 16 16:37:56.301313 systemd-logind[1582]: Removed session 8. 
May 16 16:37:56.892265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226172368.mount: Deactivated successfully. May 16 16:37:58.692868 kubelet[2712]: E0516 16:37:58.692828 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:37:58.693586 containerd[1606]: time="2025-05-16T16:37:58.693353586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f85b45fd-cdvx9,Uid:4935dd62-d692-4f3f-b085-7e695611704c,Namespace:calico-apiserver,Attempt:0,}" May 16 16:37:58.694338 containerd[1606]: time="2025-05-16T16:37:58.694131619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tj5l7,Uid:a89ec786-83ae-4d2c-a9ed-aae32acf5fad,Namespace:kube-system,Attempt:0,}" May 16 16:37:58.694949 containerd[1606]: time="2025-05-16T16:37:58.694831265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5487f9d78-q6gxw,Uid:6af1b68e-dfdc-4de3-8bba-b9af7ed32d69,Namespace:calico-system,Attempt:0,}" May 16 16:37:58.695444 containerd[1606]: time="2025-05-16T16:37:58.695357204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:58.700863 containerd[1606]: time="2025-05-16T16:37:58.700806954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 16 16:37:58.718086 containerd[1606]: time="2025-05-16T16:37:58.717898671Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:58.725900 containerd[1606]: time="2025-05-16T16:37:58.725835069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:37:58.726192 containerd[1606]: time="2025-05-16T16:37:58.726046947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 12.2748687s" May 16 16:37:58.726192 containerd[1606]: time="2025-05-16T16:37:58.726089046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 16 16:37:58.765334 containerd[1606]: time="2025-05-16T16:37:58.765273175Z" level=info msg="CreateContainer within sandbox \"24a3e5070a518eedb3b4da92173045b82ba7779f67fb6f7e494b2e184b87c28a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 16:37:58.787976 containerd[1606]: time="2025-05-16T16:37:58.787875286Z" level=error msg="Failed to destroy network for sandbox \"f4758bf5cf6e4832b6426e7f8b30cc251759384f8380db3c4d49d83f627d38f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:58.793748 containerd[1606]: time="2025-05-16T16:37:58.793691886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f85b45fd-cdvx9,Uid:4935dd62-d692-4f3f-b085-7e695611704c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4758bf5cf6e4832b6426e7f8b30cc251759384f8380db3c4d49d83f627d38f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 
16:37:58.793999 containerd[1606]: time="2025-05-16T16:37:58.793738575Z" level=error msg="Failed to destroy network for sandbox \"0d584ec01ef4e63cb8d5a5e4b36dfb5ec1ee2fa1048a0dc7ee36e66977731b38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:58.794322 kubelet[2712]: E0516 16:37:58.794260 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4758bf5cf6e4832b6426e7f8b30cc251759384f8380db3c4d49d83f627d38f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:58.794409 kubelet[2712]: E0516 16:37:58.794351 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4758bf5cf6e4832b6426e7f8b30cc251759384f8380db3c4d49d83f627d38f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f85b45fd-cdvx9" May 16 16:37:58.794409 kubelet[2712]: E0516 16:37:58.794375 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4758bf5cf6e4832b6426e7f8b30cc251759384f8380db3c4d49d83f627d38f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f85b45fd-cdvx9" May 16 16:37:58.795323 kubelet[2712]: E0516 16:37:58.795239 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5f85b45fd-cdvx9_calico-apiserver(4935dd62-d692-4f3f-b085-7e695611704c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f85b45fd-cdvx9_calico-apiserver(4935dd62-d692-4f3f-b085-7e695611704c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4758bf5cf6e4832b6426e7f8b30cc251759384f8380db3c4d49d83f627d38f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f85b45fd-cdvx9" podUID="4935dd62-d692-4f3f-b085-7e695611704c" May 16 16:37:58.795521 containerd[1606]: time="2025-05-16T16:37:58.795441196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tj5l7,Uid:a89ec786-83ae-4d2c-a9ed-aae32acf5fad,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d584ec01ef4e63cb8d5a5e4b36dfb5ec1ee2fa1048a0dc7ee36e66977731b38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:58.795900 kubelet[2712]: E0516 16:37:58.795780 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d584ec01ef4e63cb8d5a5e4b36dfb5ec1ee2fa1048a0dc7ee36e66977731b38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:58.795900 kubelet[2712]: E0516 16:37:58.795883 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d584ec01ef4e63cb8d5a5e4b36dfb5ec1ee2fa1048a0dc7ee36e66977731b38\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tj5l7" May 16 16:37:58.795900 kubelet[2712]: E0516 16:37:58.795904 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d584ec01ef4e63cb8d5a5e4b36dfb5ec1ee2fa1048a0dc7ee36e66977731b38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tj5l7" May 16 16:37:58.796331 kubelet[2712]: E0516 16:37:58.795951 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tj5l7_kube-system(a89ec786-83ae-4d2c-a9ed-aae32acf5fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tj5l7_kube-system(a89ec786-83ae-4d2c-a9ed-aae32acf5fad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d584ec01ef4e63cb8d5a5e4b36dfb5ec1ee2fa1048a0dc7ee36e66977731b38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tj5l7" podUID="a89ec786-83ae-4d2c-a9ed-aae32acf5fad" May 16 16:37:58.802500 containerd[1606]: time="2025-05-16T16:37:58.802447674Z" level=error msg="Failed to destroy network for sandbox \"92ebb527de673db9a6ea135f76243dd1d1944e272be8349ac252e49ac3454ff3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:58.803662 containerd[1606]: time="2025-05-16T16:37:58.803612936Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5487f9d78-q6gxw,Uid:6af1b68e-dfdc-4de3-8bba-b9af7ed32d69,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92ebb527de673db9a6ea135f76243dd1d1944e272be8349ac252e49ac3454ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:58.803756 containerd[1606]: time="2025-05-16T16:37:58.803728514Z" level=info msg="Container 8073e647c3a5f5282145b29ff860e3f6b2d5208a4b04502a351d7f418ecac1d3: CDI devices from CRI Config.CDIDevices: []" May 16 16:37:58.803846 kubelet[2712]: E0516 16:37:58.803817 2712 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92ebb527de673db9a6ea135f76243dd1d1944e272be8349ac252e49ac3454ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:37:58.803889 kubelet[2712]: E0516 16:37:58.803857 2712 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92ebb527de673db9a6ea135f76243dd1d1944e272be8349ac252e49ac3454ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5487f9d78-q6gxw" May 16 16:37:58.803917 kubelet[2712]: E0516 16:37:58.803885 2712 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92ebb527de673db9a6ea135f76243dd1d1944e272be8349ac252e49ac3454ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5487f9d78-q6gxw" May 16 16:37:58.803944 kubelet[2712]: E0516 16:37:58.803921 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5487f9d78-q6gxw_calico-system(6af1b68e-dfdc-4de3-8bba-b9af7ed32d69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5487f9d78-q6gxw_calico-system(6af1b68e-dfdc-4de3-8bba-b9af7ed32d69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92ebb527de673db9a6ea135f76243dd1d1944e272be8349ac252e49ac3454ff3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5487f9d78-q6gxw" podUID="6af1b68e-dfdc-4de3-8bba-b9af7ed32d69" May 16 16:37:58.820071 containerd[1606]: time="2025-05-16T16:37:58.820006040Z" level=info msg="CreateContainer within sandbox \"24a3e5070a518eedb3b4da92173045b82ba7779f67fb6f7e494b2e184b87c28a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8073e647c3a5f5282145b29ff860e3f6b2d5208a4b04502a351d7f418ecac1d3\"" May 16 16:37:58.820601 containerd[1606]: time="2025-05-16T16:37:58.820544431Z" level=info msg="StartContainer for \"8073e647c3a5f5282145b29ff860e3f6b2d5208a4b04502a351d7f418ecac1d3\"" May 16 16:37:58.822702 containerd[1606]: time="2025-05-16T16:37:58.822661033Z" level=info msg="connecting to shim 8073e647c3a5f5282145b29ff860e3f6b2d5208a4b04502a351d7f418ecac1d3" address="unix:///run/containerd/s/0230b55fedd642a7d6e0353c1db6469180f64c1da9b12f7d949070579d049312" protocol=ttrpc version=3 May 16 16:37:58.981419 systemd[1]: Started cri-containerd-8073e647c3a5f5282145b29ff860e3f6b2d5208a4b04502a351d7f418ecac1d3.scope - libcontainer container 
8073e647c3a5f5282145b29ff860e3f6b2d5208a4b04502a351d7f418ecac1d3. May 16 16:37:59.058928 containerd[1606]: time="2025-05-16T16:37:59.058881769Z" level=info msg="StartContainer for \"8073e647c3a5f5282145b29ff860e3f6b2d5208a4b04502a351d7f418ecac1d3\" returns successfully" May 16 16:37:59.135799 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 16:37:59.135990 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 16 16:37:59.390386 kubelet[2712]: I0516 16:37:59.390335 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b95434f7-5cb4-461e-b037-cf900bb483de-whisker-backend-key-pair\") pod \"b95434f7-5cb4-461e-b037-cf900bb483de\" (UID: \"b95434f7-5cb4-461e-b037-cf900bb483de\") " May 16 16:37:59.390386 kubelet[2712]: I0516 16:37:59.390378 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b95434f7-5cb4-461e-b037-cf900bb483de-whisker-ca-bundle\") pod \"b95434f7-5cb4-461e-b037-cf900bb483de\" (UID: \"b95434f7-5cb4-461e-b037-cf900bb483de\") " May 16 16:37:59.390386 kubelet[2712]: I0516 16:37:59.390396 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lndjw\" (UniqueName: \"kubernetes.io/projected/b95434f7-5cb4-461e-b037-cf900bb483de-kube-api-access-lndjw\") pod \"b95434f7-5cb4-461e-b037-cf900bb483de\" (UID: \"b95434f7-5cb4-461e-b037-cf900bb483de\") " May 16 16:37:59.390997 kubelet[2712]: I0516 16:37:59.390961 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b95434f7-5cb4-461e-b037-cf900bb483de-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b95434f7-5cb4-461e-b037-cf900bb483de" (UID: "b95434f7-5cb4-461e-b037-cf900bb483de"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 16:37:59.393979 kubelet[2712]: I0516 16:37:59.393949 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95434f7-5cb4-461e-b037-cf900bb483de-kube-api-access-lndjw" (OuterVolumeSpecName: "kube-api-access-lndjw") pod "b95434f7-5cb4-461e-b037-cf900bb483de" (UID: "b95434f7-5cb4-461e-b037-cf900bb483de"). InnerVolumeSpecName "kube-api-access-lndjw". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 16:37:59.394309 kubelet[2712]: I0516 16:37:59.394252 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b95434f7-5cb4-461e-b037-cf900bb483de-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b95434f7-5cb4-461e-b037-cf900bb483de" (UID: "b95434f7-5cb4-461e-b037-cf900bb483de"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 16:37:59.490770 kubelet[2712]: I0516 16:37:59.490724 2712 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b95434f7-5cb4-461e-b037-cf900bb483de-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 16 16:37:59.490770 kubelet[2712]: I0516 16:37:59.490754 2712 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b95434f7-5cb4-461e-b037-cf900bb483de-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 16 16:37:59.490770 kubelet[2712]: I0516 16:37:59.490766 2712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lndjw\" (UniqueName: \"kubernetes.io/projected/b95434f7-5cb4-461e-b037-cf900bb483de-kube-api-access-lndjw\") on node \"localhost\" DevicePath \"\"" May 16 16:37:59.701075 systemd[1]: run-netns-cni\x2d387677f2\x2d1b33\x2d9fe3\x2de938\x2db05524fbf2ec.mount: Deactivated successfully. 
May 16 16:37:59.701192 systemd[1]: run-netns-cni\x2df3c2431d\x2d35b4\x2dfd49\x2df123\x2dea1ecd485716.mount: Deactivated successfully. May 16 16:37:59.701258 systemd[1]: run-netns-cni\x2dd7444c3e\x2d0d48\x2db11c\x2d73b9\x2db661dd4abbc3.mount: Deactivated successfully. May 16 16:37:59.701340 systemd[1]: var-lib-kubelet-pods-b95434f7\x2d5cb4\x2d461e\x2db037\x2dcf900bb483de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlndjw.mount: Deactivated successfully. May 16 16:37:59.701412 systemd[1]: var-lib-kubelet-pods-b95434f7\x2d5cb4\x2d461e\x2db037\x2dcf900bb483de-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 16 16:37:59.717998 systemd[1]: Removed slice kubepods-besteffort-podb95434f7_5cb4_461e_b037_cf900bb483de.slice - libcontainer container kubepods-besteffort-podb95434f7_5cb4_461e_b037_cf900bb483de.slice. May 16 16:37:59.740060 kubelet[2712]: I0516 16:37:59.739976 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-brmm8" podStartSLOduration=2.284604123 podStartE2EDuration="26.739954742s" podCreationTimestamp="2025-05-16 16:37:33 +0000 UTC" firstStartedPulling="2025-05-16 16:37:34.280597119 +0000 UTC m=+16.999489931" lastFinishedPulling="2025-05-16 16:37:58.735947739 +0000 UTC m=+41.454840550" observedRunningTime="2025-05-16 16:37:59.727828337 +0000 UTC m=+42.446721149" watchObservedRunningTime="2025-05-16 16:37:59.739954742 +0000 UTC m=+42.458847543" May 16 16:37:59.775462 systemd[1]: Created slice kubepods-besteffort-podd1830161_185c_4edf_ba88_ac20dff9bb5d.slice - libcontainer container kubepods-besteffort-podd1830161_185c_4edf_ba88_ac20dff9bb5d.slice. 
May 16 16:37:59.893109 kubelet[2712]: I0516 16:37:59.893039 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kf26\" (UniqueName: \"kubernetes.io/projected/d1830161-185c-4edf-ba88-ac20dff9bb5d-kube-api-access-2kf26\") pod \"whisker-5784d44d8b-w7g52\" (UID: \"d1830161-185c-4edf-ba88-ac20dff9bb5d\") " pod="calico-system/whisker-5784d44d8b-w7g52" May 16 16:37:59.893309 kubelet[2712]: I0516 16:37:59.893093 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d1830161-185c-4edf-ba88-ac20dff9bb5d-whisker-backend-key-pair\") pod \"whisker-5784d44d8b-w7g52\" (UID: \"d1830161-185c-4edf-ba88-ac20dff9bb5d\") " pod="calico-system/whisker-5784d44d8b-w7g52" May 16 16:37:59.893309 kubelet[2712]: I0516 16:37:59.893152 2712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1830161-185c-4edf-ba88-ac20dff9bb5d-whisker-ca-bundle\") pod \"whisker-5784d44d8b-w7g52\" (UID: \"d1830161-185c-4edf-ba88-ac20dff9bb5d\") " pod="calico-system/whisker-5784d44d8b-w7g52" May 16 16:38:00.081299 containerd[1606]: time="2025-05-16T16:38:00.081244099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5784d44d8b-w7g52,Uid:d1830161-185c-4edf-ba88-ac20dff9bb5d,Namespace:calico-system,Attempt:0,}" May 16 16:38:00.280182 systemd-networkd[1494]: cali9dbf5211bc4: Link UP May 16 16:38:00.280976 systemd-networkd[1494]: cali9dbf5211bc4: Gained carrier May 16 16:38:00.296057 containerd[1606]: 2025-05-16 16:38:00.161 [INFO][4000] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 16:38:00.296057 containerd[1606]: 2025-05-16 16:38:00.177 [INFO][4000] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--5784d44d8b--w7g52-eth0 whisker-5784d44d8b- calico-system d1830161-185c-4edf-ba88-ac20dff9bb5d 975 0 2025-05-16 16:37:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5784d44d8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5784d44d8b-w7g52 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9dbf5211bc4 [] [] }} ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Namespace="calico-system" Pod="whisker-5784d44d8b-w7g52" WorkloadEndpoint="localhost-k8s-whisker--5784d44d8b--w7g52-" May 16 16:38:00.296057 containerd[1606]: 2025-05-16 16:38:00.177 [INFO][4000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Namespace="calico-system" Pod="whisker-5784d44d8b-w7g52" WorkloadEndpoint="localhost-k8s-whisker--5784d44d8b--w7g52-eth0" May 16 16:38:00.296057 containerd[1606]: 2025-05-16 16:38:00.240 [INFO][4016] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" HandleID="k8s-pod-network.1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Workload="localhost-k8s-whisker--5784d44d8b--w7g52-eth0" May 16 16:38:00.296330 containerd[1606]: 2025-05-16 16:38:00.241 [INFO][4016] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" HandleID="k8s-pod-network.1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Workload="localhost-k8s-whisker--5784d44d8b--w7g52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059c780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5784d44d8b-w7g52", "timestamp":"2025-05-16 16:38:00.240760175 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:38:00.296330 containerd[1606]: 2025-05-16 16:38:00.241 [INFO][4016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:38:00.296330 containerd[1606]: 2025-05-16 16:38:00.241 [INFO][4016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 16:38:00.296330 containerd[1606]: 2025-05-16 16:38:00.241 [INFO][4016] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:38:00.296330 containerd[1606]: 2025-05-16 16:38:00.248 [INFO][4016] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" host="localhost" May 16 16:38:00.296330 containerd[1606]: 2025-05-16 16:38:00.253 [INFO][4016] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:38:00.296330 containerd[1606]: 2025-05-16 16:38:00.257 [INFO][4016] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:38:00.296330 containerd[1606]: 2025-05-16 16:38:00.258 [INFO][4016] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:38:00.296330 containerd[1606]: 2025-05-16 16:38:00.259 [INFO][4016] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:38:00.296330 containerd[1606]: 2025-05-16 16:38:00.260 [INFO][4016] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" host="localhost" May 16 16:38:00.296639 containerd[1606]: 2025-05-16 16:38:00.261 [INFO][4016] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84 May 16 16:38:00.296639 containerd[1606]: 2025-05-16 16:38:00.265 [INFO][4016] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" host="localhost" May 16 16:38:00.296639 containerd[1606]: 2025-05-16 16:38:00.269 [INFO][4016] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" host="localhost" May 16 16:38:00.296639 containerd[1606]: 2025-05-16 16:38:00.269 [INFO][4016] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" host="localhost" May 16 16:38:00.296639 containerd[1606]: 2025-05-16 16:38:00.269 [INFO][4016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 16:38:00.296639 containerd[1606]: 2025-05-16 16:38:00.269 [INFO][4016] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" HandleID="k8s-pod-network.1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Workload="localhost-k8s-whisker--5784d44d8b--w7g52-eth0" May 16 16:38:00.296807 containerd[1606]: 2025-05-16 16:38:00.272 [INFO][4000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Namespace="calico-system" Pod="whisker-5784d44d8b-w7g52" WorkloadEndpoint="localhost-k8s-whisker--5784d44d8b--w7g52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5784d44d8b--w7g52-eth0", GenerateName:"whisker-5784d44d8b-", Namespace:"calico-system", SelfLink:"", UID:"d1830161-185c-4edf-ba88-ac20dff9bb5d", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5784d44d8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5784d44d8b-w7g52", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9dbf5211bc4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:00.296807 containerd[1606]: 2025-05-16 16:38:00.273 [INFO][4000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Namespace="calico-system" Pod="whisker-5784d44d8b-w7g52" WorkloadEndpoint="localhost-k8s-whisker--5784d44d8b--w7g52-eth0" May 16 16:38:00.296915 containerd[1606]: 2025-05-16 16:38:00.273 [INFO][4000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dbf5211bc4 ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Namespace="calico-system" Pod="whisker-5784d44d8b-w7g52" WorkloadEndpoint="localhost-k8s-whisker--5784d44d8b--w7g52-eth0" May 16 16:38:00.296915 containerd[1606]: 2025-05-16 16:38:00.280 [INFO][4000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Namespace="calico-system" Pod="whisker-5784d44d8b-w7g52" WorkloadEndpoint="localhost-k8s-whisker--5784d44d8b--w7g52-eth0" May 16 16:38:00.296973 containerd[1606]: 2025-05-16 16:38:00.281 [INFO][4000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Namespace="calico-system" Pod="whisker-5784d44d8b-w7g52" WorkloadEndpoint="localhost-k8s-whisker--5784d44d8b--w7g52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5784d44d8b--w7g52-eth0", GenerateName:"whisker-5784d44d8b-", Namespace:"calico-system", SelfLink:"", UID:"d1830161-185c-4edf-ba88-ac20dff9bb5d", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 59, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5784d44d8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84", Pod:"whisker-5784d44d8b-w7g52", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9dbf5211bc4", MAC:"ba:36:74:c6:b3:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:00.297050 containerd[1606]: 2025-05-16 16:38:00.292 [INFO][4000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" Namespace="calico-system" Pod="whisker-5784d44d8b-w7g52" WorkloadEndpoint="localhost-k8s-whisker--5784d44d8b--w7g52-eth0" May 16 16:38:00.372774 kubelet[2712]: E0516 16:38:00.372658 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:00.373874 containerd[1606]: time="2025-05-16T16:38:00.373104733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d465b4cc-jdqkb,Uid:92666134-ee96-4ab1-a528-86be174720fb,Namespace:calico-apiserver,Attempt:0,}" May 16 16:38:00.374295 containerd[1606]: time="2025-05-16T16:38:00.374252591Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mvzgl,Uid:d49f2241-0666-494a-ba02-67e70ef07b4a,Namespace:kube-system,Attempt:0,}" May 16 16:38:00.375450 containerd[1606]: time="2025-05-16T16:38:00.375424194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f85b45fd-5gtdb,Uid:9a58fb16-fb0c-422a-a589-02591230be6e,Namespace:calico-apiserver,Attempt:0,}" May 16 16:38:00.375943 containerd[1606]: time="2025-05-16T16:38:00.375872707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-bv4q6,Uid:bf47523c-1e81-4bbe-a80b-55b0036c2140,Namespace:calico-system,Attempt:0,}" May 16 16:38:00.426661 containerd[1606]: time="2025-05-16T16:38:00.426601448Z" level=info msg="connecting to shim 1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84" address="unix:///run/containerd/s/5a1a8e5c6a518516046f80ec2affca7e60f6c74e907927f50582c965628e5860" namespace=k8s.io protocol=ttrpc version=3 May 16 16:38:00.651780 systemd[1]: Started cri-containerd-1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84.scope - libcontainer container 1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84. 
May 16 16:38:00.672914 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:38:00.717253 systemd-networkd[1494]: calic93e18abd26: Link UP May 16 16:38:00.718454 systemd-networkd[1494]: calic93e18abd26: Gained carrier May 16 16:38:00.734612 containerd[1606]: 2025-05-16 16:38:00.484 [INFO][4129] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 16:38:00.734612 containerd[1606]: 2025-05-16 16:38:00.556 [INFO][4129] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0 coredns-7c65d6cfc9- kube-system d49f2241-0666-494a-ba02-67e70ef07b4a 830 0 2025-05-16 16:37:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mvzgl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic93e18abd26 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvzgl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvzgl-" May 16 16:38:00.734612 containerd[1606]: 2025-05-16 16:38:00.556 [INFO][4129] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvzgl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0" May 16 16:38:00.734612 containerd[1606]: 2025-05-16 16:38:00.633 [INFO][4206] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" HandleID="k8s-pod-network.990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" 
Workload="localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0" May 16 16:38:00.734853 containerd[1606]: 2025-05-16 16:38:00.633 [INFO][4206] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" HandleID="k8s-pod-network.990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" Workload="localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000351610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mvzgl", "timestamp":"2025-05-16 16:38:00.633412519 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:38:00.734853 containerd[1606]: 2025-05-16 16:38:00.633 [INFO][4206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:38:00.734853 containerd[1606]: 2025-05-16 16:38:00.633 [INFO][4206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 16:38:00.734853 containerd[1606]: 2025-05-16 16:38:00.633 [INFO][4206] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:38:00.734853 containerd[1606]: 2025-05-16 16:38:00.643 [INFO][4206] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" host="localhost" May 16 16:38:00.734853 containerd[1606]: 2025-05-16 16:38:00.653 [INFO][4206] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:38:00.734853 containerd[1606]: 2025-05-16 16:38:00.660 [INFO][4206] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:38:00.734853 containerd[1606]: 2025-05-16 16:38:00.665 [INFO][4206] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:38:00.734853 containerd[1606]: 2025-05-16 16:38:00.669 [INFO][4206] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:38:00.734853 containerd[1606]: 2025-05-16 16:38:00.669 [INFO][4206] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" host="localhost" May 16 16:38:00.735063 containerd[1606]: 2025-05-16 16:38:00.671 [INFO][4206] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f May 16 16:38:00.735063 containerd[1606]: 2025-05-16 16:38:00.676 [INFO][4206] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" host="localhost" May 16 16:38:00.735063 containerd[1606]: 2025-05-16 16:38:00.708 [INFO][4206] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" host="localhost" May 16 16:38:00.735063 containerd[1606]: 2025-05-16 16:38:00.708 [INFO][4206] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" host="localhost" May 16 16:38:00.735063 containerd[1606]: 2025-05-16 16:38:00.709 [INFO][4206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:38:00.735063 containerd[1606]: 2025-05-16 16:38:00.709 [INFO][4206] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" HandleID="k8s-pod-network.990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" Workload="localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0" May 16 16:38:00.735205 containerd[1606]: 2025-05-16 16:38:00.712 [INFO][4129] cni-plugin/k8s.go 418: Populated endpoint ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvzgl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d49f2241-0666-494a-ba02-67e70ef07b4a", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mvzgl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic93e18abd26", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:00.735357 containerd[1606]: 2025-05-16 16:38:00.712 [INFO][4129] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvzgl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0" May 16 16:38:00.735357 containerd[1606]: 2025-05-16 16:38:00.712 [INFO][4129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic93e18abd26 ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvzgl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0" May 16 16:38:00.735357 containerd[1606]: 2025-05-16 16:38:00.718 [INFO][4129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvzgl" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0" May 16 16:38:00.735429 containerd[1606]: 2025-05-16 16:38:00.719 [INFO][4129] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvzgl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d49f2241-0666-494a-ba02-67e70ef07b4a", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f", Pod:"coredns-7c65d6cfc9-mvzgl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic93e18abd26", MAC:"3a:80:22:f3:72:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:00.735429 containerd[1606]: 2025-05-16 16:38:00.729 [INFO][4129] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mvzgl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mvzgl-eth0" May 16 16:38:00.740303 containerd[1606]: time="2025-05-16T16:38:00.740078024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5784d44d8b-w7g52,Uid:d1830161-185c-4edf-ba88-ac20dff9bb5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1af45da010a1640248180694def0df08506143ca10f84afdc3b4c790affe0a84\"" May 16 16:38:00.742149 containerd[1606]: time="2025-05-16T16:38:00.742093193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 16:38:00.768631 containerd[1606]: time="2025-05-16T16:38:00.768523547Z" level=info msg="connecting to shim 990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f" address="unix:///run/containerd/s/3e6d232e6759bff0ed65fffcbcbdf75009345b56996b42840e8daf6482cc5c6f" namespace=k8s.io protocol=ttrpc version=3 May 16 16:38:00.792473 systemd-networkd[1494]: cali0bdb184a241: Link UP May 16 16:38:00.793333 systemd-networkd[1494]: cali0bdb184a241: Gained carrier May 16 16:38:00.808435 systemd[1]: Started cri-containerd-990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f.scope - libcontainer container 990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f. 
May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.608 [INFO][4143] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.617 [INFO][4143] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0 calico-apiserver-5f85b45fd- calico-apiserver 9a58fb16-fb0c-422a-a589-02591230be6e 832 0 2025-05-16 16:37:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f85b45fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f85b45fd-5gtdb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0bdb184a241 [] [] }} ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-5gtdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.617 [INFO][4143] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-5gtdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.696 [INFO][4244] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" HandleID="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Workload="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.696 [INFO][4244] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" HandleID="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Workload="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f85b45fd-5gtdb", "timestamp":"2025-05-16 16:38:00.696274724 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.696 [INFO][4244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.709 [INFO][4244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.709 [INFO][4244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.744 [INFO][4244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" host="localhost" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.750 [INFO][4244] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.764 [INFO][4244] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.767 [INFO][4244] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.770 [INFO][4244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.770 [INFO][4244] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" host="localhost" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.772 [INFO][4244] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344 May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.777 [INFO][4244] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" host="localhost" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.784 [INFO][4244] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" host="localhost" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.784 [INFO][4244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" host="localhost" May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.784 [INFO][4244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 16:38:00.816430 containerd[1606]: 2025-05-16 16:38:00.784 [INFO][4244] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" HandleID="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Workload="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" May 16 16:38:00.817054 containerd[1606]: 2025-05-16 16:38:00.790 [INFO][4143] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-5gtdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0", GenerateName:"calico-apiserver-5f85b45fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a58fb16-fb0c-422a-a589-02591230be6e", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f85b45fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f85b45fd-5gtdb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0bdb184a241", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:00.817054 containerd[1606]: 2025-05-16 16:38:00.790 [INFO][4143] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-5gtdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" May 16 16:38:00.817054 containerd[1606]: 2025-05-16 16:38:00.790 [INFO][4143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0bdb184a241 ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-5gtdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" May 16 16:38:00.817054 containerd[1606]: 2025-05-16 16:38:00.803 [INFO][4143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-5gtdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" May 16 16:38:00.817054 containerd[1606]: 2025-05-16 16:38:00.804 [INFO][4143] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-5gtdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0", GenerateName:"calico-apiserver-5f85b45fd-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"9a58fb16-fb0c-422a-a589-02591230be6e", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f85b45fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344", Pod:"calico-apiserver-5f85b45fd-5gtdb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0bdb184a241", MAC:"12:16:8c:bb:f0:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:00.817054 containerd[1606]: 2025-05-16 16:38:00.812 [INFO][4143] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-5gtdb" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" May 16 16:38:00.826971 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:38:00.846272 containerd[1606]: time="2025-05-16T16:38:00.845586633Z" level=info msg="connecting to shim 
9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" address="unix:///run/containerd/s/c0486373826b8792950524fbbed53b16d4f3e21ed8db782460fc1112dc77b590" namespace=k8s.io protocol=ttrpc version=3 May 16 16:38:00.874433 containerd[1606]: time="2025-05-16T16:38:00.874396994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mvzgl,Uid:d49f2241-0666-494a-ba02-67e70ef07b4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f\"" May 16 16:38:00.875676 kubelet[2712]: E0516 16:38:00.875654 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:00.879035 containerd[1606]: time="2025-05-16T16:38:00.879007301Z" level=info msg="CreateContainer within sandbox \"990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 16:38:00.880461 systemd[1]: Started cri-containerd-9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344.scope - libcontainer container 9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344. 
May 16 16:38:00.893626 containerd[1606]: time="2025-05-16T16:38:00.893568159Z" level=info msg="Container f796600c2fd3ce041ab3b4476aa5cf9210f737933a7eba0123710c087b7c2ff2: CDI devices from CRI Config.CDIDevices: []" May 16 16:38:00.897335 systemd-networkd[1494]: calif8a8723c262: Link UP May 16 16:38:00.898405 systemd-networkd[1494]: calif8a8723c262: Gained carrier May 16 16:38:00.903057 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:38:00.904090 containerd[1606]: time="2025-05-16T16:38:00.904054997Z" level=info msg="CreateContainer within sandbox \"990f938a7a36ca804c31aa980cef60928b1795c52b2b986aea696f14be94bd9f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f796600c2fd3ce041ab3b4476aa5cf9210f737933a7eba0123710c087b7c2ff2\"" May 16 16:38:00.906489 containerd[1606]: time="2025-05-16T16:38:00.906468224Z" level=info msg="StartContainer for \"f796600c2fd3ce041ab3b4476aa5cf9210f737933a7eba0123710c087b7c2ff2\"" May 16 16:38:00.907589 containerd[1606]: time="2025-05-16T16:38:00.907569344Z" level=info msg="connecting to shim f796600c2fd3ce041ab3b4476aa5cf9210f737933a7eba0123710c087b7c2ff2" address="unix:///run/containerd/s/3e6d232e6759bff0ed65fffcbcbdf75009345b56996b42840e8daf6482cc5c6f" protocol=ttrpc version=3 May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.544 [INFO][4128] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.587 [INFO][4128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0 calico-apiserver-84d465b4cc- calico-apiserver 92666134-ee96-4ab1-a528-86be174720fb 825 0 2025-05-16 16:37:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84d465b4cc projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84d465b4cc-jdqkb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif8a8723c262 [] [] }} ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Namespace="calico-apiserver" Pod="calico-apiserver-84d465b4cc-jdqkb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.587 [INFO][4128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Namespace="calico-apiserver" Pod="calico-apiserver-84d465b4cc-jdqkb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.699 [INFO][4232] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" HandleID="k8s-pod-network.535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Workload="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.699 [INFO][4232] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" HandleID="k8s-pod-network.535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Workload="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235e60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84d465b4cc-jdqkb", "timestamp":"2025-05-16 16:38:00.699006089 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.699 [INFO][4232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.784 [INFO][4232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.784 [INFO][4232] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.845 [INFO][4232] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" host="localhost" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.861 [INFO][4232] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.866 [INFO][4232] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.868 [INFO][4232] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.871 [INFO][4232] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.871 [INFO][4232] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" host="localhost" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.872 [INFO][4232] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207 May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.879 [INFO][4232] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" host="localhost" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.890 [INFO][4232] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" host="localhost" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.890 [INFO][4232] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" host="localhost" May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.890 [INFO][4232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:38:00.915254 containerd[1606]: 2025-05-16 16:38:00.890 [INFO][4232] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" HandleID="k8s-pod-network.535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Workload="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0" May 16 16:38:00.915845 containerd[1606]: 2025-05-16 16:38:00.894 [INFO][4128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Namespace="calico-apiserver" Pod="calico-apiserver-84d465b4cc-jdqkb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0", GenerateName:"calico-apiserver-84d465b4cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"92666134-ee96-4ab1-a528-86be174720fb", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 31, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d465b4cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84d465b4cc-jdqkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8a8723c262", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:00.915845 containerd[1606]: 2025-05-16 16:38:00.894 [INFO][4128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Namespace="calico-apiserver" Pod="calico-apiserver-84d465b4cc-jdqkb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0" May 16 16:38:00.915845 containerd[1606]: 2025-05-16 16:38:00.894 [INFO][4128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8a8723c262 ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Namespace="calico-apiserver" Pod="calico-apiserver-84d465b4cc-jdqkb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0" May 16 16:38:00.915845 containerd[1606]: 2025-05-16 16:38:00.899 [INFO][4128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Namespace="calico-apiserver" Pod="calico-apiserver-84d465b4cc-jdqkb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0" May 16 16:38:00.915845 containerd[1606]: 2025-05-16 16:38:00.899 [INFO][4128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Namespace="calico-apiserver" Pod="calico-apiserver-84d465b4cc-jdqkb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0", GenerateName:"calico-apiserver-84d465b4cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"92666134-ee96-4ab1-a528-86be174720fb", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d465b4cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207", Pod:"calico-apiserver-84d465b4cc-jdqkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8a8723c262", MAC:"ca:2a:df:e5:85:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:00.915845 containerd[1606]: 2025-05-16 16:38:00.911 [INFO][4128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" Namespace="calico-apiserver" Pod="calico-apiserver-84d465b4cc-jdqkb" WorkloadEndpoint="localhost-k8s-calico--apiserver--84d465b4cc--jdqkb-eth0" May 16 16:38:00.937487 systemd[1]: Started cri-containerd-f796600c2fd3ce041ab3b4476aa5cf9210f737933a7eba0123710c087b7c2ff2.scope - libcontainer container f796600c2fd3ce041ab3b4476aa5cf9210f737933a7eba0123710c087b7c2ff2. May 16 16:38:00.961322 containerd[1606]: time="2025-05-16T16:38:00.961212616Z" level=info msg="connecting to shim 535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207" address="unix:///run/containerd/s/4c23ef99bf7606bb35e6a6da72776a30235d7b2e696b56eb971b7010d1de142f" namespace=k8s.io protocol=ttrpc version=3 May 16 16:38:00.970053 containerd[1606]: time="2025-05-16T16:38:00.970014515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f85b45fd-5gtdb,Uid:9a58fb16-fb0c-422a-a589-02591230be6e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344\"" May 16 16:38:00.974629 containerd[1606]: time="2025-05-16T16:38:00.974526068Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:38:00.977634 containerd[1606]: time="2025-05-16T16:38:00.977598606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 16:38:00.984931 
containerd[1606]: time="2025-05-16T16:38:00.984875678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:38:00.986297 kubelet[2712]: E0516 16:38:00.985085 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:38:00.986297 kubelet[2712]: E0516 16:38:00.985144 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:38:00.986428 containerd[1606]: time="2025-05-16T16:38:00.985567880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 16:38:00.986833 kubelet[2712]: E0516 16:38:00.986758 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5d4eda0e24a84d1ba6397664228d76dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kf26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5784d44d8b-w7g52_calico-system(d1830161-185c-4edf-ba88-ac20dff9bb5d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:38:01.004354 containerd[1606]: 
time="2025-05-16T16:38:01.003927308Z" level=info msg="StartContainer for \"f796600c2fd3ce041ab3b4476aa5cf9210f737933a7eba0123710c087b7c2ff2\" returns successfully" May 16 16:38:01.006436 systemd[1]: Started cri-containerd-535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207.scope - libcontainer container 535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207. May 16 16:38:01.014890 systemd-networkd[1494]: cali3ee3196948f: Link UP May 16 16:38:01.015987 systemd-networkd[1494]: cali3ee3196948f: Gained carrier May 16 16:38:01.023582 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.634 [INFO][4192] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.646 [INFO][4192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0 goldmane-8f77d7b6c- calico-system bf47523c-1e81-4bbe-a80b-55b0036c2140 828 0 2025-05-16 16:37:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-8f77d7b6c-bv4q6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3ee3196948f [] [] }} ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bv4q6" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--bv4q6-" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.646 [INFO][4192] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bv4q6" 
WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.714 [INFO][4269] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" HandleID="k8s-pod-network.952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Workload="localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.715 [INFO][4269] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" HandleID="k8s-pod-network.952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Workload="localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052cc40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-8f77d7b6c-bv4q6", "timestamp":"2025-05-16 16:38:00.714805746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.716 [INFO][4269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.890 [INFO][4269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.890 [INFO][4269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.946 [INFO][4269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" host="localhost" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.964 [INFO][4269] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.972 [INFO][4269] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.974 [INFO][4269] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.977 [INFO][4269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.977 [INFO][4269] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" host="localhost" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.982 [INFO][4269] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:00.991 [INFO][4269] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" host="localhost" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:01.002 [INFO][4269] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" host="localhost" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:01.002 [INFO][4269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" host="localhost" May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:01.003 [INFO][4269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:38:01.029479 containerd[1606]: 2025-05-16 16:38:01.003 [INFO][4269] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" HandleID="k8s-pod-network.952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Workload="localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0" May 16 16:38:01.030038 containerd[1606]: 2025-05-16 16:38:01.009 [INFO][4192] cni-plugin/k8s.go 418: Populated endpoint ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bv4q6" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"bf47523c-1e81-4bbe-a80b-55b0036c2140", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-8f77d7b6c-bv4q6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3ee3196948f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:01.030038 containerd[1606]: 2025-05-16 16:38:01.009 [INFO][4192] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bv4q6" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0" May 16 16:38:01.030038 containerd[1606]: 2025-05-16 16:38:01.009 [INFO][4192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ee3196948f ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bv4q6" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0" May 16 16:38:01.030038 containerd[1606]: 2025-05-16 16:38:01.016 [INFO][4192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bv4q6" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0" May 16 16:38:01.030038 containerd[1606]: 2025-05-16 16:38:01.017 [INFO][4192] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bv4q6" 
WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"bf47523c-1e81-4bbe-a80b-55b0036c2140", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a", Pod:"goldmane-8f77d7b6c-bv4q6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3ee3196948f", MAC:"e2:42:eb:63:1a:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:01.030038 containerd[1606]: 2025-05-16 16:38:01.025 [INFO][4192] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" Namespace="calico-system" Pod="goldmane-8f77d7b6c-bv4q6" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--bv4q6-eth0" May 16 16:38:01.064376 containerd[1606]: time="2025-05-16T16:38:01.063306297Z" level=info msg="connecting to shim 
952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a" address="unix:///run/containerd/s/d1c47efc9f6888392bc6c3bbd2e3f536bee6e29926404c8fc02d2a6f7bec61f9" namespace=k8s.io protocol=ttrpc version=3 May 16 16:38:01.080749 containerd[1606]: time="2025-05-16T16:38:01.080634854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d465b4cc-jdqkb,Uid:92666134-ee96-4ab1-a528-86be174720fb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207\"" May 16 16:38:01.099702 systemd[1]: Started cri-containerd-952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a.scope - libcontainer container 952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a. May 16 16:38:01.119086 systemd-networkd[1494]: vxlan.calico: Link UP May 16 16:38:01.119103 systemd-networkd[1494]: vxlan.calico: Gained carrier May 16 16:38:01.120495 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:38:01.158770 containerd[1606]: time="2025-05-16T16:38:01.158664345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-bv4q6,Uid:bf47523c-1e81-4bbe-a80b-55b0036c2140,Namespace:calico-system,Attempt:0,} returns sandbox id \"952bd23ce4205b6af5f4099515446cf51710f10e9af044ee9248b8ab4715b83a\"" May 16 16:38:01.305755 systemd[1]: Started sshd@8-10.0.0.37:22-10.0.0.1:40796.service - OpenSSH per-connection server daemon (10.0.0.1:40796). May 16 16:38:01.361488 sshd[4585]: Accepted publickey for core from 10.0.0.1 port 40796 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:01.363463 sshd-session[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:01.367987 systemd-logind[1582]: New session 9 of user core. 
May 16 16:38:01.372490 containerd[1606]: time="2025-05-16T16:38:01.372444428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cf5pq,Uid:4a53a0ed-691f-460a-8f34-788759fa4d73,Namespace:calico-system,Attempt:0,}" May 16 16:38:01.374650 kubelet[2712]: I0516 16:38:01.374357 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b95434f7-5cb4-461e-b037-cf900bb483de" path="/var/lib/kubelet/pods/b95434f7-5cb4-461e-b037-cf900bb483de/volumes" May 16 16:38:01.374429 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 16:38:01.502044 systemd-networkd[1494]: calie1fe364f534: Link UP May 16 16:38:01.502240 systemd-networkd[1494]: calie1fe364f534: Gained carrier May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.419 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cf5pq-eth0 csi-node-driver- calico-system 4a53a0ed-691f-460a-8f34-788759fa4d73 709 0 2025-05-16 16:37:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cf5pq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie1fe364f534 [] [] }} ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" Namespace="calico-system" Pod="csi-node-driver-cf5pq" WorkloadEndpoint="localhost-k8s-csi--node--driver--cf5pq-" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.420 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" Namespace="calico-system" Pod="csi-node-driver-cf5pq" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cf5pq-eth0" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.465 [INFO][4640] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" HandleID="k8s-pod-network.7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" Workload="localhost-k8s-csi--node--driver--cf5pq-eth0" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.465 [INFO][4640] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" HandleID="k8s-pod-network.7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" Workload="localhost-k8s-csi--node--driver--cf5pq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000480540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cf5pq", "timestamp":"2025-05-16 16:38:01.465110486 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.465 [INFO][4640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.465 [INFO][4640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.465 [INFO][4640] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.471 [INFO][4640] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" host="localhost" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.475 [INFO][4640] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.478 [INFO][4640] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.480 [INFO][4640] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.482 [INFO][4640] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.482 [INFO][4640] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" host="localhost" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.483 [INFO][4640] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4 May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.487 [INFO][4640] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" host="localhost" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.494 [INFO][4640] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" host="localhost" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.494 [INFO][4640] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" host="localhost" May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.494 [INFO][4640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:38:01.518129 containerd[1606]: 2025-05-16 16:38:01.494 [INFO][4640] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" HandleID="k8s-pod-network.7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" Workload="localhost-k8s-csi--node--driver--cf5pq-eth0" May 16 16:38:01.519066 containerd[1606]: 2025-05-16 16:38:01.498 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" Namespace="calico-system" Pod="csi-node-driver-cf5pq" WorkloadEndpoint="localhost-k8s-csi--node--driver--cf5pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cf5pq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a53a0ed-691f-460a-8f34-788759fa4d73", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cf5pq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1fe364f534", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:01.519066 containerd[1606]: 2025-05-16 16:38:01.498 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" Namespace="calico-system" Pod="csi-node-driver-cf5pq" WorkloadEndpoint="localhost-k8s-csi--node--driver--cf5pq-eth0" May 16 16:38:01.519066 containerd[1606]: 2025-05-16 16:38:01.498 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1fe364f534 ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" Namespace="calico-system" Pod="csi-node-driver-cf5pq" WorkloadEndpoint="localhost-k8s-csi--node--driver--cf5pq-eth0" May 16 16:38:01.519066 containerd[1606]: 2025-05-16 16:38:01.501 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" Namespace="calico-system" Pod="csi-node-driver-cf5pq" WorkloadEndpoint="localhost-k8s-csi--node--driver--cf5pq-eth0" May 16 16:38:01.519066 containerd[1606]: 2025-05-16 16:38:01.501 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" 
Namespace="calico-system" Pod="csi-node-driver-cf5pq" WorkloadEndpoint="localhost-k8s-csi--node--driver--cf5pq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cf5pq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4a53a0ed-691f-460a-8f34-788759fa4d73", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4", Pod:"csi-node-driver-cf5pq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie1fe364f534", MAC:"1e:0c:28:e2:18:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:01.519066 containerd[1606]: 2025-05-16 16:38:01.513 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" Namespace="calico-system" Pod="csi-node-driver-cf5pq" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cf5pq-eth0" May 16 16:38:01.529399 sshd[4598]: Connection closed by 10.0.0.1 port 40796 May 16 16:38:01.529761 sshd-session[4585]: pam_unix(sshd:session): session closed for user core May 16 16:38:01.535367 systemd[1]: sshd@8-10.0.0.37:22-10.0.0.1:40796.service: Deactivated successfully. May 16 16:38:01.537517 systemd[1]: session-9.scope: Deactivated successfully. May 16 16:38:01.538457 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit. May 16 16:38:01.539749 systemd-logind[1582]: Removed session 9. May 16 16:38:01.546633 containerd[1606]: time="2025-05-16T16:38:01.546589501Z" level=info msg="connecting to shim 7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4" address="unix:///run/containerd/s/68c43b8b9e0d4a8d426f19d18b6f3478f5c1e59a91ac90f9dd693f1310565587" namespace=k8s.io protocol=ttrpc version=3 May 16 16:38:01.572415 systemd[1]: Started cri-containerd-7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4.scope - libcontainer container 7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4. 
May 16 16:38:01.584735 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 16:38:01.603999 containerd[1606]: time="2025-05-16T16:38:01.603936880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cf5pq,Uid:4a53a0ed-691f-460a-8f34-788759fa4d73,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4\""
May 16 16:38:01.726354 kubelet[2712]: E0516 16:38:01.726193 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:38:01.738768 kubelet[2712]: I0516 16:38:01.738707 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mvzgl" podStartSLOduration=38.738689447 podStartE2EDuration="38.738689447s" podCreationTimestamp="2025-05-16 16:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:38:01.737645364 +0000 UTC m=+44.456538175" watchObservedRunningTime="2025-05-16 16:38:01.738689447 +0000 UTC m=+44.457582248"
May 16 16:38:01.890476 systemd-networkd[1494]: cali9dbf5211bc4: Gained IPv6LL
May 16 16:38:01.890821 systemd-networkd[1494]: cali0bdb184a241: Gained IPv6LL
May 16 16:38:02.274440 systemd-networkd[1494]: calic93e18abd26: Gained IPv6LL
May 16 16:38:02.594502 systemd-networkd[1494]: vxlan.calico: Gained IPv6LL
May 16 16:38:02.732195 kubelet[2712]: E0516 16:38:02.732144 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:38:02.786528 systemd-networkd[1494]: calif8a8723c262: Gained IPv6LL
May 16 16:38:02.850466 systemd-networkd[1494]: cali3ee3196948f: Gained IPv6LL
May 16 16:38:03.426445 systemd-networkd[1494]: calie1fe364f534: Gained IPv6LL
May 16 16:38:03.739858 kubelet[2712]: E0516 16:38:03.739060 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:38:04.274173 containerd[1606]: time="2025-05-16T16:38:04.274120989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:38:04.274894 containerd[1606]: time="2025-05-16T16:38:04.274836955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431"
May 16 16:38:04.276038 containerd[1606]: time="2025-05-16T16:38:04.275994991Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:38:04.277934 containerd[1606]: time="2025-05-16T16:38:04.277889171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:38:04.278418 containerd[1606]: time="2025-05-16T16:38:04.278386064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 3.292787036s"
May 16 16:38:04.278418 containerd[1606]: time="2025-05-16T16:38:04.278416301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 16 16:38:04.279564 containerd[1606]: time="2025-05-16T16:38:04.279537618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 16 16:38:04.280442 containerd[1606]: time="2025-05-16T16:38:04.280402994Z" level=info msg="CreateContainer within sandbox \"9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 16 16:38:04.288146 containerd[1606]: time="2025-05-16T16:38:04.288111139Z" level=info msg="Container 3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a: CDI devices from CRI Config.CDIDevices: []"
May 16 16:38:04.295232 containerd[1606]: time="2025-05-16T16:38:04.295183209Z" level=info msg="CreateContainer within sandbox \"9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\""
May 16 16:38:04.295806 containerd[1606]: time="2025-05-16T16:38:04.295661949Z" level=info msg="StartContainer for \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\""
May 16 16:38:04.296652 containerd[1606]: time="2025-05-16T16:38:04.296625520Z" level=info msg="connecting to shim 3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a" address="unix:///run/containerd/s/c0486373826b8792950524fbbed53b16d4f3e21ed8db782460fc1112dc77b590" protocol=ttrpc version=3
May 16 16:38:04.319405 systemd[1]: Started cri-containerd-3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a.scope - libcontainer container 3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a.
May 16 16:38:04.365834 containerd[1606]: time="2025-05-16T16:38:04.365799120Z" level=info msg="StartContainer for \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" returns successfully"
May 16 16:38:04.640569 containerd[1606]: time="2025-05-16T16:38:04.640514846Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 16 16:38:04.667102 containerd[1606]: time="2025-05-16T16:38:04.666934904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 16 16:38:04.667102 containerd[1606]: time="2025-05-16T16:38:04.666984737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 16 16:38:04.667268 kubelet[2712]: E0516 16:38:04.667211 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 16 16:38:04.667268 kubelet[2712]: E0516 16:38:04.667266 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 16 16:38:04.667782 containerd[1606]: time="2025-05-16T16:38:04.667719158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\""
May 16 16:38:04.667837 kubelet[2712]: E0516 16:38:04.667638 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kf26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5784d44d8b-w7g52_calico-system(d1830161-185c-4edf-ba88-ac20dff9bb5d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 16 16:38:04.669499 kubelet[2712]: E0516 16:38:04.669354 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5784d44d8b-w7g52" podUID="d1830161-185c-4edf-ba88-ac20dff9bb5d"
May 16 16:38:04.740975 kubelet[2712]: E0516 16:38:04.740855 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5784d44d8b-w7g52" podUID="d1830161-185c-4edf-ba88-ac20dff9bb5d"
May 16 16:38:05.138113 containerd[1606]: time="2025-05-16T16:38:05.138043420Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:38:05.138993 containerd[1606]: time="2025-05-16T16:38:05.138959221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77"
May 16 16:38:05.140457 containerd[1606]: time="2025-05-16T16:38:05.140413142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 472.673425ms"
May 16 16:38:05.140457 containerd[1606]: time="2025-05-16T16:38:05.140455773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\""
May 16 16:38:05.141704 containerd[1606]: time="2025-05-16T16:38:05.141457805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 16 16:38:05.144503 containerd[1606]: time="2025-05-16T16:38:05.144453003Z" level=info msg="CreateContainer within sandbox \"535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 16 16:38:05.156774 containerd[1606]: time="2025-05-16T16:38:05.156710866Z" level=info msg="Container 7e99485fb8e49919a04dee56e8d551c93a41294636fdb220d31c162ef91c1bd9: CDI devices from CRI Config.CDIDevices: []"
May 16 16:38:05.169668 containerd[1606]: time="2025-05-16T16:38:05.169609842Z" level=info msg="CreateContainer within sandbox \"535d67776ca307f38155e6829e12d1e3ad5f0d826ea836b219dc1664db2eb207\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7e99485fb8e49919a04dee56e8d551c93a41294636fdb220d31c162ef91c1bd9\""
May 16 16:38:05.171702 containerd[1606]: time="2025-05-16T16:38:05.170398875Z" level=info msg="StartContainer for \"7e99485fb8e49919a04dee56e8d551c93a41294636fdb220d31c162ef91c1bd9\""
May 16 16:38:05.171702 containerd[1606]: time="2025-05-16T16:38:05.171367956Z" level=info msg="connecting to shim 7e99485fb8e49919a04dee56e8d551c93a41294636fdb220d31c162ef91c1bd9" address="unix:///run/containerd/s/4c23ef99bf7606bb35e6a6da72776a30235d7b2e696b56eb971b7010d1de142f" protocol=ttrpc version=3
May 16 16:38:05.195796 kubelet[2712]: I0516 16:38:05.195725 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f85b45fd-5gtdb" podStartSLOduration=30.887536218 podStartE2EDuration="34.195257363s" podCreationTimestamp="2025-05-16 16:37:31 +0000 UTC" firstStartedPulling="2025-05-16 16:38:00.971570551 +0000 UTC m=+43.690463362" lastFinishedPulling="2025-05-16 16:38:04.279291686 +0000 UTC m=+46.998184507" observedRunningTime="2025-05-16 16:38:04.757788526 +0000 UTC m=+47.476681337" watchObservedRunningTime="2025-05-16 16:38:05.195257363 +0000 UTC m=+47.914150165"
May 16 16:38:05.199505 systemd[1]: Started cri-containerd-7e99485fb8e49919a04dee56e8d551c93a41294636fdb220d31c162ef91c1bd9.scope - libcontainer container 7e99485fb8e49919a04dee56e8d551c93a41294636fdb220d31c162ef91c1bd9.
May 16 16:38:05.253530 containerd[1606]: time="2025-05-16T16:38:05.253486544Z" level=info msg="StartContainer for \"7e99485fb8e49919a04dee56e8d551c93a41294636fdb220d31c162ef91c1bd9\" returns successfully"
May 16 16:38:05.411987 containerd[1606]: time="2025-05-16T16:38:05.411823226Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 16 16:38:05.413227 containerd[1606]: time="2025-05-16T16:38:05.413094655Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 16 16:38:05.413227 containerd[1606]: time="2025-05-16T16:38:05.413173463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 16 16:38:05.413551 kubelet[2712]: E0516 16:38:05.413505 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 16 16:38:05.413628 kubelet[2712]: E0516 16:38:05.413560 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 16 16:38:05.414065 kubelet[2712]: E0516 16:38:05.413796 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hrrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-bv4q6_calico-system(bf47523c-1e81-4bbe-a80b-55b0036c2140): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 16 16:38:05.414201 containerd[1606]: time="2025-05-16T16:38:05.413851828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\""
May 16 16:38:05.415694 kubelet[2712]: E0516 16:38:05.415524 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-bv4q6" podUID="bf47523c-1e81-4bbe-a80b-55b0036c2140"
May 16 16:38:05.742779 kubelet[2712]: E0516 16:38:05.742345 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-bv4q6" podUID="bf47523c-1e81-4bbe-a80b-55b0036c2140"
May 16 16:38:05.773712 kubelet[2712]: I0516 16:38:05.773627 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84d465b4cc-jdqkb" podStartSLOduration=30.714799018 podStartE2EDuration="34.773336041s" podCreationTimestamp="2025-05-16 16:37:31 +0000 UTC" firstStartedPulling="2025-05-16 16:38:01.082735103 +0000 UTC m=+43.801627914" lastFinishedPulling="2025-05-16 16:38:05.141272126 +0000 UTC m=+47.860164937" observedRunningTime="2025-05-16 16:38:05.772062748 +0000 UTC m=+48.490955569" watchObservedRunningTime="2025-05-16 16:38:05.773336041 +0000 UTC m=+48.492228852"
May 16 16:38:06.544809 systemd[1]: Started sshd@9-10.0.0.37:22-10.0.0.1:33128.service - OpenSSH per-connection server daemon (10.0.0.1:33128).
May 16 16:38:06.601819 sshd[4817]: Accepted publickey for core from 10.0.0.1 port 33128 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo
May 16 16:38:06.604018 sshd-session[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:38:06.610640 systemd-logind[1582]: New session 10 of user core.
May 16 16:38:06.616480 systemd[1]: Started session-10.scope - Session 10 of User core.
May 16 16:38:06.745308 kubelet[2712]: I0516 16:38:06.745226 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 16 16:38:06.767759 sshd[4820]: Connection closed by 10.0.0.1 port 33128
May 16 16:38:06.768257 sshd-session[4817]: pam_unix(sshd:session): session closed for user core
May 16 16:38:06.774553 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit.
May 16 16:38:06.774927 systemd[1]: sshd@9-10.0.0.37:22-10.0.0.1:33128.service: Deactivated successfully.
May 16 16:38:06.777537 systemd[1]: session-10.scope: Deactivated successfully.
May 16 16:38:06.780391 systemd-logind[1582]: Removed session 10.
May 16 16:38:07.736554 containerd[1606]: time="2025-05-16T16:38:07.736470626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:38:07.737609 containerd[1606]: time="2025-05-16T16:38:07.737585951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390"
May 16 16:38:07.738958 containerd[1606]: time="2025-05-16T16:38:07.738914767Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:38:07.741712 containerd[1606]: time="2025-05-16T16:38:07.741672437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:38:07.742723 containerd[1606]: time="2025-05-16T16:38:07.742662647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 2.328776515s"
May 16 16:38:07.742723 containerd[1606]: time="2025-05-16T16:38:07.742710276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\""
May 16 16:38:07.744817 containerd[1606]: time="2025-05-16T16:38:07.744768273Z" level=info msg="CreateContainer within sandbox \"7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
May 16 16:38:07.772801 containerd[1606]: time="2025-05-16T16:38:07.772760679Z" level=info msg="Container 5ccaffc4f61e9d55c2dd9d9c4934c6f83caf242518dee0f4131c8a8186c02dc8: CDI devices from CRI Config.CDIDevices: []"
May 16 16:38:07.792657 containerd[1606]: time="2025-05-16T16:38:07.792585011Z" level=info msg="CreateContainer within sandbox \"7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5ccaffc4f61e9d55c2dd9d9c4934c6f83caf242518dee0f4131c8a8186c02dc8\""
May 16 16:38:07.793399 containerd[1606]: time="2025-05-16T16:38:07.793333517Z" level=info msg="StartContainer for \"5ccaffc4f61e9d55c2dd9d9c4934c6f83caf242518dee0f4131c8a8186c02dc8\""
May 16 16:38:07.795348 containerd[1606]: time="2025-05-16T16:38:07.795306554Z" level=info msg="connecting to shim 5ccaffc4f61e9d55c2dd9d9c4934c6f83caf242518dee0f4131c8a8186c02dc8" address="unix:///run/containerd/s/68c43b8b9e0d4a8d426f19d18b6f3478f5c1e59a91ac90f9dd693f1310565587" protocol=ttrpc version=3
May 16 16:38:07.817453 systemd[1]: Started cri-containerd-5ccaffc4f61e9d55c2dd9d9c4934c6f83caf242518dee0f4131c8a8186c02dc8.scope - libcontainer container 5ccaffc4f61e9d55c2dd9d9c4934c6f83caf242518dee0f4131c8a8186c02dc8.
May 16 16:38:07.864638 containerd[1606]: time="2025-05-16T16:38:07.864588275Z" level=info msg="StartContainer for \"5ccaffc4f61e9d55c2dd9d9c4934c6f83caf242518dee0f4131c8a8186c02dc8\" returns successfully"
May 16 16:38:07.869135 containerd[1606]: time="2025-05-16T16:38:07.869010413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\""
May 16 16:38:10.006291 containerd[1606]: time="2025-05-16T16:38:10.006233295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:38:10.054206 containerd[1606]: time="2025-05-16T16:38:10.051323028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639"
May 16 16:38:10.089388 containerd[1606]: time="2025-05-16T16:38:10.089340107Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:38:10.134150 containerd[1606]: time="2025-05-16T16:38:10.134097234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:38:10.134799 containerd[1606]: time="2025-05-16T16:38:10.134767602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.265621816s"
May 16 16:38:10.134869 containerd[1606]: time="2025-05-16T16:38:10.134801587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\""
May 16 16:38:10.136802 containerd[1606]: time="2025-05-16T16:38:10.136775133Z" level=info msg="CreateContainer within sandbox \"7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 16 16:38:10.253687 containerd[1606]: time="2025-05-16T16:38:10.253628152Z" level=info msg="Container 1eb53e3957233d01435618ac51bf840e38c63d6fd743a28ee131ec0cf025a2b6: CDI devices from CRI Config.CDIDevices: []"
May 16 16:38:10.263230 containerd[1606]: time="2025-05-16T16:38:10.263133904Z" level=info msg="CreateContainer within sandbox \"7b36e00072d62782364b7a6154a14264e821c4982c67053fe043f7dc6dbeb3d4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1eb53e3957233d01435618ac51bf840e38c63d6fd743a28ee131ec0cf025a2b6\""
May 16 16:38:10.263729 containerd[1606]: time="2025-05-16T16:38:10.263696111Z" level=info msg="StartContainer for \"1eb53e3957233d01435618ac51bf840e38c63d6fd743a28ee131ec0cf025a2b6\""
May 16 16:38:10.265080 containerd[1606]: time="2025-05-16T16:38:10.265047929Z" level=info msg="connecting to shim 1eb53e3957233d01435618ac51bf840e38c63d6fd743a28ee131ec0cf025a2b6" address="unix:///run/containerd/s/68c43b8b9e0d4a8d426f19d18b6f3478f5c1e59a91ac90f9dd693f1310565587" protocol=ttrpc version=3
May 16 16:38:10.290476 systemd[1]: Started cri-containerd-1eb53e3957233d01435618ac51bf840e38c63d6fd743a28ee131ec0cf025a2b6.scope - libcontainer container 1eb53e3957233d01435618ac51bf840e38c63d6fd743a28ee131ec0cf025a2b6.
May 16 16:38:10.351534 containerd[1606]: time="2025-05-16T16:38:10.351465658Z" level=info msg="StartContainer for \"1eb53e3957233d01435618ac51bf840e38c63d6fd743a28ee131ec0cf025a2b6\" returns successfully"
May 16 16:38:10.695346 kubelet[2712]: I0516 16:38:10.695305 2712 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 16 16:38:10.695346 kubelet[2712]: I0516 16:38:10.695350 2712 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 16 16:38:10.767178 kubelet[2712]: I0516 16:38:10.766424 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cf5pq" podStartSLOduration=28.236259101999998 podStartE2EDuration="36.76640167s" podCreationTimestamp="2025-05-16 16:37:34 +0000 UTC" firstStartedPulling="2025-05-16 16:38:01.605352942 +0000 UTC m=+44.324245753" lastFinishedPulling="2025-05-16 16:38:10.13549551 +0000 UTC m=+52.854388321" observedRunningTime="2025-05-16 16:38:10.766242392 +0000 UTC m=+53.485135193" watchObservedRunningTime="2025-05-16 16:38:10.76640167 +0000 UTC m=+53.485294482"
May 16 16:38:11.371994 kubelet[2712]: E0516 16:38:11.371649 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:38:11.372222 containerd[1606]: time="2025-05-16T16:38:11.372149738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5487f9d78-q6gxw,Uid:6af1b68e-dfdc-4de3-8bba-b9af7ed32d69,Namespace:calico-system,Attempt:0,}"
May 16 16:38:11.372992 containerd[1606]: time="2025-05-16T16:38:11.372248874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f85b45fd-cdvx9,Uid:4935dd62-d692-4f3f-b085-7e695611704c,Namespace:calico-apiserver,Attempt:0,}"
May 16 16:38:11.372992 containerd[1606]: time="2025-05-16T16:38:11.372150690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tj5l7,Uid:a89ec786-83ae-4d2c-a9ed-aae32acf5fad,Namespace:kube-system,Attempt:0,}"
May 16 16:38:11.507070 systemd-networkd[1494]: cali5cdf53eef61: Link UP
May 16 16:38:11.507870 systemd-networkd[1494]: cali5cdf53eef61: Gained carrier
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.426 [INFO][4920] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0 coredns-7c65d6cfc9- kube-system a89ec786-83ae-4d2c-a9ed-aae32acf5fad 836 0 2025-05-16 16:37:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-tj5l7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5cdf53eef61 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tj5l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tj5l7-"
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.426 [INFO][4920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tj5l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0"
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.468 [INFO][4952] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" HandleID="k8s-pod-network.bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Workload="localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0"
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.468 [INFO][4952] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" HandleID="k8s-pod-network.bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Workload="localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-tj5l7", "timestamp":"2025-05-16 16:38:11.468500799 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.468 [INFO][4952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.469 [INFO][4952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.469 [INFO][4952] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.478 [INFO][4952] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" host="localhost"
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.483 [INFO][4952] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.486 [INFO][4952] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.488 [INFO][4952] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.490 [INFO][4952] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.490 [INFO][4952] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" host="localhost"
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.491 [INFO][4952] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.495 [INFO][4952] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" host="localhost"
May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.499 [INFO][4952] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26
handle="k8s-pod-network.bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" host="localhost" May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.499 [INFO][4952] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" host="localhost" May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.499 [INFO][4952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:38:11.521897 containerd[1606]: 2025-05-16 16:38:11.499 [INFO][4952] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" HandleID="k8s-pod-network.bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Workload="localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0" May 16 16:38:11.522591 containerd[1606]: 2025-05-16 16:38:11.504 [INFO][4920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tj5l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a89ec786-83ae-4d2c-a9ed-aae32acf5fad", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-tj5l7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cdf53eef61", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:11.522591 containerd[1606]: 2025-05-16 16:38:11.504 [INFO][4920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tj5l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0" May 16 16:38:11.522591 containerd[1606]: 2025-05-16 16:38:11.504 [INFO][4920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cdf53eef61 ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tj5l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0" May 16 16:38:11.522591 containerd[1606]: 2025-05-16 16:38:11.508 [INFO][4920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tj5l7" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0" May 16 16:38:11.522591 containerd[1606]: 2025-05-16 16:38:11.508 [INFO][4920] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tj5l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a89ec786-83ae-4d2c-a9ed-aae32acf5fad", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6", Pod:"coredns-7c65d6cfc9-tj5l7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cdf53eef61", MAC:"12:e9:25:ba:0f:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:11.522591 containerd[1606]: 2025-05-16 16:38:11.516 [INFO][4920] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tj5l7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tj5l7-eth0" May 16 16:38:11.560939 containerd[1606]: time="2025-05-16T16:38:11.560879179Z" level=info msg="connecting to shim bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6" address="unix:///run/containerd/s/a459fb0c7f41fe0c25710e9d7007a89fce5a1f3a483414e158d45ecac2b62b87" namespace=k8s.io protocol=ttrpc version=3 May 16 16:38:11.588563 systemd[1]: Started cri-containerd-bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6.scope - libcontainer container bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6. 
May 16 16:38:11.605023 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:38:11.618776 systemd-networkd[1494]: calib2265863afa: Link UP May 16 16:38:11.621900 systemd-networkd[1494]: calib2265863afa: Gained carrier May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.424 [INFO][4909] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0 calico-apiserver-5f85b45fd- calico-apiserver 4935dd62-d692-4f3f-b085-7e695611704c 829 0 2025-05-16 16:37:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f85b45fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f85b45fd-cdvx9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib2265863afa [] [] }} ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-cdvx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.425 [INFO][4909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-cdvx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.469 [INFO][4955] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" HandleID="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" 
Workload="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.469 [INFO][4955] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" HandleID="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Workload="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000585660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f85b45fd-cdvx9", "timestamp":"2025-05-16 16:38:11.46900203 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.469 [INFO][4955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.499 [INFO][4955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.499 [INFO][4955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.579 [INFO][4955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" host="localhost" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.584 [INFO][4955] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.588 [INFO][4955] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.590 [INFO][4955] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.593 [INFO][4955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.593 [INFO][4955] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" host="localhost" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.594 [INFO][4955] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9 May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.598 [INFO][4955] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" host="localhost" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.604 [INFO][4955] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" host="localhost" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.604 [INFO][4955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" host="localhost" May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.604 [INFO][4955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:38:11.633251 containerd[1606]: 2025-05-16 16:38:11.604 [INFO][4955] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" HandleID="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Workload="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" May 16 16:38:11.634656 containerd[1606]: 2025-05-16 16:38:11.609 [INFO][4909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-cdvx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0", GenerateName:"calico-apiserver-5f85b45fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4935dd62-d692-4f3f-b085-7e695611704c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f85b45fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f85b45fd-cdvx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2265863afa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:11.634656 containerd[1606]: 2025-05-16 16:38:11.609 [INFO][4909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-cdvx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" May 16 16:38:11.634656 containerd[1606]: 2025-05-16 16:38:11.610 [INFO][4909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2265863afa ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-cdvx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" May 16 16:38:11.634656 containerd[1606]: 2025-05-16 16:38:11.619 [INFO][4909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-cdvx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" May 16 16:38:11.634656 containerd[1606]: 2025-05-16 16:38:11.620 [INFO][4909] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-cdvx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0", GenerateName:"calico-apiserver-5f85b45fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4935dd62-d692-4f3f-b085-7e695611704c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f85b45fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9", Pod:"calico-apiserver-5f85b45fd-cdvx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2265863afa", MAC:"92:f8:4c:2b:f4:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:11.634656 containerd[1606]: 2025-05-16 16:38:11.629 [INFO][4909] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Namespace="calico-apiserver" Pod="calico-apiserver-5f85b45fd-cdvx9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" May 16 16:38:11.647769 containerd[1606]: time="2025-05-16T16:38:11.647592691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tj5l7,Uid:a89ec786-83ae-4d2c-a9ed-aae32acf5fad,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6\"" May 16 16:38:11.648884 kubelet[2712]: E0516 16:38:11.648841 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:11.659917 containerd[1606]: time="2025-05-16T16:38:11.658793586Z" level=info msg="CreateContainer within sandbox \"bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 16:38:11.661312 containerd[1606]: time="2025-05-16T16:38:11.660973389Z" level=info msg="connecting to shim a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" address="unix:///run/containerd/s/d354b8cc543c0a812ce8ed7144d131d05ad0600bde20d720c559cfd58f94f960" namespace=k8s.io protocol=ttrpc version=3 May 16 16:38:11.676782 containerd[1606]: time="2025-05-16T16:38:11.676726652Z" level=info msg="Container 66c1d9a3b17e43d5204e9e7029df699c1c66d2b75134fe3524094d6fb4e548df: CDI devices from CRI Config.CDIDevices: []" May 16 16:38:11.687020 containerd[1606]: time="2025-05-16T16:38:11.686970389Z" level=info msg="CreateContainer within sandbox \"bdfae7c95d3e073a22efc9fd9da835dbf4c3b36239b7ab803ac801d18dd77cc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66c1d9a3b17e43d5204e9e7029df699c1c66d2b75134fe3524094d6fb4e548df\"" May 16 16:38:11.690200 containerd[1606]: time="2025-05-16T16:38:11.689476015Z" level=info 
msg="StartContainer for \"66c1d9a3b17e43d5204e9e7029df699c1c66d2b75134fe3524094d6fb4e548df\"" May 16 16:38:11.693859 containerd[1606]: time="2025-05-16T16:38:11.693821014Z" level=info msg="connecting to shim 66c1d9a3b17e43d5204e9e7029df699c1c66d2b75134fe3524094d6fb4e548df" address="unix:///run/containerd/s/a459fb0c7f41fe0c25710e9d7007a89fce5a1f3a483414e158d45ecac2b62b87" protocol=ttrpc version=3 May 16 16:38:11.697877 systemd[1]: Started cri-containerd-a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9.scope - libcontainer container a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9. May 16 16:38:11.723691 systemd[1]: Started cri-containerd-66c1d9a3b17e43d5204e9e7029df699c1c66d2b75134fe3524094d6fb4e548df.scope - libcontainer container 66c1d9a3b17e43d5204e9e7029df699c1c66d2b75134fe3524094d6fb4e548df. May 16 16:38:11.727963 systemd-networkd[1494]: cali14e6b087c92: Link UP May 16 16:38:11.730844 systemd-networkd[1494]: cali14e6b087c92: Gained carrier May 16 16:38:11.740905 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.429 [INFO][4925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0 calico-kube-controllers-5487f9d78- calico-system 6af1b68e-dfdc-4de3-8bba-b9af7ed32d69 821 0 2025-05-16 16:37:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5487f9d78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5487f9d78-q6gxw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali14e6b087c92 [] [] }} 
ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Namespace="calico-system" Pod="calico-kube-controllers-5487f9d78-q6gxw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.429 [INFO][4925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Namespace="calico-system" Pod="calico-kube-controllers-5487f9d78-q6gxw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.471 [INFO][4967] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" HandleID="k8s-pod-network.ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Workload="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.472 [INFO][4967] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" HandleID="k8s-pod-network.ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Workload="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b51f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5487f9d78-q6gxw", "timestamp":"2025-05-16 16:38:11.471188807 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.472 [INFO][4967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.605 [INFO][4967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.605 [INFO][4967] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.680 [INFO][4967] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" host="localhost" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.689 [INFO][4967] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.693 [INFO][4967] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.695 [INFO][4967] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.698 [INFO][4967] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.698 [INFO][4967] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" host="localhost" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.700 [INFO][4967] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526 May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.706 [INFO][4967] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" host="localhost" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.717 [INFO][4967] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" host="localhost" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.717 [INFO][4967] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" host="localhost" May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.718 [INFO][4967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:38:11.753067 containerd[1606]: 2025-05-16 16:38:11.718 [INFO][4967] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" HandleID="k8s-pod-network.ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Workload="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0" May 16 16:38:11.753728 containerd[1606]: 2025-05-16 16:38:11.722 [INFO][4925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Namespace="calico-system" Pod="calico-kube-controllers-5487f9d78-q6gxw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0", GenerateName:"calico-kube-controllers-5487f9d78-", Namespace:"calico-system", SelfLink:"", UID:"6af1b68e-dfdc-4de3-8bba-b9af7ed32d69", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"5487f9d78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5487f9d78-q6gxw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali14e6b087c92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:11.753728 containerd[1606]: 2025-05-16 16:38:11.722 [INFO][4925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Namespace="calico-system" Pod="calico-kube-controllers-5487f9d78-q6gxw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0" May 16 16:38:11.753728 containerd[1606]: 2025-05-16 16:38:11.722 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14e6b087c92 ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Namespace="calico-system" Pod="calico-kube-controllers-5487f9d78-q6gxw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0" May 16 16:38:11.753728 containerd[1606]: 2025-05-16 16:38:11.730 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Namespace="calico-system" Pod="calico-kube-controllers-5487f9d78-q6gxw" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0" May 16 16:38:11.753728 containerd[1606]: 2025-05-16 16:38:11.732 [INFO][4925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Namespace="calico-system" Pod="calico-kube-controllers-5487f9d78-q6gxw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0", GenerateName:"calico-kube-controllers-5487f9d78-", Namespace:"calico-system", SelfLink:"", UID:"6af1b68e-dfdc-4de3-8bba-b9af7ed32d69", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 37, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5487f9d78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526", Pod:"calico-kube-controllers-5487f9d78-q6gxw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali14e6b087c92", MAC:"ee:c1:59:bf:6c:6f", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:38:11.753728 containerd[1606]: 2025-05-16 16:38:11.744 [INFO][4925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" Namespace="calico-system" Pod="calico-kube-controllers-5487f9d78-q6gxw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5487f9d78--q6gxw-eth0" May 16 16:38:11.781192 systemd[1]: Started sshd@10-10.0.0.37:22-10.0.0.1:33140.service - OpenSSH per-connection server daemon (10.0.0.1:33140). May 16 16:38:11.801821 containerd[1606]: time="2025-05-16T16:38:11.801774023Z" level=info msg="connecting to shim ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526" address="unix:///run/containerd/s/e7bae3ee05ba18d6661108603627e3fd173af3757a3445b7978eb6858114457d" namespace=k8s.io protocol=ttrpc version=3 May 16 16:38:11.804388 containerd[1606]: time="2025-05-16T16:38:11.804357364Z" level=info msg="StartContainer for \"66c1d9a3b17e43d5204e9e7029df699c1c66d2b75134fe3524094d6fb4e548df\" returns successfully" May 16 16:38:11.809272 containerd[1606]: time="2025-05-16T16:38:11.809234323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f85b45fd-cdvx9,Uid:4935dd62-d692-4f3f-b085-7e695611704c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9\"" May 16 16:38:11.813837 containerd[1606]: time="2025-05-16T16:38:11.813679721Z" level=info msg="CreateContainer within sandbox \"a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 16:38:11.827215 containerd[1606]: time="2025-05-16T16:38:11.827168192Z" level=info msg="Container 9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335: CDI devices from CRI Config.CDIDevices: []" May 16 16:38:11.847649 systemd[1]: Started 
cri-containerd-ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526.scope - libcontainer container ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526. May 16 16:38:11.848320 sshd[5143]: Accepted publickey for core from 10.0.0.1 port 33140 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:11.848892 containerd[1606]: time="2025-05-16T16:38:11.846205663Z" level=info msg="CreateContainer within sandbox \"a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\"" May 16 16:38:11.849982 containerd[1606]: time="2025-05-16T16:38:11.849942650Z" level=info msg="StartContainer for \"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\"" May 16 16:38:11.850975 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:11.851227 containerd[1606]: time="2025-05-16T16:38:11.851164234Z" level=info msg="connecting to shim 9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335" address="unix:///run/containerd/s/d354b8cc543c0a812ce8ed7144d131d05ad0600bde20d720c559cfd58f94f960" protocol=ttrpc version=3 May 16 16:38:11.857797 systemd-logind[1582]: New session 11 of user core. May 16 16:38:11.859001 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 16:38:11.867807 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:38:11.880348 systemd[1]: Started cri-containerd-9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335.scope - libcontainer container 9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335. 
May 16 16:38:11.882863 containerd[1606]: time="2025-05-16T16:38:11.882814822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8073e647c3a5f5282145b29ff860e3f6b2d5208a4b04502a351d7f418ecac1d3\" id:\"5d6c44261b6e762dd5d6cfd418d6b2c1fa87831f58e1386966344502ba27f76c\" pid:5127 exited_at:{seconds:1747413491 nanos:882459063}" May 16 16:38:12.002897 containerd[1606]: time="2025-05-16T16:38:12.002761623Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8073e647c3a5f5282145b29ff860e3f6b2d5208a4b04502a351d7f418ecac1d3\" id:\"f44e31351e57e2e5aa08b48cc3752165f4121c1115f439c10311b4f2c61125cf\" pid:5246 exited_at:{seconds:1747413492 nanos:2262746}" May 16 16:38:12.097256 containerd[1606]: time="2025-05-16T16:38:12.097196282Z" level=info msg="StartContainer for \"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\" returns successfully" May 16 16:38:12.097943 containerd[1606]: time="2025-05-16T16:38:12.097875678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5487f9d78-q6gxw,Uid:6af1b68e-dfdc-4de3-8bba-b9af7ed32d69,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526\"" May 16 16:38:12.100364 containerd[1606]: time="2025-05-16T16:38:12.100329004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 16 16:38:12.108547 sshd[5213]: Connection closed by 10.0.0.1 port 33140 May 16 16:38:12.109215 sshd-session[5143]: pam_unix(sshd:session): session closed for user core May 16 16:38:12.119927 systemd[1]: sshd@10-10.0.0.37:22-10.0.0.1:33140.service: Deactivated successfully. May 16 16:38:12.122039 systemd[1]: session-11.scope: Deactivated successfully. May 16 16:38:12.122858 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit. May 16 16:38:12.126709 systemd[1]: Started sshd@11-10.0.0.37:22-10.0.0.1:33150.service - OpenSSH per-connection server daemon (10.0.0.1:33150). 
May 16 16:38:12.127609 systemd-logind[1582]: Removed session 11. May 16 16:38:12.168798 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 33150 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:12.170334 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:12.175022 systemd-logind[1582]: New session 12 of user core. May 16 16:38:12.180399 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 16:38:12.388159 sshd[5291]: Connection closed by 10.0.0.1 port 33150 May 16 16:38:12.388809 sshd-session[5289]: pam_unix(sshd:session): session closed for user core May 16 16:38:12.401088 systemd[1]: sshd@11-10.0.0.37:22-10.0.0.1:33150.service: Deactivated successfully. May 16 16:38:12.403319 systemd[1]: session-12.scope: Deactivated successfully. May 16 16:38:12.404801 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit. May 16 16:38:12.408235 systemd[1]: Started sshd@12-10.0.0.37:22-10.0.0.1:33158.service - OpenSSH per-connection server daemon (10.0.0.1:33158). May 16 16:38:12.409509 systemd-logind[1582]: Removed session 12. May 16 16:38:12.450075 sshd[5303]: Accepted publickey for core from 10.0.0.1 port 33158 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:12.451567 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:12.457573 systemd-logind[1582]: New session 13 of user core. May 16 16:38:12.463404 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 16:38:12.608828 sshd[5305]: Connection closed by 10.0.0.1 port 33158 May 16 16:38:12.609179 sshd-session[5303]: pam_unix(sshd:session): session closed for user core May 16 16:38:12.614188 systemd[1]: sshd@12-10.0.0.37:22-10.0.0.1:33158.service: Deactivated successfully. May 16 16:38:12.616523 systemd[1]: session-13.scope: Deactivated successfully. 
May 16 16:38:12.617304 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit. May 16 16:38:12.618711 systemd-logind[1582]: Removed session 13. May 16 16:38:12.642609 systemd-networkd[1494]: cali5cdf53eef61: Gained IPv6LL May 16 16:38:12.770934 kubelet[2712]: E0516 16:38:12.770899 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:12.898605 systemd-networkd[1494]: cali14e6b087c92: Gained IPv6LL May 16 16:38:13.495134 kubelet[2712]: I0516 16:38:13.495041 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f85b45fd-cdvx9" podStartSLOduration=42.495020792 podStartE2EDuration="42.495020792s" podCreationTimestamp="2025-05-16 16:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:38:13.012368424 +0000 UTC m=+55.731261225" watchObservedRunningTime="2025-05-16 16:38:13.495020792 +0000 UTC m=+56.213913603" May 16 16:38:13.495134 kubelet[2712]: I0516 16:38:13.495131 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tj5l7" podStartSLOduration=50.49512652 podStartE2EDuration="50.49512652s" podCreationTimestamp="2025-05-16 16:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:38:13.494439932 +0000 UTC m=+56.213332733" watchObservedRunningTime="2025-05-16 16:38:13.49512652 +0000 UTC m=+56.214019331" May 16 16:38:13.602478 systemd-networkd[1494]: calib2265863afa: Gained IPv6LL May 16 16:38:13.773125 kubelet[2712]: E0516 16:38:13.772950 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 16 16:38:14.774668 kubelet[2712]: E0516 16:38:14.774631 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:17.625003 systemd[1]: Started sshd@13-10.0.0.37:22-10.0.0.1:46702.service - OpenSSH per-connection server daemon (10.0.0.1:46702). May 16 16:38:17.637664 containerd[1606]: time="2025-05-16T16:38:17.637625512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:38:17.638475 containerd[1606]: time="2025-05-16T16:38:17.638430373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 16 16:38:17.639864 containerd[1606]: time="2025-05-16T16:38:17.639830231Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:38:17.641681 containerd[1606]: time="2025-05-16T16:38:17.641650889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:38:17.642218 containerd[1606]: time="2025-05-16T16:38:17.642189019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 5.541827463s" May 16 16:38:17.642254 containerd[1606]: time="2025-05-16T16:38:17.642218364Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 16 16:38:17.643238 containerd[1606]: time="2025-05-16T16:38:17.643208232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 16:38:17.653182 containerd[1606]: time="2025-05-16T16:38:17.653139395Z" level=info msg="CreateContainer within sandbox \"ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 16 16:38:17.661988 containerd[1606]: time="2025-05-16T16:38:17.661593385Z" level=info msg="Container c4ae23a4d938d66a3d4dc6a28291a70c42c9a76c0d21db05400db7f2eca3d692: CDI devices from CRI Config.CDIDevices: []" May 16 16:38:17.679104 containerd[1606]: time="2025-05-16T16:38:17.678974663Z" level=info msg="CreateContainer within sandbox \"ebf0b7b421b38f594a8a1da4d5503bcd9bf0aaafbbe20e586e0200c4686b7526\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c4ae23a4d938d66a3d4dc6a28291a70c42c9a76c0d21db05400db7f2eca3d692\"" May 16 16:38:17.680124 containerd[1606]: time="2025-05-16T16:38:17.679916692Z" level=info msg="StartContainer for \"c4ae23a4d938d66a3d4dc6a28291a70c42c9a76c0d21db05400db7f2eca3d692\"" May 16 16:38:17.681769 containerd[1606]: time="2025-05-16T16:38:17.681746456Z" level=info msg="connecting to shim c4ae23a4d938d66a3d4dc6a28291a70c42c9a76c0d21db05400db7f2eca3d692" address="unix:///run/containerd/s/e7bae3ee05ba18d6661108603627e3fd173af3757a3445b7978eb6858114457d" protocol=ttrpc version=3 May 16 16:38:17.704482 systemd[1]: Started cri-containerd-c4ae23a4d938d66a3d4dc6a28291a70c42c9a76c0d21db05400db7f2eca3d692.scope - libcontainer container c4ae23a4d938d66a3d4dc6a28291a70c42c9a76c0d21db05400db7f2eca3d692. 
May 16 16:38:17.709538 sshd[5338]: Accepted publickey for core from 10.0.0.1 port 46702 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:17.711199 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:17.716349 systemd-logind[1582]: New session 14 of user core. May 16 16:38:17.721454 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 16:38:17.764508 containerd[1606]: time="2025-05-16T16:38:17.764349551Z" level=info msg="StartContainer for \"c4ae23a4d938d66a3d4dc6a28291a70c42c9a76c0d21db05400db7f2eca3d692\" returns successfully" May 16 16:38:17.798735 kubelet[2712]: I0516 16:38:17.798657 2712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5487f9d78-q6gxw" podStartSLOduration=38.255425063 podStartE2EDuration="43.798524013s" podCreationTimestamp="2025-05-16 16:37:34 +0000 UTC" firstStartedPulling="2025-05-16 16:38:12.099999005 +0000 UTC m=+54.818891816" lastFinishedPulling="2025-05-16 16:38:17.643097955 +0000 UTC m=+60.361990766" observedRunningTime="2025-05-16 16:38:17.798501711 +0000 UTC m=+60.517394523" watchObservedRunningTime="2025-05-16 16:38:17.798524013 +0000 UTC m=+60.517416824" May 16 16:38:17.845575 containerd[1606]: time="2025-05-16T16:38:17.845533541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4ae23a4d938d66a3d4dc6a28291a70c42c9a76c0d21db05400db7f2eca3d692\" id:\"faa87e74f0b4bec09031c17750062192c775f7e59ec97ca0541f3f66a97a4ec6\" pid:5404 exited_at:{seconds:1747413497 nanos:845001993}" May 16 16:38:17.873956 containerd[1606]: time="2025-05-16T16:38:17.873896695Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:38:17.880602 sshd[5362]: Connection closed by 
10.0.0.1 port 46702 May 16 16:38:17.881564 sshd-session[5338]: pam_unix(sshd:session): session closed for user core May 16 16:38:17.886203 systemd[1]: sshd@13-10.0.0.37:22-10.0.0.1:46702.service: Deactivated successfully. May 16 16:38:17.888356 systemd[1]: session-14.scope: Deactivated successfully. May 16 16:38:17.889046 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit. May 16 16:38:17.890665 systemd-logind[1582]: Removed session 14. May 16 16:38:17.944791 containerd[1606]: time="2025-05-16T16:38:17.944726799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:38:17.944918 containerd[1606]: time="2025-05-16T16:38:17.944789727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 16:38:17.944989 kubelet[2712]: E0516 16:38:17.944950 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:38:17.945124 kubelet[2712]: E0516 16:38:17.944997 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:38:17.945165 kubelet[2712]: E0516 16:38:17.945125 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5d4eda0e24a84d1ba6397664228d76dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kf26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5784d44d8b-w7g52_calico-system(d1830161-185c-4edf-ba88-ac20dff9bb5d): ErrImagePull: failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:38:17.947242 containerd[1606]: time="2025-05-16T16:38:17.947213587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 16:38:18.178090 containerd[1606]: time="2025-05-16T16:38:18.177917843Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:38:18.179233 containerd[1606]: time="2025-05-16T16:38:18.179189389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:38:18.179366 containerd[1606]: time="2025-05-16T16:38:18.179251777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 16:38:18.179453 kubelet[2712]: E0516 16:38:18.179395 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: 
unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 16:38:18.179453 kubelet[2712]: E0516 16:38:18.179452 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 16:38:18.179684 kubelet[2712]: E0516 16:38:18.179567 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kf26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Liv
enessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5784d44d8b-w7g52_calico-system(d1830161-185c-4edf-ba88-ac20dff9bb5d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:38:18.180846 kubelet[2712]: E0516 16:38:18.180781 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: 
failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5784d44d8b-w7g52" podUID="d1830161-185c-4edf-ba88-ac20dff9bb5d" May 16 16:38:18.372598 containerd[1606]: time="2025-05-16T16:38:18.372482615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 16:38:18.635233 containerd[1606]: time="2025-05-16T16:38:18.635165814Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:38:18.636548 containerd[1606]: time="2025-05-16T16:38:18.636485641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:38:18.636548 containerd[1606]: time="2025-05-16T16:38:18.636534453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 16:38:18.636830 kubelet[2712]: E0516 16:38:18.636784 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 16:38:18.636894 kubelet[2712]: E0516 16:38:18.636842 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 16:38:18.637073 kubelet[2712]: E0516 16:38:18.637020 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hrrj,ReadOnly:t
rue,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-bv4q6_calico-system(bf47523c-1e81-4bbe-a80b-55b0036c2140): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:38:18.638243 kubelet[2712]: E0516 16:38:18.638178 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-bv4q6" podUID="bf47523c-1e81-4bbe-a80b-55b0036c2140" May 16 16:38:22.897953 systemd[1]: Started sshd@14-10.0.0.37:22-10.0.0.1:46704.service - OpenSSH per-connection server daemon (10.0.0.1:46704). May 16 16:38:22.943009 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 46704 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:22.944423 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:22.948602 systemd-logind[1582]: New session 15 of user core. May 16 16:38:22.955455 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 16:38:23.061601 sshd[5432]: Connection closed by 10.0.0.1 port 46704 May 16 16:38:23.062505 sshd-session[5430]: pam_unix(sshd:session): session closed for user core May 16 16:38:23.066886 systemd[1]: sshd@14-10.0.0.37:22-10.0.0.1:46704.service: Deactivated successfully. May 16 16:38:23.069908 systemd[1]: session-15.scope: Deactivated successfully. May 16 16:38:23.071943 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit. May 16 16:38:23.073878 systemd-logind[1582]: Removed session 15. May 16 16:38:28.075717 systemd[1]: Started sshd@15-10.0.0.37:22-10.0.0.1:53580.service - OpenSSH per-connection server daemon (10.0.0.1:53580). 
May 16 16:38:28.122863 sshd[5449]: Accepted publickey for core from 10.0.0.1 port 53580 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:28.124416 sshd-session[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:28.128940 systemd-logind[1582]: New session 16 of user core. May 16 16:38:28.140468 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 16:38:28.262608 sshd[5451]: Connection closed by 10.0.0.1 port 53580 May 16 16:38:28.263049 sshd-session[5449]: pam_unix(sshd:session): session closed for user core May 16 16:38:28.268493 systemd[1]: sshd@15-10.0.0.37:22-10.0.0.1:53580.service: Deactivated successfully. May 16 16:38:28.270570 systemd[1]: session-16.scope: Deactivated successfully. May 16 16:38:28.271303 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit. May 16 16:38:28.272741 systemd-logind[1582]: Removed session 16. May 16 16:38:29.311223 kubelet[2712]: I0516 16:38:29.311130 2712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:38:29.352885 containerd[1606]: time="2025-05-16T16:38:29.352834425Z" level=info msg="StopContainer for \"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\" with timeout 30 (s)" May 16 16:38:29.365599 containerd[1606]: time="2025-05-16T16:38:29.365516768Z" level=info msg="Stop container \"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\" with signal terminated" May 16 16:38:29.379535 kubelet[2712]: E0516 16:38:29.379414 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-bv4q6" podUID="bf47523c-1e81-4bbe-a80b-55b0036c2140" May 16 16:38:29.399125 systemd[1]: cri-containerd-9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335.scope: Deactivated 
successfully. May 16 16:38:29.404231 containerd[1606]: time="2025-05-16T16:38:29.404181456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\" id:\"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\" pid:5220 exit_status:1 exited_at:{seconds:1747413509 nanos:403046795}" May 16 16:38:29.405031 containerd[1606]: time="2025-05-16T16:38:29.405004176Z" level=info msg="received exit event container_id:\"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\" id:\"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\" pid:5220 exit_status:1 exited_at:{seconds:1747413509 nanos:403046795}" May 16 16:38:29.428788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335-rootfs.mount: Deactivated successfully. May 16 16:38:29.619412 containerd[1606]: time="2025-05-16T16:38:29.619366561Z" level=info msg="StopContainer for \"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\" returns successfully" May 16 16:38:29.620099 containerd[1606]: time="2025-05-16T16:38:29.620046224Z" level=info msg="StopPodSandbox for \"a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9\"" May 16 16:38:29.625665 containerd[1606]: time="2025-05-16T16:38:29.625615319Z" level=info msg="Container to stop \"9533464e6f8f111befde3b8084be49902469ee5975dfe67f2633b483a6da8335\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:38:29.633070 systemd[1]: cri-containerd-a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9.scope: Deactivated successfully. 
May 16 16:38:29.634359 containerd[1606]: time="2025-05-16T16:38:29.634307535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9\" id:\"a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9\" pid:5081 exit_status:137 exited_at:{seconds:1747413509 nanos:633778494}" May 16 16:38:29.672475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9-rootfs.mount: Deactivated successfully. May 16 16:38:29.679919 containerd[1606]: time="2025-05-16T16:38:29.679803196Z" level=info msg="shim disconnected" id=a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9 namespace=k8s.io May 16 16:38:29.679919 containerd[1606]: time="2025-05-16T16:38:29.679839587Z" level=warning msg="cleaning up after shim disconnected" id=a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9 namespace=k8s.io May 16 16:38:29.683726 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9-shm.mount: Deactivated successfully. 
May 16 16:38:29.693698 containerd[1606]: time="2025-05-16T16:38:29.679848274Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 16:38:29.719974 containerd[1606]: time="2025-05-16T16:38:29.718579070Z" level=info msg="received exit event sandbox_id:\"a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9\" exit_status:137 exited_at:{seconds:1747413509 nanos:633778494}" May 16 16:38:29.742744 systemd-networkd[1494]: calib2265863afa: Link DOWN May 16 16:38:29.742758 systemd-networkd[1494]: calib2265863afa: Lost carrier May 16 16:38:29.810446 kubelet[2712]: I0516 16:38:29.810375 2712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.741 [INFO][5523] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.741 [INFO][5523] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" iface="eth0" netns="/var/run/netns/cni-d4725fa2-1ae6-51c9-6c79-452b45073773" May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.741 [INFO][5523] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" iface="eth0" netns="/var/run/netns/cni-d4725fa2-1ae6-51c9-6c79-452b45073773" May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.749 [INFO][5523] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" after=7.649713ms iface="eth0" netns="/var/run/netns/cni-d4725fa2-1ae6-51c9-6c79-452b45073773" May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.749 [INFO][5523] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.749 [INFO][5523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.772 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" HandleID="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Workload="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.773 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.773 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.930 [INFO][5547] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" HandleID="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Workload="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.930 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" HandleID="k8s-pod-network.a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" Workload="localhost-k8s-calico--apiserver--5f85b45fd--cdvx9-eth0" May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.932 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:38:29.939860 containerd[1606]: 2025-05-16 16:38:29.935 [INFO][5523] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9" May 16 16:38:29.942775 systemd[1]: run-netns-cni\x2dd4725fa2\x2d1ae6\x2d51c9\x2d6c79\x2d452b45073773.mount: Deactivated successfully. 
May 16 16:38:29.948706 containerd[1606]: time="2025-05-16T16:38:29.948647068Z" level=info msg="TearDown network for sandbox \"a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9\" successfully" May 16 16:38:29.948706 containerd[1606]: time="2025-05-16T16:38:29.948694529Z" level=info msg="StopPodSandbox for \"a8a170fbf50510a1fee17784f1d76ae0c35ac766def8a03a21bc440611b0c2b9\" returns successfully" May 16 16:38:30.078672 kubelet[2712]: I0516 16:38:30.078616 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29f2s\" (UniqueName: \"kubernetes.io/projected/4935dd62-d692-4f3f-b085-7e695611704c-kube-api-access-29f2s\") pod \"4935dd62-d692-4f3f-b085-7e695611704c\" (UID: \"4935dd62-d692-4f3f-b085-7e695611704c\") " May 16 16:38:30.078672 kubelet[2712]: I0516 16:38:30.078671 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4935dd62-d692-4f3f-b085-7e695611704c-calico-apiserver-certs\") pod \"4935dd62-d692-4f3f-b085-7e695611704c\" (UID: \"4935dd62-d692-4f3f-b085-7e695611704c\") " May 16 16:38:30.083714 kubelet[2712]: I0516 16:38:30.083667 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4935dd62-d692-4f3f-b085-7e695611704c-kube-api-access-29f2s" (OuterVolumeSpecName: "kube-api-access-29f2s") pod "4935dd62-d692-4f3f-b085-7e695611704c" (UID: "4935dd62-d692-4f3f-b085-7e695611704c"). InnerVolumeSpecName "kube-api-access-29f2s". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 16:38:30.084606 kubelet[2712]: I0516 16:38:30.084532 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4935dd62-d692-4f3f-b085-7e695611704c-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "4935dd62-d692-4f3f-b085-7e695611704c" (UID: "4935dd62-d692-4f3f-b085-7e695611704c"). 
InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 16:38:30.085539 systemd[1]: var-lib-kubelet-pods-4935dd62\x2dd692\x2d4f3f\x2db085\x2d7e695611704c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d29f2s.mount: Deactivated successfully. May 16 16:38:30.085664 systemd[1]: var-lib-kubelet-pods-4935dd62\x2dd692\x2d4f3f\x2db085\x2d7e695611704c-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 16 16:38:30.179867 kubelet[2712]: I0516 16:38:30.179803 2712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29f2s\" (UniqueName: \"kubernetes.io/projected/4935dd62-d692-4f3f-b085-7e695611704c-kube-api-access-29f2s\") on node \"localhost\" DevicePath \"\"" May 16 16:38:30.179867 kubelet[2712]: I0516 16:38:30.179845 2712 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4935dd62-d692-4f3f-b085-7e695611704c-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 16 16:38:30.372736 kubelet[2712]: E0516 16:38:30.372685 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5784d44d8b-w7g52" podUID="d1830161-185c-4edf-ba88-ac20dff9bb5d" May 16 16:38:30.823736 systemd[1]: Removed slice kubepods-besteffort-pod4935dd62_d692_4f3f_b085_7e695611704c.slice - libcontainer container kubepods-besteffort-pod4935dd62_d692_4f3f_b085_7e695611704c.slice. May 16 16:38:30.823836 systemd[1]: kubepods-besteffort-pod4935dd62_d692_4f3f_b085_7e695611704c.slice: Consumed 1.028s CPU time, 40.3M memory peak. 
May 16 16:38:31.373808 kubelet[2712]: I0516 16:38:31.373754 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4935dd62-d692-4f3f-b085-7e695611704c" path="/var/lib/kubelet/pods/4935dd62-d692-4f3f-b085-7e695611704c/volumes" May 16 16:38:32.371174 kubelet[2712]: E0516 16:38:32.371123 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:33.282372 systemd[1]: Started sshd@16-10.0.0.37:22-10.0.0.1:53584.service - OpenSSH per-connection server daemon (10.0.0.1:53584). May 16 16:38:33.341799 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 53584 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:33.343603 sshd-session[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:33.348429 systemd-logind[1582]: New session 17 of user core. May 16 16:38:33.356471 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 16:38:33.476714 sshd[5570]: Connection closed by 10.0.0.1 port 53584 May 16 16:38:33.477031 sshd-session[5568]: pam_unix(sshd:session): session closed for user core May 16 16:38:33.482590 systemd[1]: sshd@16-10.0.0.37:22-10.0.0.1:53584.service: Deactivated successfully. May 16 16:38:33.485543 systemd[1]: session-17.scope: Deactivated successfully. May 16 16:38:33.486472 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit. May 16 16:38:33.488099 systemd-logind[1582]: Removed session 17. 
May 16 16:38:36.610899 containerd[1606]: time="2025-05-16T16:38:36.610859817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4ae23a4d938d66a3d4dc6a28291a70c42c9a76c0d21db05400db7f2eca3d692\" id:\"a9003e0c954d40473c4ba296a97f0e3030c5e34188acae465035aab94997c3c0\" pid:5598 exited_at:{seconds:1747413516 nanos:610686214}" May 16 16:38:38.494929 systemd[1]: Started sshd@17-10.0.0.37:22-10.0.0.1:43268.service - OpenSSH per-connection server daemon (10.0.0.1:43268). May 16 16:38:38.556132 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 43268 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:38.558295 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:38.564374 systemd-logind[1582]: New session 18 of user core. May 16 16:38:38.573403 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 16:38:38.706997 sshd[5612]: Connection closed by 10.0.0.1 port 43268 May 16 16:38:38.707313 sshd-session[5610]: pam_unix(sshd:session): session closed for user core May 16 16:38:38.719156 systemd[1]: sshd@17-10.0.0.37:22-10.0.0.1:43268.service: Deactivated successfully. May 16 16:38:38.721451 systemd[1]: session-18.scope: Deactivated successfully. May 16 16:38:38.722471 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit. May 16 16:38:38.726428 systemd[1]: Started sshd@18-10.0.0.37:22-10.0.0.1:43278.service - OpenSSH per-connection server daemon (10.0.0.1:43278). May 16 16:38:38.727263 systemd-logind[1582]: Removed session 18. May 16 16:38:38.769820 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 43278 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:38.771564 sshd-session[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:38.776158 systemd-logind[1582]: New session 19 of user core. 
May 16 16:38:38.788496 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 16:38:39.058048 sshd[5628]: Connection closed by 10.0.0.1 port 43278 May 16 16:38:39.058907 sshd-session[5626]: pam_unix(sshd:session): session closed for user core May 16 16:38:39.069699 systemd[1]: sshd@18-10.0.0.37:22-10.0.0.1:43278.service: Deactivated successfully. May 16 16:38:39.072037 systemd[1]: session-19.scope: Deactivated successfully. May 16 16:38:39.072940 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit. May 16 16:38:39.076385 systemd[1]: Started sshd@19-10.0.0.37:22-10.0.0.1:43292.service - OpenSSH per-connection server daemon (10.0.0.1:43292). May 16 16:38:39.077458 systemd-logind[1582]: Removed session 19. May 16 16:38:39.137251 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 43292 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:39.138763 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:39.143634 systemd-logind[1582]: New session 20 of user core. May 16 16:38:39.158445 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 16:38:40.979034 sshd[5642]: Connection closed by 10.0.0.1 port 43292 May 16 16:38:40.981070 sshd-session[5640]: pam_unix(sshd:session): session closed for user core May 16 16:38:40.989374 systemd[1]: sshd@19-10.0.0.37:22-10.0.0.1:43292.service: Deactivated successfully. May 16 16:38:40.991625 systemd[1]: session-20.scope: Deactivated successfully. May 16 16:38:40.991882 systemd[1]: session-20.scope: Consumed 604ms CPU time, 73M memory peak. May 16 16:38:40.992494 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit. May 16 16:38:40.996008 systemd-logind[1582]: Removed session 20. May 16 16:38:41.000207 systemd[1]: Started sshd@20-10.0.0.37:22-10.0.0.1:43300.service - OpenSSH per-connection server daemon (10.0.0.1:43300). 
May 16 16:38:41.046477 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 43300 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:41.048133 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:41.052863 systemd-logind[1582]: New session 21 of user core. May 16 16:38:41.062419 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 16:38:41.373077 containerd[1606]: time="2025-05-16T16:38:41.372965861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 16:38:41.464313 sshd[5666]: Connection closed by 10.0.0.1 port 43300 May 16 16:38:41.465150 sshd-session[5664]: pam_unix(sshd:session): session closed for user core May 16 16:38:41.474655 systemd[1]: sshd@20-10.0.0.37:22-10.0.0.1:43300.service: Deactivated successfully. May 16 16:38:41.477106 systemd[1]: session-21.scope: Deactivated successfully. May 16 16:38:41.479995 systemd-logind[1582]: Session 21 logged out. Waiting for processes to exit. May 16 16:38:41.482609 systemd[1]: Started sshd@21-10.0.0.37:22-10.0.0.1:43304.service - OpenSSH per-connection server daemon (10.0.0.1:43304). May 16 16:38:41.484374 systemd-logind[1582]: Removed session 21. May 16 16:38:41.527507 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 43304 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:41.529445 sshd-session[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:41.534318 systemd-logind[1582]: New session 22 of user core. May 16 16:38:41.539555 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 16:38:41.659565 sshd[5686]: Connection closed by 10.0.0.1 port 43304 May 16 16:38:41.659805 sshd-session[5684]: pam_unix(sshd:session): session closed for user core May 16 16:38:41.664529 systemd-logind[1582]: Session 22 logged out. Waiting for processes to exit. 
May 16 16:38:41.664921 systemd[1]: sshd@21-10.0.0.37:22-10.0.0.1:43304.service: Deactivated successfully. May 16 16:38:41.669900 systemd[1]: session-22.scope: Deactivated successfully. May 16 16:38:41.673570 systemd-logind[1582]: Removed session 22. May 16 16:38:41.721415 containerd[1606]: time="2025-05-16T16:38:41.721367570Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:38:41.722670 containerd[1606]: time="2025-05-16T16:38:41.722551809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:38:41.722670 containerd[1606]: time="2025-05-16T16:38:41.722612976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 16:38:41.723086 kubelet[2712]: E0516 16:38:41.722807 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:38:41.723086 kubelet[2712]: E0516 16:38:41.722884 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:38:41.723086 kubelet[2712]: E0516 16:38:41.723020 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5d4eda0e24a84d1ba6397664228d76dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kf26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-5784d44d8b-w7g52_calico-system(d1830161-185c-4edf-ba88-ac20dff9bb5d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:38:41.725197 containerd[1606]: time="2025-05-16T16:38:41.725148092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 16:38:41.729912 containerd[1606]: time="2025-05-16T16:38:41.729809883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8073e647c3a5f5282145b29ff860e3f6b2d5208a4b04502a351d7f418ecac1d3\" id:\"9d86a64a0385d54d3757b30c26831330d6e6cca256a2e64a5598dbd7cf42d7bc\" pid:5707 exited_at:{seconds:1747413521 nanos:729349501}" May 16 16:38:41.962989 containerd[1606]: time="2025-05-16T16:38:41.962783507Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:38:41.964577 containerd[1606]: time="2025-05-16T16:38:41.964411466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:38:41.964577 containerd[1606]: time="2025-05-16T16:38:41.964496960Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 16:38:41.964831 kubelet[2712]: E0516 16:38:41.964730 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 16:38:41.964831 kubelet[2712]: E0516 16:38:41.964786 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 16:38:41.964971 kubelet[2712]: E0516 16:38:41.964933 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kf26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5784d44d8b-w7g52_calico-system(d1830161-185c-4edf-ba88-ac20dff9bb5d): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:38:41.966840 kubelet[2712]: E0516 16:38:41.966791 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-5784d44d8b-w7g52" podUID="d1830161-185c-4edf-ba88-ac20dff9bb5d" May 16 16:38:42.372451 containerd[1606]: time="2025-05-16T16:38:42.372370170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 16:38:42.607534 containerd[1606]: time="2025-05-16T16:38:42.607475191Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:38:42.629892 containerd[1606]: time="2025-05-16T16:38:42.629734820Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:38:42.629892 containerd[1606]: time="2025-05-16T16:38:42.629779035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 16:38:42.630140 kubelet[2712]: E0516 16:38:42.630077 2712 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 16:38:42.630212 kubelet[2712]: E0516 16:38:42.630153 2712 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 16:38:42.630464 kubelet[2712]: E0516 16:38:42.630399 2712 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hrrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-bv4q6_calico-system(bf47523c-1e81-4bbe-a80b-55b0036c2140): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:38:42.631599 kubelet[2712]: E0516 16:38:42.631568 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-bv4q6" podUID="bf47523c-1e81-4bbe-a80b-55b0036c2140" May 16 16:38:43.371861 kubelet[2712]: E0516 
16:38:43.371798 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:45.790432 containerd[1606]: time="2025-05-16T16:38:45.790380848Z" level=info msg="StopContainer for \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" with timeout 30 (s)" May 16 16:38:45.792594 containerd[1606]: time="2025-05-16T16:38:45.792487896Z" level=info msg="Stop container \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" with signal terminated" May 16 16:38:45.812371 systemd[1]: cri-containerd-3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a.scope: Deactivated successfully. May 16 16:38:45.814475 containerd[1606]: time="2025-05-16T16:38:45.814402430Z" level=info msg="received exit event container_id:\"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" id:\"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" pid:4744 exit_status:1 exited_at:{seconds:1747413525 nanos:812999287}" May 16 16:38:45.817234 containerd[1606]: time="2025-05-16T16:38:45.816428713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" id:\"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" pid:4744 exit_status:1 exited_at:{seconds:1747413525 nanos:812999287}" May 16 16:38:45.853934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a-rootfs.mount: Deactivated successfully. 
May 16 16:38:45.869798 containerd[1606]: time="2025-05-16T16:38:45.869736962Z" level=info msg="StopContainer for \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" returns successfully" May 16 16:38:45.870310 containerd[1606]: time="2025-05-16T16:38:45.870238181Z" level=info msg="StopPodSandbox for \"9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344\"" May 16 16:38:45.870472 containerd[1606]: time="2025-05-16T16:38:45.870453411Z" level=info msg="Container to stop \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 16:38:45.879425 systemd[1]: cri-containerd-9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344.scope: Deactivated successfully. May 16 16:38:45.882885 containerd[1606]: time="2025-05-16T16:38:45.882636643Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344\" id:\"9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344\" pid:4394 exit_status:137 exited_at:{seconds:1747413525 nanos:881757602}" May 16 16:38:45.950903 containerd[1606]: time="2025-05-16T16:38:45.950819728Z" level=info msg="shim disconnected" id=9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344 namespace=k8s.io May 16 16:38:45.950903 containerd[1606]: time="2025-05-16T16:38:45.950874343Z" level=warning msg="cleaning up after shim disconnected" id=9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344 namespace=k8s.io May 16 16:38:45.951271 containerd[1606]: time="2025-05-16T16:38:45.950886205Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 16:38:45.952890 containerd[1606]: time="2025-05-16T16:38:45.952500591Z" level=info msg="received exit event sandbox_id:\"9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344\" exit_status:137 exited_at:{seconds:1747413525 nanos:881757602}" May 16 16:38:45.952948 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344-rootfs.mount: Deactivated successfully. May 16 16:38:45.959606 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344-shm.mount: Deactivated successfully. May 16 16:38:46.162712 systemd-networkd[1494]: cali0bdb184a241: Link DOWN May 16 16:38:46.163153 systemd-networkd[1494]: cali0bdb184a241: Lost carrier May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.159 [INFO][5797] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.161 [INFO][5797] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" iface="eth0" netns="/var/run/netns/cni-c081adb7-863d-4d3d-ec5b-22c622d33caf" May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.161 [INFO][5797] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" iface="eth0" netns="/var/run/netns/cni-c081adb7-863d-4d3d-ec5b-22c622d33caf" May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.169 [INFO][5797] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" after=8.312894ms iface="eth0" netns="/var/run/netns/cni-c081adb7-863d-4d3d-ec5b-22c622d33caf" May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.169 [INFO][5797] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.169 [INFO][5797] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.196 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" HandleID="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Workload="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.196 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.196 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.230 [INFO][5810] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" HandleID="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Workload="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.230 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" HandleID="k8s-pod-network.9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" Workload="localhost-k8s-calico--apiserver--5f85b45fd--5gtdb-eth0" May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.232 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:38:46.239961 containerd[1606]: 2025-05-16 16:38:46.236 [INFO][5797] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344" May 16 16:38:46.243939 systemd[1]: run-netns-cni\x2dc081adb7\x2d863d\x2d4d3d\x2dec5b\x2d22c622d33caf.mount: Deactivated successfully. 
May 16 16:38:46.244686 containerd[1606]: time="2025-05-16T16:38:46.244494822Z" level=info msg="TearDown network for sandbox \"9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344\" successfully" May 16 16:38:46.244686 containerd[1606]: time="2025-05-16T16:38:46.244540249Z" level=info msg="StopPodSandbox for \"9f666b738e6a4f60dcc5c03599b83344968f4acea3c79e57b9e9e598d6a57344\" returns successfully" May 16 16:38:46.381555 kubelet[2712]: I0516 16:38:46.381487 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a58fb16-fb0c-422a-a589-02591230be6e-calico-apiserver-certs\") pod \"9a58fb16-fb0c-422a-a589-02591230be6e\" (UID: \"9a58fb16-fb0c-422a-a589-02591230be6e\") " May 16 16:38:46.381555 kubelet[2712]: I0516 16:38:46.381542 2712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dpnt\" (UniqueName: \"kubernetes.io/projected/9a58fb16-fb0c-422a-a589-02591230be6e-kube-api-access-8dpnt\") pod \"9a58fb16-fb0c-422a-a589-02591230be6e\" (UID: \"9a58fb16-fb0c-422a-a589-02591230be6e\") " May 16 16:38:46.388857 kubelet[2712]: I0516 16:38:46.388818 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a58fb16-fb0c-422a-a589-02591230be6e-kube-api-access-8dpnt" (OuterVolumeSpecName: "kube-api-access-8dpnt") pod "9a58fb16-fb0c-422a-a589-02591230be6e" (UID: "9a58fb16-fb0c-422a-a589-02591230be6e"). InnerVolumeSpecName "kube-api-access-8dpnt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 16:38:46.389380 kubelet[2712]: I0516 16:38:46.389323 2712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a58fb16-fb0c-422a-a589-02591230be6e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9a58fb16-fb0c-422a-a589-02591230be6e" (UID: "9a58fb16-fb0c-422a-a589-02591230be6e"). 
InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 16:38:46.390756 systemd[1]: var-lib-kubelet-pods-9a58fb16\x2dfb0c\x2d422a\x2da589\x2d02591230be6e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8dpnt.mount: Deactivated successfully. May 16 16:38:46.390879 systemd[1]: var-lib-kubelet-pods-9a58fb16\x2dfb0c\x2d422a\x2da589\x2d02591230be6e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 16 16:38:46.482746 kubelet[2712]: I0516 16:38:46.482611 2712 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a58fb16-fb0c-422a-a589-02591230be6e-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" May 16 16:38:46.482746 kubelet[2712]: I0516 16:38:46.482645 2712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dpnt\" (UniqueName: \"kubernetes.io/projected/9a58fb16-fb0c-422a-a589-02591230be6e-kube-api-access-8dpnt\") on node \"localhost\" DevicePath \"\"" May 16 16:38:46.672601 systemd[1]: Started sshd@22-10.0.0.37:22-10.0.0.1:55086.service - OpenSSH per-connection server daemon (10.0.0.1:55086). May 16 16:38:46.740888 sshd[5824]: Accepted publickey for core from 10.0.0.1 port 55086 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:46.742404 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:46.748999 systemd-logind[1582]: New session 23 of user core. May 16 16:38:46.755403 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 16 16:38:46.858306 kubelet[2712]: I0516 16:38:46.856266 2712 scope.go:117] "RemoveContainer" containerID="3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a" May 16 16:38:46.861271 containerd[1606]: time="2025-05-16T16:38:46.861242659Z" level=info msg="RemoveContainer for \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\"" May 16 16:38:46.867106 systemd[1]: Removed slice kubepods-besteffort-pod9a58fb16_fb0c_422a_a589_02591230be6e.slice - libcontainer container kubepods-besteffort-pod9a58fb16_fb0c_422a_a589_02591230be6e.slice. May 16 16:38:46.875914 containerd[1606]: time="2025-05-16T16:38:46.875760911Z" level=info msg="RemoveContainer for \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" returns successfully" May 16 16:38:46.876117 kubelet[2712]: I0516 16:38:46.876068 2712 scope.go:117] "RemoveContainer" containerID="3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a" May 16 16:38:46.876645 containerd[1606]: time="2025-05-16T16:38:46.876618429Z" level=error msg="ContainerStatus for \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\": not found" May 16 16:38:46.876939 kubelet[2712]: E0516 16:38:46.876922 2712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\": not found" containerID="3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a" May 16 16:38:46.877095 kubelet[2712]: I0516 16:38:46.877026 2712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a"} err="failed to get container status 
\"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3efba425dc6c7bedcec70716de900482d1dce1415d3701e8b012da7874d3115a\": not found" May 16 16:38:46.892816 sshd[5826]: Connection closed by 10.0.0.1 port 55086 May 16 16:38:46.892987 sshd-session[5824]: pam_unix(sshd:session): session closed for user core May 16 16:38:46.898440 systemd[1]: sshd@22-10.0.0.37:22-10.0.0.1:55086.service: Deactivated successfully. May 16 16:38:46.901329 systemd[1]: session-23.scope: Deactivated successfully. May 16 16:38:46.902155 systemd-logind[1582]: Session 23 logged out. Waiting for processes to exit. May 16 16:38:46.903899 systemd-logind[1582]: Removed session 23. May 16 16:38:47.374302 kubelet[2712]: I0516 16:38:47.374236 2712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a58fb16-fb0c-422a-a589-02591230be6e" path="/var/lib/kubelet/pods/9a58fb16-fb0c-422a-a589-02591230be6e/volumes" May 16 16:38:51.372775 kubelet[2712]: E0516 16:38:51.372723 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:51.908637 systemd[1]: Started sshd@23-10.0.0.37:22-10.0.0.1:55098.service - OpenSSH per-connection server daemon (10.0.0.1:55098). May 16 16:38:51.950688 sshd[5844]: Accepted publickey for core from 10.0.0.1 port 55098 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:51.952471 sshd-session[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:51.956556 systemd-logind[1582]: New session 24 of user core. May 16 16:38:51.961407 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 16 16:38:52.075397 sshd[5846]: Connection closed by 10.0.0.1 port 55098 May 16 16:38:52.075731 sshd-session[5844]: pam_unix(sshd:session): session closed for user core May 16 16:38:52.080362 systemd[1]: sshd@23-10.0.0.37:22-10.0.0.1:55098.service: Deactivated successfully. May 16 16:38:52.082483 systemd[1]: session-24.scope: Deactivated successfully. May 16 16:38:52.083316 systemd-logind[1582]: Session 24 logged out. Waiting for processes to exit. May 16 16:38:52.084461 systemd-logind[1582]: Removed session 24. May 16 16:38:52.371477 kubelet[2712]: E0516 16:38:52.371444 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:38:55.373446 kubelet[2712]: E0516 16:38:55.373396 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5784d44d8b-w7g52" podUID="d1830161-185c-4edf-ba88-ac20dff9bb5d" May 16 16:38:56.372766 kubelet[2712]: E0516 16:38:56.372702 2712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-bv4q6" podUID="bf47523c-1e81-4bbe-a80b-55b0036c2140" May 16 16:38:57.092480 systemd[1]: Started sshd@24-10.0.0.37:22-10.0.0.1:42340.service - OpenSSH per-connection server daemon (10.0.0.1:42340). 
May 16 16:38:57.135475 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 42340 ssh2: RSA SHA256:Wy0GtjAGKBMJZEstoKGtVndSgGKRDnpvy2VDQAg/LUo May 16 16:38:57.137179 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:38:57.141722 systemd-logind[1582]: New session 25 of user core. May 16 16:38:57.147457 systemd[1]: Started session-25.scope - Session 25 of User core. May 16 16:38:57.263313 sshd[5865]: Connection closed by 10.0.0.1 port 42340 May 16 16:38:57.263654 sshd-session[5863]: pam_unix(sshd:session): session closed for user core May 16 16:38:57.268258 systemd[1]: sshd@24-10.0.0.37:22-10.0.0.1:42340.service: Deactivated successfully. May 16 16:38:57.270570 systemd[1]: session-25.scope: Deactivated successfully. May 16 16:38:57.271526 systemd-logind[1582]: Session 25 logged out. Waiting for processes to exit. May 16 16:38:57.272846 systemd-logind[1582]: Removed session 25. May 16 16:38:58.371952 kubelet[2712]: E0516 16:38:58.371890 2712 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"