Sep 12 22:53:15.331068 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 20:38:35 -00 2025
Sep 12 22:53:15.331098 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 22:53:15.331112 kernel: BIOS-provided physical RAM map:
Sep 12 22:53:15.331120 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 22:53:15.331129 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 12 22:53:15.331138 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 12 22:53:15.331148 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 12 22:53:15.331157 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 12 22:53:15.331169 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 12 22:53:15.331181 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 12 22:53:15.331190 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 12 22:53:15.331198 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 12 22:53:15.331207 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 12 22:53:15.331216 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 12 22:53:15.331227 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 12 22:53:15.331240 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 12 22:53:15.331252 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 12 22:53:15.331262 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 12 22:53:15.331271 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 12 22:53:15.331280 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 12 22:53:15.331290 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 12 22:53:15.331299 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 12 22:53:15.331309 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 22:53:15.331318 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 22:53:15.331327 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 12 22:53:15.331339 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 22:53:15.331349 kernel: NX (Execute Disable) protection: active
Sep 12 22:53:15.331358 kernel: APIC: Static calls initialized
Sep 12 22:53:15.331367 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 12 22:53:15.331377 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 12 22:53:15.331386 kernel: extended physical RAM map:
Sep 12 22:53:15.331396 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 22:53:15.331405 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 12 22:53:15.331415 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 12 22:53:15.331424 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 12 22:53:15.331434 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 12 22:53:15.331446 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 12 22:53:15.331455 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 12 22:53:15.331465 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 12 22:53:15.331474 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 12 22:53:15.331498 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 12 22:53:15.331507 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 12 22:53:15.331519 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 12 22:53:15.331530 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 12 22:53:15.331540 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 12 22:53:15.331550 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 12 22:53:15.331559 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 12 22:53:15.331569 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 12 22:53:15.331579 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 12 22:53:15.331589 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 12 22:53:15.331598 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 12 22:53:15.331638 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 12 22:53:15.331652 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 12 22:53:15.331662 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 12 22:53:15.331671 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 22:53:15.331681 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 22:53:15.331691 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 12 22:53:15.331701 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 22:53:15.331714 kernel: efi: EFI v2.7 by EDK II
Sep 12 22:53:15.331724 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 12 22:53:15.331734 kernel: random: crng init done
Sep 12 22:53:15.331746 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 12 22:53:15.331756 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 12 22:53:15.331771 kernel: secureboot: Secure boot disabled
Sep 12 22:53:15.331781 kernel: SMBIOS 2.8 present.
Sep 12 22:53:15.331791 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 12 22:53:15.331800 kernel: DMI: Memory slots populated: 1/1
Sep 12 22:53:15.331810 kernel: Hypervisor detected: KVM
Sep 12 22:53:15.331820 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 22:53:15.331829 kernel: kvm-clock: using sched offset of 5467826575 cycles
Sep 12 22:53:15.331840 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 22:53:15.331850 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 22:53:15.331861 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 22:53:15.331871 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 22:53:15.331884 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 12 22:53:15.331895 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 22:53:15.331905 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 22:53:15.331915 kernel: Using GB pages for direct mapping
Sep 12 22:53:15.331924 kernel: ACPI: Early table checksum verification disabled
Sep 12 22:53:15.331934 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 12 22:53:15.331944 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 22:53:15.331953 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:53:15.331963 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:53:15.331977 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 12 22:53:15.331987 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:53:15.331998 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:53:15.332008 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:53:15.332018 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 22:53:15.332028 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 22:53:15.332038 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 12 22:53:15.332048 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 12 22:53:15.332061 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 12 22:53:15.332071 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 12 22:53:15.332079 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 12 22:53:15.332088 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 12 22:53:15.332098 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 12 22:53:15.332107 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 12 22:53:15.332117 kernel: No NUMA configuration found
Sep 12 22:53:15.332127 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 12 22:53:15.332137 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 12 22:53:15.332148 kernel: Zone ranges:
Sep 12 22:53:15.332161 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 22:53:15.332171 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 12 22:53:15.332181 kernel: Normal empty
Sep 12 22:53:15.332191 kernel: Device empty
Sep 12 22:53:15.332201 kernel: Movable zone start for each node
Sep 12 22:53:15.332211 kernel: Early memory node ranges
Sep 12 22:53:15.332221 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 22:53:15.332231 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 12 22:53:15.332245 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 12 22:53:15.332258 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 12 22:53:15.332268 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 12 22:53:15.332278 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 12 22:53:15.332288 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 12 22:53:15.332298 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 12 22:53:15.332308 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 12 22:53:15.332318 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 22:53:15.332331 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 22:53:15.332353 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 12 22:53:15.332363 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 22:53:15.332374 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 12 22:53:15.332384 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 12 22:53:15.332398 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 22:53:15.332408 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 12 22:53:15.332419 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 12 22:53:15.332429 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 22:53:15.332440 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 22:53:15.332453 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 22:53:15.332463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 22:53:15.332472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 22:53:15.332491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 22:53:15.332499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 22:53:15.332507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 22:53:15.332515 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 22:53:15.332523 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 22:53:15.332531 kernel: TSC deadline timer available
Sep 12 22:53:15.332541 kernel: CPU topo: Max. logical packages: 1
Sep 12 22:53:15.332549 kernel: CPU topo: Max. logical dies: 1
Sep 12 22:53:15.332557 kernel: CPU topo: Max. dies per package: 1
Sep 12 22:53:15.332564 kernel: CPU topo: Max. threads per core: 1
Sep 12 22:53:15.332572 kernel: CPU topo: Num. cores per package: 4
Sep 12 22:53:15.332580 kernel: CPU topo: Num. threads per package: 4
Sep 12 22:53:15.332588 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 22:53:15.332596 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 22:53:15.332681 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 22:53:15.332697 kernel: kvm-guest: setup PV sched yield
Sep 12 22:53:15.332708 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 12 22:53:15.332719 kernel: Booting paravirtualized kernel on KVM
Sep 12 22:53:15.332729 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 22:53:15.332740 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 22:53:15.332751 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 22:53:15.332761 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 22:53:15.332772 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 22:53:15.332782 kernel: kvm-guest: PV spinlocks enabled
Sep 12 22:53:15.332797 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 22:53:15.332809 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 22:53:15.332824 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 22:53:15.332834 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 22:53:15.332845 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 22:53:15.332855 kernel: Fallback order for Node 0: 0
Sep 12 22:53:15.332866 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 12 22:53:15.332877 kernel: Policy zone: DMA32
Sep 12 22:53:15.332890 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 22:53:15.332901 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 22:53:15.332911 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 22:53:15.332922 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 22:53:15.332932 kernel: Dynamic Preempt: voluntary
Sep 12 22:53:15.332942 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 22:53:15.332954 kernel: rcu: RCU event tracing is enabled.
Sep 12 22:53:15.332965 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 22:53:15.332975 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 22:53:15.332989 kernel: Rude variant of Tasks RCU enabled.
Sep 12 22:53:15.332999 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 22:53:15.333010 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 22:53:15.333024 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 22:53:15.333035 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 22:53:15.333046 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 22:53:15.333056 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 22:53:15.333067 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 22:53:15.333077 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 22:53:15.333091 kernel: Console: colour dummy device 80x25
Sep 12 22:53:15.333101 kernel: printk: legacy console [ttyS0] enabled
Sep 12 22:53:15.333112 kernel: ACPI: Core revision 20240827
Sep 12 22:53:15.333123 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 22:53:15.333133 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 22:53:15.333144 kernel: x2apic enabled
Sep 12 22:53:15.333154 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 22:53:15.333165 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 22:53:15.333175 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 22:53:15.333202 kernel: kvm-guest: setup PV IPIs
Sep 12 22:53:15.333216 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 22:53:15.333227 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 22:53:15.333249 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 22:53:15.333260 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 22:53:15.333271 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 22:53:15.333281 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 22:53:15.333292 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 22:53:15.333302 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 22:53:15.333317 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 22:53:15.333339 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 22:53:15.333370 kernel: active return thunk: retbleed_return_thunk
Sep 12 22:53:15.333391 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 22:53:15.333424 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 22:53:15.333436 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 22:53:15.333447 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 22:53:15.333458 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 22:53:15.333469 kernel: active return thunk: srso_return_thunk
Sep 12 22:53:15.333513 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 22:53:15.333536 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 22:53:15.333547 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 22:53:15.333558 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 22:53:15.333568 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 22:53:15.333579 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 22:53:15.333590 kernel: Freeing SMP alternatives memory: 32K
Sep 12 22:53:15.333601 kernel: pid_max: default: 32768 minimum: 301
Sep 12 22:53:15.333628 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 22:53:15.333643 kernel: landlock: Up and running.
Sep 12 22:53:15.333653 kernel: SELinux: Initializing.
Sep 12 22:53:15.333664 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 22:53:15.333675 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 22:53:15.333685 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 22:53:15.333696 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 22:53:15.333706 kernel: ... version: 0
Sep 12 22:53:15.333716 kernel: ... bit width: 48
Sep 12 22:53:15.333727 kernel: ... generic registers: 6
Sep 12 22:53:15.333740 kernel: ... value mask: 0000ffffffffffff
Sep 12 22:53:15.333751 kernel: ... max period: 00007fffffffffff
Sep 12 22:53:15.333761 kernel: ... fixed-purpose events: 0
Sep 12 22:53:15.333772 kernel: ... event mask: 000000000000003f
Sep 12 22:53:15.333782 kernel: signal: max sigframe size: 1776
Sep 12 22:53:15.333792 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 22:53:15.333803 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 22:53:15.333817 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 22:53:15.333828 kernel: smp: Bringing up secondary CPUs ...
Sep 12 22:53:15.333842 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 22:53:15.333852 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 22:53:15.333863 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 22:53:15.333873 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 22:53:15.333884 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54084K init, 2880K bss, 137196K reserved, 0K cma-reserved)
Sep 12 22:53:15.333895 kernel: devtmpfs: initialized
Sep 12 22:53:15.333905 kernel: x86/mm: Memory block size: 128MB
Sep 12 22:53:15.333916 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 12 22:53:15.333927 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 12 22:53:15.333940 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 12 22:53:15.333951 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 12 22:53:15.333961 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 12 22:53:15.333972 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 12 22:53:15.333983 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 22:53:15.333994 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 22:53:15.334004 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 22:53:15.334015 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 22:53:15.334028 kernel: audit: initializing netlink subsys (disabled)
Sep 12 22:53:15.334038 kernel: audit: type=2000 audit(1757717591.775:1): state=initialized audit_enabled=0 res=1
Sep 12 22:53:15.334049 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 22:53:15.334060 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 22:53:15.334070 kernel: cpuidle: using governor menu
Sep 12 22:53:15.334081 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 22:53:15.334092 kernel: dca service started, version 1.12.1
Sep 12 22:53:15.334102 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 12 22:53:15.334113 kernel: PCI: Using configuration type 1 for base access
Sep 12 22:53:15.334127 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 22:53:15.334138 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 22:53:15.334148 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 22:53:15.334159 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 22:53:15.334170 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 22:53:15.334180 kernel: ACPI: Added _OSI(Module Device)
Sep 12 22:53:15.334190 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 22:53:15.334201 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 22:53:15.334211 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 22:53:15.334223 kernel: ACPI: Interpreter enabled
Sep 12 22:53:15.334231 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 22:53:15.334239 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 22:53:15.334247 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 22:53:15.334255 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 22:53:15.334263 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 22:53:15.334271 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 22:53:15.334690 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 22:53:15.334865 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 22:53:15.335053 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 22:53:15.335070 kernel: PCI host bridge to bus 0000:00
Sep 12 22:53:15.335328 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 22:53:15.335468 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 22:53:15.335647 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 22:53:15.335791 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 12 22:53:15.335938 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 12 22:53:15.336098 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 12 22:53:15.336240 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 22:53:15.336460 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 22:53:15.336695 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 22:53:15.336878 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 12 22:53:15.337098 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 12 22:53:15.337262 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 22:53:15.337418 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 22:53:15.337569 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 19531 usecs
Sep 12 22:53:15.337754 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 22:53:15.337920 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 12 22:53:15.338077 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 12 22:53:15.338270 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 12 22:53:15.338500 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 22:53:15.338685 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 12 22:53:15.338859 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 12 22:53:15.339027 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 12 22:53:15.339212 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 22:53:15.339366 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 12 22:53:15.339533 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 12 22:53:15.339687 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 12 22:53:15.339814 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 12 22:53:15.340029 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 22:53:15.340203 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 22:53:15.340416 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 22:53:15.340575 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 12 22:53:15.340733 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 12 22:53:15.340951 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 22:53:15.341133 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 12 22:53:15.341151 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 22:53:15.341162 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 22:53:15.341173 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 22:53:15.341184 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 22:53:15.341197 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 22:53:15.341206 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 22:53:15.341214 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 22:53:15.341222 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 22:53:15.341232 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 22:53:15.341243 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 22:53:15.341253 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 22:53:15.341264 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 22:53:15.341275 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 22:53:15.341289 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 22:53:15.341300 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 22:53:15.341311 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 22:53:15.341322 kernel: iommu: Default domain type: Translated
Sep 12 22:53:15.341332 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 22:53:15.341343 kernel: efivars: Registered efivars operations
Sep 12 22:53:15.341354 kernel: PCI: Using ACPI for IRQ routing
Sep 12 22:53:15.341365 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 22:53:15.341376 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 12 22:53:15.341390 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 12 22:53:15.341401 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 12 22:53:15.341411 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 12 22:53:15.341422 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 12 22:53:15.341432 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 12 22:53:15.341443 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 12 22:53:15.341452 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 12 22:53:15.341636 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 22:53:15.341832 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 22:53:15.341976 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 22:53:15.341987 kernel: vgaarb: loaded
Sep 12 22:53:15.341996 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 22:53:15.342004 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 22:53:15.342012 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 22:53:15.342020 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 22:53:15.342033 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 22:53:15.342056 kernel: pnp: PnP ACPI init
Sep 12 22:53:15.342253 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 12 22:53:15.342272 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 22:53:15.342284 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 22:53:15.342295 kernel: NET: Registered PF_INET protocol family
Sep 12 22:53:15.342307 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 22:53:15.342318 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 22:53:15.342329 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 22:53:15.342341 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 22:53:15.342357 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 22:53:15.342368 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 22:53:15.342379 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 22:53:15.342391 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 22:53:15.342402 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 22:53:15.342413 kernel: NET: Registered PF_XDP protocol family
Sep 12 22:53:15.342588 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 12 22:53:15.342766 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 12 22:53:15.342917 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 22:53:15.343064 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 22:53:15.343205 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 22:53:15.343363 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 12 22:53:15.343516 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 12 22:53:15.343797 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 12 22:53:15.343814 kernel: PCI: CLS 0 bytes, default 64
Sep 12 22:53:15.343830 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 22:53:15.343850 kernel: Initialise system trusted keyrings
Sep 12 22:53:15.343861 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 22:53:15.343872 kernel: Key type asymmetric registered
Sep 12 22:53:15.343882 kernel: Asymmetric key parser 'x509' registered
Sep 12 22:53:15.343893 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 22:53:15.343905 kernel: io scheduler mq-deadline registered
Sep 12 22:53:15.343920 kernel: io scheduler kyber registered
Sep 12 22:53:15.343928 kernel: io scheduler bfq registered
Sep 12 22:53:15.343937 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 22:53:15.343947 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 22:53:15.343955 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 22:53:15.343964 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 22:53:15.343972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 22:53:15.343980 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 22:53:15.343989 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 22:53:15.343998 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 22:53:15.344010 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 22:53:15.344198 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12
22:53:15.344223 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 22:53:15.344380 kernel: rtc_cmos 00:04: registered as rtc0 Sep 12 22:53:15.344540 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T22:53:14 UTC (1757717594) Sep 12 22:53:15.344718 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 12 22:53:15.344735 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 12 22:53:15.344750 kernel: efifb: probing for efifb Sep 12 22:53:15.344759 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 12 22:53:15.344767 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 12 22:53:15.344776 kernel: efifb: scrolling: redraw Sep 12 22:53:15.344784 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 22:53:15.344793 kernel: Console: switching to colour frame buffer device 160x50 Sep 12 22:53:15.344802 kernel: fb0: EFI VGA frame buffer device Sep 12 22:53:15.344810 kernel: pstore: Using crash dump compression: deflate Sep 12 22:53:15.344819 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 22:53:15.344827 kernel: NET: Registered PF_INET6 protocol family Sep 12 22:53:15.344838 kernel: Segment Routing with IPv6 Sep 12 22:53:15.344847 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 22:53:15.344855 kernel: NET: Registered PF_PACKET protocol family Sep 12 22:53:15.344864 kernel: Key type dns_resolver registered Sep 12 22:53:15.344872 kernel: IPI shorthand broadcast: enabled Sep 12 22:53:15.344881 kernel: sched_clock: Marking stable (4035005446, 220813827)->(4301634971, -45815698) Sep 12 22:53:15.344890 kernel: registered taskstats version 1 Sep 12 22:53:15.344898 kernel: Loading compiled-in X.509 certificates Sep 12 22:53:15.344907 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: c3297a5801573420030c321362a802da1fd49c4e' Sep 12 22:53:15.344918 kernel: Demotion targets for Node 0: null Sep 
12 22:53:15.344926 kernel: Key type .fscrypt registered Sep 12 22:53:15.344935 kernel: Key type fscrypt-provisioning registered Sep 12 22:53:15.344944 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 22:53:15.344952 kernel: ima: Allocated hash algorithm: sha1 Sep 12 22:53:15.344961 kernel: ima: No architecture policies found Sep 12 22:53:15.344969 kernel: clk: Disabling unused clocks Sep 12 22:53:15.344978 kernel: Warning: unable to open an initial console. Sep 12 22:53:15.344990 kernel: Freeing unused kernel image (initmem) memory: 54084K Sep 12 22:53:15.344998 kernel: Write protecting the kernel read-only data: 24576k Sep 12 22:53:15.345007 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 12 22:53:15.345015 kernel: Run /init as init process Sep 12 22:53:15.345024 kernel: with arguments: Sep 12 22:53:15.345032 kernel: /init Sep 12 22:53:15.345040 kernel: with environment: Sep 12 22:53:15.345049 kernel: HOME=/ Sep 12 22:53:15.345057 kernel: TERM=linux Sep 12 22:53:15.345066 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 22:53:15.345078 systemd[1]: Successfully made /usr/ read-only. Sep 12 22:53:15.345091 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 22:53:15.345100 systemd[1]: Detected virtualization kvm. Sep 12 22:53:15.345109 systemd[1]: Detected architecture x86-64. Sep 12 22:53:15.345118 systemd[1]: Running in initrd. Sep 12 22:53:15.345130 systemd[1]: No hostname configured, using default hostname. Sep 12 22:53:15.345142 systemd[1]: Hostname set to . Sep 12 22:53:15.345159 systemd[1]: Initializing machine ID from VM UUID. Sep 12 22:53:15.345171 systemd[1]: Queued start job for default target initrd.target. 
Sep 12 22:53:15.345183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 22:53:15.345195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 22:53:15.345209 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 22:53:15.345223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 22:53:15.345236 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 22:53:15.345253 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 22:53:15.345266 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 22:53:15.345278 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 22:53:15.345291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 22:53:15.345304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 22:53:15.345319 systemd[1]: Reached target paths.target - Path Units. Sep 12 22:53:15.345331 systemd[1]: Reached target slices.target - Slice Units. Sep 12 22:53:15.345343 systemd[1]: Reached target swap.target - Swaps. Sep 12 22:53:15.345355 systemd[1]: Reached target timers.target - Timer Units. Sep 12 22:53:15.345364 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 22:53:15.345373 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 22:53:15.345383 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 22:53:15.345392 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 12 22:53:15.345401 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:53:15.345413 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 22:53:15.345422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 22:53:15.345433 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 22:53:15.345445 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 22:53:15.345454 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 22:53:15.345464 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 22:53:15.345476 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 22:53:15.345503 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 22:53:15.345515 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 22:53:15.345527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 22:53:15.345538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:53:15.345554 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 22:53:15.345566 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 22:53:15.345577 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 22:53:15.345589 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 22:53:15.345702 systemd-journald[220]: Collecting audit messages is disabled. Sep 12 22:53:15.345731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 22:53:15.345743 systemd-journald[220]: Journal started Sep 12 22:53:15.345767 systemd-journald[220]: Runtime Journal (/run/log/journal/7eb62e37354e44179b507dbe611197fa) is 6M, max 48.4M, 42.4M free. Sep 12 22:53:15.335370 systemd-modules-load[223]: Inserted module 'overlay' Sep 12 22:53:15.349737 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 22:53:15.352751 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 22:53:15.367884 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 22:53:15.368330 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 22:53:15.373775 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 22:53:15.379657 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 22:53:15.383081 systemd-modules-load[223]: Inserted module 'br_netfilter' Sep 12 22:53:15.383790 kernel: Bridge firewalling registered Sep 12 22:53:15.388215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 22:53:15.388822 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 22:53:15.396494 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 22:53:15.400804 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:53:15.403758 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:53:15.404335 systemd-tmpfiles[240]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 22:53:15.417423 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 12 22:53:15.429746 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:53:15.433754 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 22:53:15.438659 dracut-cmdline[257]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40 Sep 12 22:53:15.507434 systemd-resolved[273]: Positive Trust Anchors: Sep 12 22:53:15.507453 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 22:53:15.507512 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 22:53:15.511266 systemd-resolved[273]: Defaulting to hostname 'linux'. Sep 12 22:53:15.512934 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 22:53:15.519577 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 22:53:15.595680 kernel: SCSI subsystem initialized Sep 12 22:53:15.606992 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 22:53:15.620834 kernel: iscsi: registered transport (tcp) Sep 12 22:53:15.647097 kernel: iscsi: registered transport (qla4xxx) Sep 12 22:53:15.647190 kernel: QLogic iSCSI HBA Driver Sep 12 22:53:15.673911 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 22:53:15.698092 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:53:15.701507 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 22:53:15.792501 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 22:53:15.795055 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 22:53:15.867669 kernel: raid6: avx2x4 gen() 18988 MB/s Sep 12 22:53:15.884670 kernel: raid6: avx2x2 gen() 18238 MB/s Sep 12 22:53:15.902179 kernel: raid6: avx2x1 gen() 17165 MB/s Sep 12 22:53:15.902270 kernel: raid6: using algorithm avx2x4 gen() 18988 MB/s Sep 12 22:53:15.920690 kernel: raid6: .... xor() 5912 MB/s, rmw enabled Sep 12 22:53:15.920803 kernel: raid6: using avx2x2 recovery algorithm Sep 12 22:53:15.949691 kernel: xor: automatically using best checksumming function avx Sep 12 22:53:16.134647 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 22:53:16.144669 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 22:53:16.146651 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 22:53:16.190029 systemd-udevd[473]: Using default interface naming scheme 'v255'. Sep 12 22:53:16.198512 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:53:16.202748 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 22:53:16.237682 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Sep 12 22:53:16.271303 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 22:53:16.274519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 22:53:16.364270 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 22:53:16.370749 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 22:53:16.424654 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 22:53:16.424751 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 22:53:16.428652 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 22:53:16.438763 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 22:53:16.451079 kernel: libata version 3.00 loaded. Sep 12 22:53:16.455649 kernel: AES CTR mode by8 optimization enabled Sep 12 22:53:16.459163 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 22:53:16.459202 kernel: GPT:9289727 != 19775487 Sep 12 22:53:16.459217 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 22:53:16.459240 kernel: GPT:9289727 != 19775487 Sep 12 22:53:16.460237 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 22:53:16.460263 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:53:16.473644 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 22:53:16.473932 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 22:53:16.475195 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 12 22:53:16.478912 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 12 22:53:16.479167 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 22:53:16.488818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 22:53:16.493741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:53:16.501434 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 22:53:16.504625 kernel: scsi host0: ahci Sep 12 22:53:16.505540 kernel: scsi host1: ahci Sep 12 22:53:16.507198 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:53:16.509722 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 22:53:16.521007 kernel: scsi host2: ahci Sep 12 22:53:16.521393 kernel: scsi host3: ahci Sep 12 22:53:16.521645 kernel: scsi host4: ahci Sep 12 22:53:16.521859 kernel: scsi host5: ahci Sep 12 22:53:16.523272 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 12 22:53:16.523343 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 12 22:53:16.525537 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 12 22:53:16.525571 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 12 22:53:16.526632 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 12 22:53:16.527647 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 12 22:53:16.537148 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 22:53:16.565480 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 22:53:16.569191 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:53:16.581739 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 22:53:16.604471 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 22:53:16.618205 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 12 22:53:16.622168 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 22:53:16.884651 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 22:53:16.884769 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 22:53:16.885643 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 22:53:16.886650 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 22:53:16.887903 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 22:53:16.887926 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 22:53:16.888664 kernel: ata3.00: applying bridge limits Sep 12 22:53:16.889640 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 22:53:16.890641 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 22:53:16.891647 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 22:53:16.892634 kernel: ata3.00: configured for UDMA/100 Sep 12 22:53:16.894651 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 22:53:17.015273 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 22:53:17.015690 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 22:53:17.041644 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 22:53:17.419671 disk-uuid[637]: Primary Header is updated. Sep 12 22:53:17.419671 disk-uuid[637]: Secondary Entries is updated. Sep 12 22:53:17.419671 disk-uuid[637]: Secondary Header is updated. Sep 12 22:53:17.449124 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:53:17.454660 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:53:17.550643 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 22:53:17.577771 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 22:53:17.617737 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 12 22:53:17.620365 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 22:53:17.623777 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 22:53:17.663941 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 22:53:18.456681 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 22:53:18.457575 disk-uuid[643]: The operation has completed successfully. Sep 12 22:53:18.496092 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 22:53:18.496224 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 22:53:18.531086 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 22:53:18.656629 sh[667]: Success Sep 12 22:53:18.676669 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 22:53:18.676753 kernel: device-mapper: uevent: version 1.0.3 Sep 12 22:53:18.676770 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 22:53:18.688737 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 22:53:18.726935 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 22:53:18.731103 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 22:53:18.747776 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 22:53:18.755696 kernel: BTRFS: device fsid 5d2ab445-1154-4e47-9d7e-ff4b81d84474 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (679) Sep 12 22:53:18.757964 kernel: BTRFS info (device dm-0): first mount of filesystem 5d2ab445-1154-4e47-9d7e-ff4b81d84474 Sep 12 22:53:18.757995 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:53:18.763218 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 22:53:18.763243 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 22:53:18.765061 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 22:53:18.765817 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 22:53:18.767150 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 22:53:18.768211 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 22:53:18.770125 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 22:53:18.800668 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (712) Sep 12 22:53:18.802690 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:53:18.802751 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:53:18.807633 kernel: BTRFS info (device vda6): turning on async discard Sep 12 22:53:18.807687 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 22:53:18.813636 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:53:18.814772 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 22:53:18.817758 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 22:53:18.909065 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 22:53:18.933251 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 22:53:18.993652 systemd-networkd[848]: lo: Link UP Sep 12 22:53:18.993664 systemd-networkd[848]: lo: Gained carrier Sep 12 22:53:18.998492 systemd-networkd[848]: Enumeration completed Sep 12 22:53:18.998964 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 22:53:18.999544 systemd-networkd[848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:53:18.999567 systemd-networkd[848]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 22:53:19.001315 systemd-networkd[848]: eth0: Link UP Sep 12 22:53:19.001794 systemd-networkd[848]: eth0: Gained carrier Sep 12 22:53:19.001815 systemd-networkd[848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:53:19.007333 systemd[1]: Reached target network.target - Network. 
Sep 12 22:53:19.035768 systemd-networkd[848]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 22:53:19.051671 ignition[758]: Ignition 2.22.0 Sep 12 22:53:19.051706 ignition[758]: Stage: fetch-offline Sep 12 22:53:19.051844 ignition[758]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:53:19.051888 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:53:19.052048 ignition[758]: parsed url from cmdline: "" Sep 12 22:53:19.052054 ignition[758]: no config URL provided Sep 12 22:53:19.052065 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 22:53:19.052076 ignition[758]: no config at "/usr/lib/ignition/user.ign" Sep 12 22:53:19.052145 ignition[758]: op(1): [started] loading QEMU firmware config module Sep 12 22:53:19.052152 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 22:53:19.083511 ignition[758]: op(1): [finished] loading QEMU firmware config module Sep 12 22:53:19.123134 ignition[758]: parsing config with SHA512: c3385bcf88ab370970da5785076fd192fb21a555edaf0d30f71dc7134839d633d5a073a803bf9c8d942f53a3da03ffbe5eae5a8fcdcccf92b68af5fdc2414cf1 Sep 12 22:53:19.127102 unknown[758]: fetched base config from "system" Sep 12 22:53:19.127119 unknown[758]: fetched user config from "qemu" Sep 12 22:53:19.127505 ignition[758]: fetch-offline: fetch-offline passed Sep 12 22:53:19.127576 ignition[758]: Ignition finished successfully Sep 12 22:53:19.131054 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 22:53:19.132684 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 22:53:19.133745 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 22:53:19.234625 ignition[861]: Ignition 2.22.0 Sep 12 22:53:19.234637 ignition[861]: Stage: kargs Sep 12 22:53:19.234802 ignition[861]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:53:19.234815 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:53:19.235789 ignition[861]: kargs: kargs passed Sep 12 22:53:19.235837 ignition[861]: Ignition finished successfully Sep 12 22:53:19.240187 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 22:53:19.242456 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 22:53:19.290287 ignition[869]: Ignition 2.22.0 Sep 12 22:53:19.290305 ignition[869]: Stage: disks Sep 12 22:53:19.290516 ignition[869]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:53:19.290533 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 22:53:19.291463 ignition[869]: disks: disks passed Sep 12 22:53:19.295279 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 22:53:19.291523 ignition[869]: Ignition finished successfully Sep 12 22:53:19.297236 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 22:53:19.299464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 22:53:19.301504 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 22:53:19.303830 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 22:53:19.306238 systemd[1]: Reached target basic.target - Basic System. Sep 12 22:53:19.307489 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 22:53:19.347473 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 22:53:19.361792 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 22:53:19.366319 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 12 22:53:19.543650 kernel: EXT4-fs (vda9): mounted filesystem d027afc5-396a-49bf-a5be-60ddd42cb089 r/w with ordered data mode. Quota mode: none.
Sep 12 22:53:19.544492 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 22:53:19.545325 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 22:53:19.550218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 22:53:19.552933 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 22:53:19.556192 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 22:53:19.556268 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 22:53:19.558386 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 22:53:19.567291 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 22:53:19.571781 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 22:53:19.579866 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888)
Sep 12 22:53:19.583201 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 12 22:53:19.583259 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 22:53:19.588066 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 22:53:19.588114 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 22:53:19.590632 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 22:53:19.630877 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 22:53:19.635850 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory
Sep 12 22:53:19.641489 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 22:53:19.646992 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 22:53:19.753238 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 22:53:19.757489 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 22:53:19.759933 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 22:53:19.787362 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 22:53:19.816559 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 12 22:53:19.834822 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 22:53:19.870862 ignition[1002]: INFO : Ignition 2.22.0
Sep 12 22:53:19.870862 ignition[1002]: INFO : Stage: mount
Sep 12 22:53:19.872736 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 22:53:19.872736 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 22:53:19.872736 ignition[1002]: INFO : mount: mount passed
Sep 12 22:53:19.872736 ignition[1002]: INFO : Ignition finished successfully
Sep 12 22:53:19.878786 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 22:53:19.880851 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 22:53:19.911242 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 22:53:19.950639 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Sep 12 22:53:19.953003 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 12 22:53:19.953041 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 22:53:19.955974 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 22:53:19.956003 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 22:53:19.957966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 22:53:19.993571 ignition[1031]: INFO : Ignition 2.22.0
Sep 12 22:53:19.993571 ignition[1031]: INFO : Stage: files
Sep 12 22:53:19.995692 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 22:53:19.995692 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 22:53:19.995692 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 22:53:19.995692 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 22:53:19.995692 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 22:53:20.002470 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 22:53:20.002470 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 22:53:20.002470 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 22:53:20.002470 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 22:53:20.002470 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 22:53:19.999864 unknown[1031]: wrote ssh authorized keys file for user: core
Sep 12 22:53:20.148111 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 22:53:20.703094 systemd-networkd[848]: eth0: Gained IPv6LL
Sep 12 22:53:21.017738 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 22:53:21.068253 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 22:53:21.068253 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 22:53:21.068253 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 22:53:21.068253 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 22:53:21.068253 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 22:53:21.068253 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 22:53:21.068253 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 22:53:21.068253 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 22:53:21.230985 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 22:53:21.262181 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 22:53:21.262181 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 22:53:21.407188 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 22:53:21.407188 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 22:53:21.422217 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 22:53:21.852870 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 22:53:22.601289 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 22:53:22.601289 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 22:53:22.605699 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 22:53:22.692647 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 22:53:22.692647 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 22:53:22.692647 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 22:53:22.698729 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 22:53:22.698729 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 22:53:22.698729 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 22:53:22.705902 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 22:53:22.736555 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 22:53:22.781009 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 22:53:22.783820 ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 22:53:22.783820 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 22:53:22.787306 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 22:53:22.787306 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 22:53:22.787306 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 22:53:22.787306 ignition[1031]: INFO : files: files passed
Sep 12 22:53:22.787306 ignition[1031]: INFO : Ignition finished successfully
Sep 12 22:53:22.794498 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 22:53:22.798329 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 22:53:22.801991 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 22:53:22.816221 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 22:53:22.816432 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 22:53:22.822162 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 22:53:22.825646 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 22:53:22.825646 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 22:53:22.829235 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 22:53:22.831302 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 22:53:22.835387 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 22:53:22.839188 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 22:53:22.993526 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 22:53:22.993711 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 22:53:22.994576 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 22:53:22.997049 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 22:53:22.997412 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 22:53:22.999301 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 22:53:23.037678 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 22:53:23.039827 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 22:53:23.065975 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 22:53:23.066196 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 22:53:23.069603 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 22:53:23.071273 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 22:53:23.071463 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 22:53:23.074722 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 22:53:23.076986 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 22:53:23.079000 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 22:53:23.081050 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 22:53:23.083340 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 22:53:23.084548 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 22:53:23.086851 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 22:53:23.089217 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 22:53:23.096588 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 22:53:23.098817 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 22:53:23.100849 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 22:53:23.101874 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 22:53:23.102043 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 22:53:23.106397 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 22:53:23.107679 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 22:53:23.109904 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 22:53:23.111065 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 22:53:23.113267 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 22:53:23.113430 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 22:53:23.115139 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 22:53:23.115302 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 22:53:23.117553 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 22:53:23.120366 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 22:53:23.124705 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 22:53:23.127644 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 22:53:23.127850 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 22:53:23.129731 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 22:53:23.129871 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 22:53:23.131640 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 22:53:23.131766 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 22:53:23.133547 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 22:53:23.133738 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 22:53:23.134534 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 22:53:23.134687 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 22:53:23.142861 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 22:53:23.143840 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 22:53:23.144007 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 22:53:23.146764 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 22:53:23.150215 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 22:53:23.150376 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 22:53:23.151751 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 22:53:23.151877 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 22:53:23.160245 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 22:53:23.165887 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 22:53:23.190519 ignition[1086]: INFO : Ignition 2.22.0
Sep 12 22:53:23.192219 ignition[1086]: INFO : Stage: umount
Sep 12 22:53:23.192219 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 22:53:23.192219 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 22:53:23.192088 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 22:53:23.196303 ignition[1086]: INFO : umount: umount passed
Sep 12 22:53:23.196303 ignition[1086]: INFO : Ignition finished successfully
Sep 12 22:53:23.195953 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 22:53:23.196096 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 22:53:23.197734 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 22:53:23.197878 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 22:53:23.198853 systemd[1]: Stopped target network.target - Network.
Sep 12 22:53:23.213794 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 22:53:23.213937 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 22:53:23.215787 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 22:53:23.215847 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 22:53:23.216750 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 22:53:23.216806 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 22:53:23.217098 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 22:53:23.217144 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 22:53:23.221270 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 22:53:23.221324 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 22:53:23.221764 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 22:53:23.222206 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 22:53:23.234028 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 22:53:23.234242 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 22:53:23.239685 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 22:53:23.240107 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 22:53:23.240173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 22:53:23.245498 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 22:53:23.245889 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 22:53:23.246035 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 22:53:23.250057 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 22:53:23.250729 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 22:53:23.251393 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 22:53:23.251457 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 22:53:23.252978 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 22:53:23.257057 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 22:53:23.257158 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 22:53:23.257530 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 22:53:23.257788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 22:53:23.272451 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 22:53:23.272589 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 22:53:23.272973 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 22:53:23.277847 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 22:53:23.298122 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 22:53:23.298872 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 22:53:23.300144 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 22:53:23.300212 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 22:53:23.304092 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 22:53:23.304141 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 22:53:23.306738 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 22:53:23.306818 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 22:53:23.310446 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 22:53:23.310521 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 22:53:23.316296 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 22:53:23.316358 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 22:53:23.321997 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 22:53:23.322064 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 22:53:23.322118 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 22:53:23.329198 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 22:53:23.330595 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 22:53:23.333755 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 22:53:23.333831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 22:53:23.338415 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 22:53:23.341857 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 22:53:23.350685 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 22:53:23.350840 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 22:53:23.396505 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 22:53:23.400585 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 22:53:23.434467 systemd[1]: Switching root.
Sep 12 22:53:23.472472 systemd-journald[220]: Journal stopped
Sep 12 22:53:25.485222 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 12 22:53:25.485315 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 22:53:25.485346 kernel: SELinux: policy capability open_perms=1
Sep 12 22:53:25.485361 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 22:53:25.485383 kernel: SELinux: policy capability always_check_network=0
Sep 12 22:53:25.485399 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 22:53:25.485414 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 22:53:25.485436 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 22:53:25.485451 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 22:53:25.485466 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 22:53:25.485484 kernel: audit: type=1403 audit(1757717604.400:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 22:53:25.485501 systemd[1]: Successfully loaded SELinux policy in 68.740ms.
Sep 12 22:53:25.485533 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.278ms.
Sep 12 22:53:25.485552 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 22:53:25.485569 systemd[1]: Detected virtualization kvm.
Sep 12 22:53:25.485585 systemd[1]: Detected architecture x86-64.
Sep 12 22:53:25.485601 systemd[1]: Detected first boot.
Sep 12 22:53:25.485862 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 22:53:25.485880 zram_generator::config[1131]: No configuration found.
Sep 12 22:53:25.485907 kernel: Guest personality initialized and is inactive
Sep 12 22:53:25.485929 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 22:53:25.485944 kernel: Initialized host personality
Sep 12 22:53:25.485965 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 22:53:25.485981 systemd[1]: Populated /etc with preset unit settings.
Sep 12 22:53:25.485999 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 22:53:25.486016 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 22:53:25.486032 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 22:53:25.486056 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 22:53:25.486073 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 22:53:25.486090 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 22:53:25.486106 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 22:53:25.486122 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 22:53:25.486138 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 22:53:25.486154 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 22:53:25.486170 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 22:53:25.486197 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 22:53:25.486219 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 22:53:25.486236 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 22:53:25.486252 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 22:53:25.486269 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 22:53:25.486285 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 22:53:25.486302 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 22:53:25.486318 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 22:53:25.486337 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 22:53:25.486354 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 22:53:25.486370 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 22:53:25.486387 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 22:53:25.486403 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 22:53:25.486418 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 22:53:25.486434 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 22:53:25.486451 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 22:53:25.486467 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 22:53:25.486483 systemd[1]: Reached target swap.target - Swaps.
Sep 12 22:53:25.486502 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 22:53:25.486518 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 22:53:25.486535 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 22:53:25.486550 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 22:53:25.486568 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 22:53:25.486585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 22:53:25.486600 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 22:53:25.486648 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 22:53:25.486665 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 22:53:25.486685 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 22:53:25.486701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:53:25.486717 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 22:53:25.486735 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 22:53:25.486752 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 22:53:25.486769 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 22:53:25.486785 systemd[1]: Reached target machines.target - Containers.
Sep 12 22:53:25.486801 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 22:53:25.486821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:53:25.486838 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 22:53:25.486854 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 22:53:25.486870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 22:53:25.486886 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 22:53:25.486902 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 22:53:25.486918 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 22:53:25.486934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 22:53:25.486954 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 22:53:25.486973 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 22:53:25.486990 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 22:53:25.487006 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 22:53:25.487022 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 22:53:25.487040 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:53:25.487056 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 22:53:25.487073 kernel: fuse: init (API version 7.41)
Sep 12 22:53:25.487088 kernel: loop: module loaded
Sep 12 22:53:25.487112 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 22:53:25.487129 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 22:53:25.487145 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 22:53:25.487162 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 22:53:25.487178 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 22:53:25.487209 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 22:53:25.487225 systemd[1]: Stopped verity-setup.service.
Sep 12 22:53:25.487242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:53:25.487258 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 22:53:25.487273 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 22:53:25.487288 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 22:53:25.487302 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 22:53:25.487321 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 22:53:25.487338 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 22:53:25.487396 systemd-journald[1202]: Collecting audit messages is disabled.
Sep 12 22:53:25.487426 kernel: ACPI: bus type drm_connector registered
Sep 12 22:53:25.487448 systemd-journald[1202]: Journal started
Sep 12 22:53:25.487479 systemd-journald[1202]: Runtime Journal (/run/log/journal/7eb62e37354e44179b507dbe611197fa) is 6M, max 48.4M, 42.4M free.
Sep 12 22:53:25.201740 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 22:53:25.223483 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 22:53:25.224782 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 22:53:25.492149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 22:53:25.497066 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 22:53:25.498804 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 22:53:25.500588 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 22:53:25.500964 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 22:53:25.502714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 22:53:25.503001 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 22:53:25.504957 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 22:53:25.505248 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 22:53:25.506986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 22:53:25.507274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 22:53:25.509419 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 22:53:25.509942 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 22:53:25.511977 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 22:53:25.512246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 22:53:25.514142 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 22:53:25.516121 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 22:53:25.518110 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 22:53:25.520231 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 22:53:25.536995 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 22:53:25.539821 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 22:53:25.542274 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 22:53:25.543500 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 22:53:25.543531 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 22:53:25.545647 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 22:53:25.554791 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 22:53:25.556238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:53:25.559444 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 22:53:25.561992 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 22:53:25.563354 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 22:53:25.566507 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 22:53:25.567885 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 22:53:25.575446 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 22:53:25.580441 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 22:53:25.589867 systemd-journald[1202]: Time spent on flushing to /var/log/journal/7eb62e37354e44179b507dbe611197fa is 19.491ms for 1065 entries.
Sep 12 22:53:25.589867 systemd-journald[1202]: System Journal (/var/log/journal/7eb62e37354e44179b507dbe611197fa) is 8M, max 195.6M, 187.6M free.
Sep 12 22:53:25.661559 systemd-journald[1202]: Received client request to flush runtime journal.
Sep 12 22:53:25.661690 kernel: loop0: detected capacity change from 0 to 128016
Sep 12 22:53:25.582837 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 22:53:25.585864 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 22:53:25.587271 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 22:53:25.609144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 22:53:25.618551 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 22:53:25.622310 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 22:53:25.626898 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 22:53:25.631836 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:53:25.663854 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 22:53:25.671667 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 22:53:25.679783 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 22:53:25.690260 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 22:53:25.702509 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 22:53:25.707261 kernel: loop1: detected capacity change from 0 to 110984 Sep 12 22:53:25.727736 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Sep 12 22:53:25.728297 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Sep 12 22:53:25.734904 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:53:25.743660 kernel: loop2: detected capacity change from 0 to 221472 Sep 12 22:53:25.858862 kernel: loop3: detected capacity change from 0 to 128016 Sep 12 22:53:25.902640 kernel: loop4: detected capacity change from 0 to 110984 Sep 12 22:53:25.917676 kernel: loop5: detected capacity change from 0 to 221472 Sep 12 22:53:25.931273 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 22:53:25.932284 (sd-merge)[1274]: Merged extensions into '/usr'. Sep 12 22:53:25.939756 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 22:53:25.939777 systemd[1]: Reloading... Sep 12 22:53:26.198664 zram_generator::config[1300]: No configuration found. 
Sep 12 22:53:26.292391 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 22:53:26.487238 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 22:53:26.487867 systemd[1]: Reloading finished in 547 ms.
Sep 12 22:53:26.525395 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 22:53:26.527942 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 22:53:26.550012 systemd[1]: Starting ensure-sysext.service...
Sep 12 22:53:26.568657 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 22:53:26.619720 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 22:53:26.620369 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 22:53:26.620856 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 22:53:26.621329 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 22:53:26.622786 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 22:53:26.623222 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 12 22:53:26.623335 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 12 22:53:26.629634 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)...
Sep 12 22:53:26.629654 systemd[1]: Reloading...
Sep 12 22:53:26.634777 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 22:53:26.634901 systemd-tmpfiles[1338]: Skipping /boot
Sep 12 22:53:26.652009 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 22:53:26.652024 systemd-tmpfiles[1338]: Skipping /boot
Sep 12 22:53:26.750729 zram_generator::config[1368]: No configuration found.
Sep 12 22:53:27.010768 systemd[1]: Reloading finished in 380 ms.
Sep 12 22:53:27.041408 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 22:53:27.075781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 22:53:27.091596 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 22:53:27.102250 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 22:53:27.106534 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 22:53:27.113376 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 22:53:27.118766 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 22:53:27.138327 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 22:53:27.149563 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:53:27.149961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:53:27.155856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 22:53:27.179177 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 22:53:27.183682 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 22:53:27.185348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:53:27.185494 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:53:27.185635 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:53:27.196933 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 22:53:27.204783 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:53:27.205033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:53:27.205275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:53:27.205411 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:53:27.205539 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:53:27.210675 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 22:53:27.223886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 22:53:27.225330 systemd-udevd[1408]: Using default interface naming scheme 'v255'.
Sep 12 22:53:27.225393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 22:53:27.229951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 22:53:27.230404 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 22:53:27.233170 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 22:53:27.233493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 22:53:27.246270 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:53:27.246831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 22:53:27.251160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 22:53:27.295778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 22:53:27.299919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 22:53:27.307998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 22:53:27.310446 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 22:53:27.310710 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 22:53:27.310858 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 22:53:27.313318 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 22:53:27.317223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 22:53:27.317950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 22:53:27.321237 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 22:53:27.321529 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 22:53:27.323498 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 22:53:27.326275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 22:53:27.326528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 22:53:27.329359 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 22:53:27.330283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 22:53:27.337003 systemd[1]: Finished ensure-sysext.service.
Sep 12 22:53:27.346407 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 22:53:27.346508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 22:53:27.349299 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 22:53:27.367794 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 22:53:27.369880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 22:53:27.375339 augenrules[1453]: No rules
Sep 12 22:53:27.377711 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 22:53:27.378020 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 22:53:27.389534 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 22:53:27.420637 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 22:53:27.441833 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 22:53:27.452381 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 22:53:27.483484 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 22:53:27.586233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 22:53:27.752651 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 22:53:27.764526 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 22:53:27.806638 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 22:53:27.810532 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 22:53:27.812524 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 22:53:27.825917 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 12 22:53:27.826275 kernel: ACPI: button: Power Button [PWRF]
Sep 12 22:53:27.826294 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 22:53:27.828677 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 22:53:27.843311 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 22:53:27.962381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 22:53:27.989691 systemd-resolved[1407]: Positive Trust Anchors:
Sep 12 22:53:27.989715 systemd-resolved[1407]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 22:53:27.989761 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 22:53:27.990670 systemd-networkd[1471]: lo: Link UP
Sep 12 22:53:27.990676 systemd-networkd[1471]: lo: Gained carrier
Sep 12 22:53:27.993715 systemd-networkd[1471]: Enumeration completed
Sep 12 22:53:27.994014 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 22:53:27.994569 systemd-networkd[1471]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 22:53:27.994660 systemd-networkd[1471]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 22:53:27.995495 systemd-networkd[1471]: eth0: Link UP
Sep 12 22:53:27.996555 systemd-networkd[1471]: eth0: Gained carrier
Sep 12 22:53:27.996574 systemd-networkd[1471]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 22:53:27.996646 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 22:53:27.996767 systemd-resolved[1407]: Defaulting to hostname 'linux'.
Sep 12 22:53:27.996936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 22:53:28.001801 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 22:53:28.009723 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 22:53:28.010818 systemd-networkd[1471]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 22:53:28.014031 systemd-timesyncd[1448]: Network configuration changed, trying to establish connection.
Sep 12 22:53:28.014478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 22:53:29.237644 kernel: kvm_amd: TSC scaling supported
Sep 12 22:53:29.237752 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 22:53:29.237789 kernel: kvm_amd: Nested Paging enabled
Sep 12 22:53:29.237810 kernel: kvm_amd: LBR virtualization supported
Sep 12 22:53:29.237082 systemd-timesyncd[1448]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 22:53:29.237152 systemd-timesyncd[1448]: Initial clock synchronization to Fri 2025-09-12 22:53:29.236925 UTC.
Sep 12 22:53:29.237213 systemd-resolved[1407]: Clock change detected. Flushing caches.
Sep 12 22:53:29.237942 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 22:53:29.312707 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 22:53:29.312887 kernel: kvm_amd: Virtual GIF supported
Sep 12 22:53:29.327071 systemd[1]: Reached target network.target - Network.
Sep 12 22:53:29.328554 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 22:53:29.344857 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 22:53:29.360299 kernel: EDAC MC: Ver: 3.0.0
Sep 12 22:53:29.401424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 22:53:29.423256 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 22:53:29.424661 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 22:53:29.425988 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 22:53:29.427517 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 22:53:29.428988 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 22:53:29.430239 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 22:53:29.431532 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 22:53:29.432821 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 22:53:29.432872 systemd[1]: Reached target paths.target - Path Units.
Sep 12 22:53:29.433823 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 22:53:29.436007 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 22:53:29.438986 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 22:53:29.442938 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 22:53:29.444432 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 22:53:29.445822 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 22:53:29.450412 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 22:53:29.452043 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 22:53:29.473101 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 22:53:29.475194 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 22:53:29.476213 systemd[1]: Reached target basic.target - Basic System.
Sep 12 22:53:29.477566 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 22:53:29.477613 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 22:53:29.478938 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 22:53:29.481678 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 22:53:29.485100 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 22:53:29.487822 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 22:53:29.491956 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 22:53:29.494523 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 22:53:29.497972 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 22:53:29.503210 jq[1537]: false
Sep 12 22:53:29.519033 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 22:53:29.537981 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing passwd entry cache
Sep 12 22:53:29.525713 oslogin_cache_refresh[1539]: Refreshing passwd entry cache
Sep 12 22:53:29.543486 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 22:53:29.546566 oslogin_cache_refresh[1539]: Failure getting users, quitting
Sep 12 22:53:29.549857 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting users, quitting
Sep 12 22:53:29.549857 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 22:53:29.549857 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Refreshing group entry cache
Sep 12 22:53:29.546597 oslogin_cache_refresh[1539]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 22:53:29.546694 oslogin_cache_refresh[1539]: Refreshing group entry cache
Sep 12 22:53:29.550348 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 22:53:29.554343 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 22:53:29.558597 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Failure getting groups, quitting
Sep 12 22:53:29.558597 google_oslogin_nss_cache[1539]: oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 22:53:29.556980 oslogin_cache_refresh[1539]: Failure getting groups, quitting
Sep 12 22:53:29.560955 extend-filesystems[1538]: Found /dev/vda6
Sep 12 22:53:29.556998 oslogin_cache_refresh[1539]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 22:53:29.567587 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 22:53:29.571641 extend-filesystems[1538]: Found /dev/vda9
Sep 12 22:53:29.574588 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 22:53:29.576148 extend-filesystems[1538]: Checking size of /dev/vda9
Sep 12 22:53:29.575894 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 22:53:29.578411 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 22:53:29.598631 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 22:53:29.608530 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 22:53:29.610921 extend-filesystems[1538]: Resized partition /dev/vda9
Sep 12 22:53:29.611588 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 22:53:29.611946 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 22:53:29.612379 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 22:53:29.612695 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 22:53:29.614689 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 22:53:29.615404 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 22:53:29.624622 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 22:53:29.626415 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 22:53:29.630026 extend-filesystems[1565]: resize2fs 1.47.3 (8-Jul-2025)
Sep 12 22:53:29.635596 update_engine[1553]: I20250912 22:53:29.628944 1553 main.cc:92] Flatcar Update Engine starting
Sep 12 22:53:29.635982 jq[1560]: true
Sep 12 22:53:29.658560 (ntainerd)[1570]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 22:53:29.664428 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 22:53:29.669159 jq[1571]: true
Sep 12 22:53:29.709634 tar[1566]: linux-amd64/helm
Sep 12 22:53:29.734055 systemd-logind[1550]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 12 22:53:29.734589 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 22:53:29.734935 systemd-logind[1550]: New seat seat0.
Sep 12 22:53:29.738799 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 22:53:29.780250 dbus-daemon[1535]: [system] SELinux support is enabled
Sep 12 22:53:29.795363 dbus-daemon[1535]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 22:53:29.801238 update_engine[1553]: I20250912 22:53:29.788044 1553 update_check_scheduler.cc:74] Next update check in 7m26s
Sep 12 22:53:29.781806 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 22:53:29.793207 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 22:53:29.793240 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 22:53:29.795646 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 22:53:29.795666 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 22:53:29.799179 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 22:53:29.805192 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 22:53:29.805955 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 22:53:29.834325 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 22:53:29.857038 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 22:53:29.886569 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 22:53:29.907187 locksmithd[1597]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 22:53:29.912495 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 22:53:29.912836 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 22:53:29.921628 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 22:53:29.924915 extend-filesystems[1565]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 22:53:29.924915 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 22:53:29.924915 extend-filesystems[1565]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 22:53:29.943719 extend-filesystems[1538]: Resized filesystem in /dev/vda9
Sep 12 22:53:29.929952 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 22:53:29.946682 bash[1596]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 22:53:29.930339 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 22:53:29.936240 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 22:53:29.948030 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 22:53:29.962536 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 22:53:29.970015 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 22:53:29.974838 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 22:53:29.980614 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 22:53:30.188958 containerd[1570]: time="2025-09-12T22:53:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 22:53:30.190043 containerd[1570]: time="2025-09-12T22:53:30.189987344Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 22:53:30.213298 containerd[1570]: time="2025-09-12T22:53:30.213021622Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.996µs"
Sep 12 22:53:30.213298 containerd[1570]: time="2025-09-12T22:53:30.213096172Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 22:53:30.213298 containerd[1570]: time="2025-09-12T22:53:30.213127160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 22:53:30.213520 containerd[1570]: time="2025-09-12T22:53:30.213503706Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 22:53:30.213553 containerd[1570]: time="2025-09-12T22:53:30.213527410Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 22:53:30.213611 containerd[1570]: time="2025-09-12T22:53:30.213583546Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 22:53:30.213751 containerd[1570]: time="2025-09-12T22:53:30.213702729Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 22:53:30.213751 containerd[1570]: time="2025-09-12T22:53:30.213736032Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 22:53:30.214344 containerd[1570]: time="2025-09-12T22:53:30.214306972Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 22:53:30.214344 containerd[1570]: time="2025-09-12T22:53:30.214329364Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 22:53:30.214417 containerd[1570]: time="2025-09-12T22:53:30.214344823Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 22:53:30.214417 containerd[1570]: time="2025-09-12T22:53:30.214357126Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 22:53:30.214553 containerd[1570]: time="2025-09-12T22:53:30.214521064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 22:53:30.214987 containerd[1570]: time="2025-09-12T22:53:30.214928468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 22:53:30.215035 containerd[1570]: time="2025-09-12T22:53:30.214985645Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 22:53:30.215035 containerd[1570]: time="2025-09-12T22:53:30.215007426Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 22:53:30.215113 containerd[1570]: time="2025-09-12T22:53:30.215083749Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 22:53:30.216048 containerd[1570]: time="2025-09-12T22:53:30.215716496Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 22:53:30.216048 containerd[1570]: time="2025-09-12T22:53:30.215836561Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 22:53:30.520031 containerd[1570]: time="2025-09-12T22:53:30.518008631Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 22:53:30.520031 containerd[1570]: time="2025-09-12T22:53:30.519905789Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.520534288Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.520567871Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.520591044Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.520607385Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.520628966Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.520651808Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.520670794Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 22:53:30.522805 containerd[1570]:
time="2025-09-12T22:53:30.520717281Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.520746165Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.520774028Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.521097985Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.521142018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.521169900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 22:53:30.522805 containerd[1570]: time="2025-09-12T22:53:30.521190479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 22:53:30.523628 containerd[1570]: time="2025-09-12T22:53:30.521210446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 22:53:30.523628 containerd[1570]: time="2025-09-12T22:53:30.521224773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 22:53:30.523628 containerd[1570]: time="2025-09-12T22:53:30.521247295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 22:53:30.523628 containerd[1570]: time="2025-09-12T22:53:30.521302038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 22:53:30.523628 containerd[1570]: time="2025-09-12T22:53:30.521320382Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 22:53:30.523628 containerd[1570]: time="2025-09-12T22:53:30.521338105Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 22:53:30.523628 containerd[1570]: time="2025-09-12T22:53:30.521361499Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 22:53:30.523628 containerd[1570]: time="2025-09-12T22:53:30.521493557Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 22:53:30.523628 containerd[1570]: time="2025-09-12T22:53:30.521543280Z" level=info msg="Start snapshots syncer" Sep 12 22:53:30.523628 containerd[1570]: time="2025-09-12T22:53:30.521581021Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 22:53:30.523924 containerd[1570]: time="2025-09-12T22:53:30.522048908Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 22:53:30.523924 containerd[1570]: time="2025-09-12T22:53:30.522132696Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522286544Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522461081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522504202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522518559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522530191Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522542905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522554787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522568523Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522602897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522631501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522650196Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 22:53:30.524130 containerd[1570]: time="2025-09-12T22:53:30.522700891Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 22:53:30.524905 containerd[1570]: time="2025-09-12T22:53:30.522722171Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 22:53:30.524905 containerd[1570]: time="2025-09-12T22:53:30.524756316Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 22:53:30.525767 containerd[1570]: time="2025-09-12T22:53:30.525046500Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 22:53:30.525767 containerd[1570]: time="2025-09-12T22:53:30.525074623Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 22:53:30.525767 containerd[1570]: time="2025-09-12T22:53:30.525095031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 22:53:30.525767 containerd[1570]: time="2025-09-12T22:53:30.525110139Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 22:53:30.525767 containerd[1570]: time="2025-09-12T22:53:30.525158330Z" level=info msg="runtime interface created" Sep 12 22:53:30.525767 containerd[1570]: time="2025-09-12T22:53:30.525174390Z" level=info msg="created NRI interface" Sep 12 22:53:30.525767 containerd[1570]: time="2025-09-12T22:53:30.525187625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 22:53:30.525767 containerd[1570]: time="2025-09-12T22:53:30.525207241Z" level=info msg="Connect containerd service" Sep 12 22:53:30.525767 containerd[1570]: time="2025-09-12T22:53:30.525257065Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 22:53:30.529014 
containerd[1570]: time="2025-09-12T22:53:30.528923040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 22:53:30.534219 systemd-networkd[1471]: eth0: Gained IPv6LL Sep 12 22:53:30.543481 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 22:53:30.553900 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 22:53:30.562624 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 22:53:30.567412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:53:30.584726 tar[1566]: linux-amd64/LICENSE Sep 12 22:53:30.584726 tar[1566]: linux-amd64/README.md Sep 12 22:53:30.579255 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 22:53:30.662310 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 22:53:30.682597 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 22:53:30.684810 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 22:53:30.685103 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 22:53:30.689673 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 22:53:30.787593 containerd[1570]: time="2025-09-12T22:53:30.787465339Z" level=info msg="Start subscribing containerd event" Sep 12 22:53:30.787798 containerd[1570]: time="2025-09-12T22:53:30.787762877Z" level=info msg="Start recovering state" Sep 12 22:53:30.787982 containerd[1570]: time="2025-09-12T22:53:30.787829161Z" level=info msg=serving... 
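The containerd startup above skips several plugins for environment-specific reasons (no btrfs root, devmapper unconfigured, no tracing endpoint). A small sketch that pulls the plugin id and skip reason out of such journal lines; the sample entries are abbreviated copies of lines from this boot log, and the field layout assumed is containerd's logfmt-style `msg=... error=... id=...` output:

```python
import re

# Sample "skip loading plugin" lines, shortened from the containerd log above.
LINES = [
    'level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile',
    'level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper',
    'level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp',
]

SKIP_RE = re.compile(r'msg="skip loading plugin" error="(?P<why>[^"]*)" id=(?P<id>\S+)')

def skipped_plugins(lines):
    """Map plugin id -> reason for every 'skip loading plugin' line."""
    out = {}
    for line in lines:
        m = SKIP_RE.search(line)
        if m:
            out[m.group("id")] = m.group("why")
    return out

for plugin_id, reason in skipped_plugins(LINES).items():
    print(plugin_id, "->", reason)
```

Skips like these are informational: containerd probes each snapshotter and tracing backend at startup and disables the ones the host cannot support, which is why only overlayfs survives on this ext4 root.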
address=/run/containerd/containerd.sock.ttrpc Sep 12 22:53:30.788020 containerd[1570]: time="2025-09-12T22:53:30.787961149Z" level=info msg="Start event monitor" Sep 12 22:53:30.788064 containerd[1570]: time="2025-09-12T22:53:30.788049875Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 22:53:30.788087 containerd[1570]: time="2025-09-12T22:53:30.788050677Z" level=info msg="Start cni network conf syncer for default" Sep 12 22:53:30.788107 containerd[1570]: time="2025-09-12T22:53:30.788092986Z" level=info msg="Start streaming server" Sep 12 22:53:30.788127 containerd[1570]: time="2025-09-12T22:53:30.788113034Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 22:53:30.788146 containerd[1570]: time="2025-09-12T22:53:30.788124024Z" level=info msg="runtime interface starting up..." Sep 12 22:53:30.788146 containerd[1570]: time="2025-09-12T22:53:30.788136438Z" level=info msg="starting plugins..." Sep 12 22:53:30.788186 containerd[1570]: time="2025-09-12T22:53:30.788173798Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 22:53:30.788703 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 22:53:30.788927 containerd[1570]: time="2025-09-12T22:53:30.788898437Z" level=info msg="containerd successfully booted in 0.602828s" Sep 12 22:53:31.304674 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 22:53:31.309172 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:50378.service - OpenSSH per-connection server daemon (10.0.0.1:50378). Sep 12 22:53:31.409218 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 50378 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:53:31.411355 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:53:31.418625 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 12 22:53:31.421340 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 22:53:31.430799 systemd-logind[1550]: New session 1 of user core. Sep 12 22:53:31.453156 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 22:53:31.462663 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 22:53:31.502407 (systemd)[1670]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 22:53:31.507974 systemd-logind[1550]: New session c1 of user core. Sep 12 22:53:31.689745 systemd[1670]: Queued start job for default target default.target. Sep 12 22:53:31.699735 systemd[1670]: Created slice app.slice - User Application Slice. Sep 12 22:53:31.699777 systemd[1670]: Reached target paths.target - Paths. Sep 12 22:53:31.699854 systemd[1670]: Reached target timers.target - Timers. Sep 12 22:53:31.701793 systemd[1670]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 22:53:31.717553 systemd[1670]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 22:53:31.717778 systemd[1670]: Reached target sockets.target - Sockets. Sep 12 22:53:31.717841 systemd[1670]: Reached target basic.target - Basic System. Sep 12 22:53:31.717899 systemd[1670]: Reached target default.target - Main User Target. Sep 12 22:53:31.717955 systemd[1670]: Startup finished in 198ms. Sep 12 22:53:31.718206 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 22:53:31.734661 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 22:53:31.808973 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:50380.service - OpenSSH per-connection server daemon (10.0.0.1:50380). 
Sep 12 22:53:31.883053 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 50380 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:53:31.885550 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:53:31.891403 systemd-logind[1550]: New session 2 of user core. Sep 12 22:53:31.902559 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 22:53:31.962238 sshd[1684]: Connection closed by 10.0.0.1 port 50380 Sep 12 22:53:31.962820 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Sep 12 22:53:31.974771 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:50380.service: Deactivated successfully. Sep 12 22:53:31.976532 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 22:53:31.977365 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit. Sep 12 22:53:31.987284 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:50388.service - OpenSSH per-connection server daemon (10.0.0.1:50388). Sep 12 22:53:31.993005 systemd-logind[1550]: Removed session 2. Sep 12 22:53:32.230003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:53:32.232129 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 22:53:32.233875 systemd[1]: Startup finished in 4.267s (kernel) + 9.541s (initrd) + 6.680s (userspace) = 20.489s. Sep 12 22:53:32.237837 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:53:32.328871 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 50388 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:53:32.330581 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:53:32.335672 systemd-logind[1550]: New session 3 of user core. Sep 12 22:53:32.344534 systemd[1]: Started session-3.scope - Session 3 of User core. 
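systemd's boot summary above reports "4.267s (kernel) + 9.541s (initrd) + 6.680s (userspace) = 20.489s". A quick sketch parsing that line and checking the arithmetic; note each component is rounded to the millisecond independently, so the sum of the printed components can differ from the printed total by about a millisecond:

```python
import re

# The "Startup finished" summary line, copied verbatim from the log above.
LINE = ("Startup finished in 4.267s (kernel) + 9.541s (initrd) "
        "+ 6.680s (userspace) = 20.489s.")

# Pull out every "<number>s" duration; the last one is the reported total.
parts = [float(x) for x in re.findall(r"([\d.]+)s", LINE)]
*components, total = parts

# Components sum to ~20.488 s while systemd reports 20.489 s: per-component
# rounding residue, not an inconsistency in the log.
print(f"sum of components: {sum(components):.3f}s, reported total: {total}s")
```

The same breakdown is available interactively via `systemd-analyze` on a running machine.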
Sep 12 22:53:32.401738 sshd[1703]: Connection closed by 10.0.0.1 port 50388 Sep 12 22:53:32.402140 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Sep 12 22:53:32.407420 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:50388.service: Deactivated successfully. Sep 12 22:53:32.409878 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 22:53:32.410890 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit. Sep 12 22:53:32.412377 systemd-logind[1550]: Removed session 3. Sep 12 22:53:32.987816 kubelet[1698]: E0912 22:53:32.987715 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:53:32.992742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:53:32.993092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:53:32.993777 systemd[1]: kubelet.service: Consumed 1.562s CPU time, 266.4M memory peak. Sep 12 22:53:42.420545 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:55894.service - OpenSSH per-connection server daemon (10.0.0.1:55894). Sep 12 22:53:42.489995 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 55894 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:53:42.492412 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:53:42.501284 systemd-logind[1550]: New session 4 of user core. Sep 12 22:53:42.519683 systemd[1]: Started session-4.scope - Session 4 of User core. 
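The kubelet crash above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is expected on a freshly provisioned node: on a kubeadm-based setup that file is written only once `kubeadm init` or `kubeadm join` has run, and the unit keeps restarting until then. A minimal pre-flight check mirroring what the kubelet trips over (the path is taken from the error above; the helper name is illustrative):

```python
import os

# Path the kubelet fails to open in the log above; written by kubeadm
# during `kubeadm init` / `kubeadm join`.
KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def kubelet_config_present(path: str = KUBELET_CONFIG) -> bool:
    """Return True once kubeadm has written the kubelet config file."""
    return os.path.isfile(path)

if not kubelet_config_present():
    print(f"{KUBELET_CONFIG} missing - node not yet joined via kubeadm")
```

This also explains the repeated "Scheduled restart job" entries later in the log: systemd keeps retrying the unit, and each attempt fails identically until the node is joined to a cluster.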
Sep 12 22:53:42.578143 sshd[1719]: Connection closed by 10.0.0.1 port 55894 Sep 12 22:53:42.578740 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Sep 12 22:53:42.591306 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:55894.service: Deactivated successfully. Sep 12 22:53:42.593349 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 22:53:42.594046 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit. Sep 12 22:53:42.596726 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:55898.service - OpenSSH per-connection server daemon (10.0.0.1:55898). Sep 12 22:53:42.597559 systemd-logind[1550]: Removed session 4. Sep 12 22:53:42.653739 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 55898 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:53:42.655534 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:53:42.661108 systemd-logind[1550]: New session 5 of user core. Sep 12 22:53:42.669495 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 22:53:42.720409 sshd[1729]: Connection closed by 10.0.0.1 port 55898 Sep 12 22:53:42.720554 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Sep 12 22:53:42.730294 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:55898.service: Deactivated successfully. Sep 12 22:53:42.732476 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 22:53:42.733396 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit. Sep 12 22:53:42.736713 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:55906.service - OpenSSH per-connection server daemon (10.0.0.1:55906). Sep 12 22:53:42.737495 systemd-logind[1550]: Removed session 5. 
Sep 12 22:53:42.802019 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 55906 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:53:42.803930 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:53:42.808613 systemd-logind[1550]: New session 6 of user core. Sep 12 22:53:42.819431 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 22:53:42.875434 sshd[1738]: Connection closed by 10.0.0.1 port 55906 Sep 12 22:53:42.875886 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Sep 12 22:53:42.886581 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:55906.service: Deactivated successfully. Sep 12 22:53:42.889072 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 22:53:42.890053 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit. Sep 12 22:53:42.893466 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:55908.service - OpenSSH per-connection server daemon (10.0.0.1:55908). Sep 12 22:53:42.894238 systemd-logind[1550]: Removed session 6. Sep 12 22:53:42.960652 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 55908 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:53:42.962638 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:53:42.967740 systemd-logind[1550]: New session 7 of user core. Sep 12 22:53:42.977528 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 22:53:43.017641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 22:53:43.019461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 22:53:43.046790 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 22:53:43.047221 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:53:43.071440 sudo[1749]: pam_unix(sudo:session): session closed for user root Sep 12 22:53:43.073607 sshd[1747]: Connection closed by 10.0.0.1 port 55908 Sep 12 22:53:43.074097 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Sep 12 22:53:43.084093 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:55908.service: Deactivated successfully. Sep 12 22:53:43.085987 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 22:53:43.086779 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Sep 12 22:53:43.089979 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:55938.service - OpenSSH per-connection server daemon (10.0.0.1:55938). Sep 12 22:53:43.091133 systemd-logind[1550]: Removed session 7. Sep 12 22:53:43.147832 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 55938 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:53:43.149783 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:53:43.157234 systemd-logind[1550]: New session 8 of user core. Sep 12 22:53:43.171622 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 22:53:43.229868 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 22:53:43.230294 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:53:43.327176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 22:53:43.346939 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:53:43.567610 kubelet[1770]: E0912 22:53:43.567469 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:53:43.574700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:53:43.574913 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:53:43.575332 systemd[1]: kubelet.service: Consumed 316ms CPU time, 111.3M memory peak. Sep 12 22:53:43.585115 sudo[1763]: pam_unix(sudo:session): session closed for user root Sep 12 22:53:43.593361 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 22:53:43.593675 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:53:43.605869 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 22:53:43.653056 augenrules[1798]: No rules Sep 12 22:53:43.655232 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 22:53:43.655575 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 22:53:43.656991 sudo[1762]: pam_unix(sudo:session): session closed for user root Sep 12 22:53:43.658992 sshd[1761]: Connection closed by 10.0.0.1 port 55938 Sep 12 22:53:43.659478 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Sep 12 22:53:43.669819 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:55938.service: Deactivated successfully. Sep 12 22:53:43.672114 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 22:53:43.673194 systemd-logind[1550]: Session 8 logged out. 
Waiting for processes to exit. Sep 12 22:53:43.676512 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:55952.service - OpenSSH per-connection server daemon (10.0.0.1:55952). Sep 12 22:53:43.677237 systemd-logind[1550]: Removed session 8. Sep 12 22:53:43.754298 sshd[1807]: Accepted publickey for core from 10.0.0.1 port 55952 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:53:43.755965 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:53:43.761326 systemd-logind[1550]: New session 9 of user core. Sep 12 22:53:43.771535 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 22:53:43.829078 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 22:53:43.829533 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:53:44.766652 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 22:53:44.792915 (dockerd)[1831]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 22:53:45.598935 dockerd[1831]: time="2025-09-12T22:53:45.598845023Z" level=info msg="Starting up" Sep 12 22:53:45.599954 dockerd[1831]: time="2025-09-12T22:53:45.599928695Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 22:53:45.620037 dockerd[1831]: time="2025-09-12T22:53:45.619984266Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 22:53:45.982312 dockerd[1831]: time="2025-09-12T22:53:45.982062001Z" level=info msg="Loading containers: start." 
Sep 12 22:53:46.090341 kernel: Initializing XFRM netlink socket Sep 12 22:53:46.759147 systemd-networkd[1471]: docker0: Link UP Sep 12 22:53:46.767717 dockerd[1831]: time="2025-09-12T22:53:46.767640872Z" level=info msg="Loading containers: done." Sep 12 22:53:46.833246 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck924084129-merged.mount: Deactivated successfully. Sep 12 22:53:46.840383 dockerd[1831]: time="2025-09-12T22:53:46.840250221Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 22:53:46.840574 dockerd[1831]: time="2025-09-12T22:53:46.840464322Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 22:53:46.840676 dockerd[1831]: time="2025-09-12T22:53:46.840647375Z" level=info msg="Initializing buildkit" Sep 12 22:53:46.884144 dockerd[1831]: time="2025-09-12T22:53:46.884069708Z" level=info msg="Completed buildkit initialization" Sep 12 22:53:46.892478 dockerd[1831]: time="2025-09-12T22:53:46.892390833Z" level=info msg="Daemon has completed initialization" Sep 12 22:53:46.892673 dockerd[1831]: time="2025-09-12T22:53:46.892553528Z" level=info msg="API listen on /run/docker.sock" Sep 12 22:53:46.892769 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 22:53:47.860734 containerd[1570]: time="2025-09-12T22:53:47.860586576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 22:53:48.518337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070314147.mount: Deactivated successfully. 
Sep 12 22:53:50.222491 containerd[1570]: time="2025-09-12T22:53:50.222398546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:53:50.223167 containerd[1570]: time="2025-09-12T22:53:50.223102116Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 22:53:50.224440 containerd[1570]: time="2025-09-12T22:53:50.224395401Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:53:50.227981 containerd[1570]: time="2025-09-12T22:53:50.227934979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:53:50.229559 containerd[1570]: time="2025-09-12T22:53:50.229521835Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.368806608s" Sep 12 22:53:50.229618 containerd[1570]: time="2025-09-12T22:53:50.229565978Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 22:53:50.231412 containerd[1570]: time="2025-09-12T22:53:50.231120002Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 22:53:52.416548 containerd[1570]: time="2025-09-12T22:53:52.416454745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:53:52.417735 containerd[1570]: time="2025-09-12T22:53:52.417632263Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 22:53:52.419830 containerd[1570]: time="2025-09-12T22:53:52.419767688Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:53:52.424585 containerd[1570]: time="2025-09-12T22:53:52.424528125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:53:52.425646 containerd[1570]: time="2025-09-12T22:53:52.425600145Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 2.194435019s" Sep 12 22:53:52.425646 containerd[1570]: time="2025-09-12T22:53:52.425636774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 22:53:52.426314 containerd[1570]: time="2025-09-12T22:53:52.426246538Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 22:53:53.768148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 22:53:53.770385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:53:54.138543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 22:53:54.153795 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:53:54.529101 kubelet[2120]: E0912 22:53:54.528916 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:53:54.534118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:53:54.534425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:53:54.535098 systemd[1]: kubelet.service: Consumed 662ms CPU time, 110.1M memory peak. Sep 12 22:53:55.258854 containerd[1570]: time="2025-09-12T22:53:55.258778697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:53:55.259770 containerd[1570]: time="2025-09-12T22:53:55.259725552Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 22:53:55.261553 containerd[1570]: time="2025-09-12T22:53:55.261459143Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:53:55.264554 containerd[1570]: time="2025-09-12T22:53:55.264494846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:53:55.265991 containerd[1570]: time="2025-09-12T22:53:55.265924157Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id 
\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 2.839620812s" Sep 12 22:53:55.265991 containerd[1570]: time="2025-09-12T22:53:55.265986143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 22:53:55.266862 containerd[1570]: time="2025-09-12T22:53:55.266623538Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 22:53:59.383466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1571851553.mount: Deactivated successfully. Sep 12 22:54:00.822352 containerd[1570]: time="2025-09-12T22:54:00.822242347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:00.890797 containerd[1570]: time="2025-09-12T22:54:00.890668911Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 22:54:00.968787 containerd[1570]: time="2025-09-12T22:54:00.968675381Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:01.064637 containerd[1570]: time="2025-09-12T22:54:01.064544199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:01.065287 containerd[1570]: time="2025-09-12T22:54:01.065215548Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo 
tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 5.798550893s" Sep 12 22:54:01.065342 containerd[1570]: time="2025-09-12T22:54:01.065288585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 22:54:01.065937 containerd[1570]: time="2025-09-12T22:54:01.065911063Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 22:54:03.158059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292277216.mount: Deactivated successfully. Sep 12 22:54:04.767711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 22:54:04.769880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:54:05.055796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 22:54:05.079792 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:54:05.135039 containerd[1570]: time="2025-09-12T22:54:05.134946797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:05.136071 containerd[1570]: time="2025-09-12T22:54:05.135977948Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 22:54:05.138356 containerd[1570]: time="2025-09-12T22:54:05.138289218Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:05.143033 containerd[1570]: time="2025-09-12T22:54:05.142382874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:05.144485 containerd[1570]: time="2025-09-12T22:54:05.144437803Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.078490954s" Sep 12 22:54:05.144554 containerd[1570]: time="2025-09-12T22:54:05.144488660Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 22:54:05.145704 containerd[1570]: time="2025-09-12T22:54:05.145668165Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 22:54:05.148314 
kubelet[2201]: E0912 22:54:05.147988 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:54:05.153346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:54:05.153585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:54:05.154015 systemd[1]: kubelet.service: Consumed 301ms CPU time, 110.6M memory peak. Sep 12 22:54:05.759282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983820377.mount: Deactivated successfully. Sep 12 22:54:05.768944 containerd[1570]: time="2025-09-12T22:54:05.768845025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:54:05.770101 containerd[1570]: time="2025-09-12T22:54:05.770072321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 22:54:05.773300 containerd[1570]: time="2025-09-12T22:54:05.772248652Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:54:05.775357 containerd[1570]: time="2025-09-12T22:54:05.775241164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:54:05.776413 containerd[1570]: time="2025-09-12T22:54:05.776360454Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 630.560748ms" Sep 12 22:54:05.776413 containerd[1570]: time="2025-09-12T22:54:05.776403055Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 22:54:05.777287 containerd[1570]: time="2025-09-12T22:54:05.777233694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 22:54:06.499855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882972321.mount: Deactivated successfully. Sep 12 22:54:09.468088 containerd[1570]: time="2025-09-12T22:54:09.467942544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:09.469115 containerd[1570]: time="2025-09-12T22:54:09.469063006Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 22:54:09.471031 containerd[1570]: time="2025-09-12T22:54:09.470961971Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:09.474737 containerd[1570]: time="2025-09-12T22:54:09.474697622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:09.476307 containerd[1570]: time="2025-09-12T22:54:09.476175114Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.698881748s" Sep 12 22:54:09.476307 containerd[1570]: time="2025-09-12T22:54:09.476235309Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 22:54:13.030318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:54:13.030569 systemd[1]: kubelet.service: Consumed 301ms CPU time, 110.6M memory peak. Sep 12 22:54:13.034069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:54:13.076711 systemd[1]: Reload requested from client PID 2299 ('systemctl') (unit session-9.scope)... Sep 12 22:54:13.076760 systemd[1]: Reloading... Sep 12 22:54:13.345321 zram_generator::config[2342]: No configuration found. Sep 12 22:54:14.730782 systemd[1]: Reloading finished in 1653 ms. Sep 12 22:54:14.766110 update_engine[1553]: I20250912 22:54:14.765997 1553 update_attempter.cc:509] Updating boot flags... Sep 12 22:54:15.449124 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 22:54:15.449325 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 22:54:15.449725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:54:15.449792 systemd[1]: kubelet.service: Consumed 206ms CPU time, 98.3M memory peak. Sep 12 22:54:15.452317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:54:16.080891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 22:54:16.106967 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:54:16.216769 kubelet[2407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:54:16.216769 kubelet[2407]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 22:54:16.216769 kubelet[2407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:54:16.217396 kubelet[2407]: I0912 22:54:16.216860 2407 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:54:16.765136 kubelet[2407]: I0912 22:54:16.765075 2407 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 22:54:16.765136 kubelet[2407]: I0912 22:54:16.765116 2407 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:54:16.765469 kubelet[2407]: I0912 22:54:16.765445 2407 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 22:54:16.790021 kubelet[2407]: E0912 22:54:16.789946 2407 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:16.791386 kubelet[2407]: I0912 
22:54:16.791329 2407 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:54:16.800657 kubelet[2407]: I0912 22:54:16.800619 2407 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:54:16.807914 kubelet[2407]: I0912 22:54:16.807846 2407 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 22:54:16.808160 kubelet[2407]: I0912 22:54:16.807971 2407 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 22:54:16.808160 kubelet[2407]: I0912 22:54:16.808109 2407 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:54:16.808351 kubelet[2407]: I0912 22:54:16.808136 2407 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value"
:{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 22:54:16.808351 kubelet[2407]: I0912 22:54:16.808351 2407 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:54:16.808642 kubelet[2407]: I0912 22:54:16.808361 2407 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 22:54:16.808642 kubelet[2407]: I0912 22:54:16.808498 2407 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:54:16.810761 kubelet[2407]: I0912 22:54:16.810723 2407 kubelet.go:408] "Attempting to sync node with API server" Sep 12 22:54:16.810761 kubelet[2407]: I0912 22:54:16.810751 2407 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:54:16.810862 kubelet[2407]: I0912 22:54:16.810792 2407 kubelet.go:314] "Adding apiserver pod source" Sep 12 22:54:16.810862 kubelet[2407]: I0912 22:54:16.810816 2407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:54:16.816014 kubelet[2407]: I0912 22:54:16.815952 2407 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:54:16.816530 kubelet[2407]: I0912 22:54:16.816485 2407 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 22:54:16.817045 kubelet[2407]: W0912 22:54:16.816939 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.34:6443: connect: connection refused Sep 12 22:54:16.817100 kubelet[2407]: E0912 22:54:16.817057 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:16.818434 kubelet[2407]: W0912 22:54:16.818364 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:54:16.818531 kubelet[2407]: E0912 22:54:16.818432 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:16.820259 kubelet[2407]: W0912 22:54:16.820196 2407 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 12 22:54:16.823674 kubelet[2407]: I0912 22:54:16.823246 2407 server.go:1274] "Started kubelet" Sep 12 22:54:16.839339 kubelet[2407]: I0912 22:54:16.839214 2407 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 22:54:16.839839 kubelet[2407]: I0912 22:54:16.823325 2407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:54:16.841499 kubelet[2407]: I0912 22:54:16.840404 2407 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:54:16.841499 kubelet[2407]: I0912 22:54:16.840730 2407 server.go:449] "Adding debug handlers to kubelet server" Sep 12 22:54:16.842518 kubelet[2407]: I0912 22:54:16.842475 2407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:54:16.842899 kubelet[2407]: E0912 22:54:16.841562 2407 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aae56660e352 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 22:54:16.82319445 +0000 UTC m=+0.707586006,LastTimestamp:2025-09-12 22:54:16.82319445 +0000 UTC m=+0.707586006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 22:54:16.843216 kubelet[2407]: I0912 22:54:16.843123 2407 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:54:16.843518 kubelet[2407]: I0912 22:54:16.843498 2407 
volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 22:54:16.844675 kubelet[2407]: E0912 22:54:16.844195 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:54:16.845641 kubelet[2407]: I0912 22:54:16.844871 2407 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 22:54:16.845641 kubelet[2407]: I0912 22:54:16.844942 2407 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:54:16.845641 kubelet[2407]: E0912 22:54:16.845159 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Sep 12 22:54:16.845641 kubelet[2407]: W0912 22:54:16.845518 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:54:16.845835 kubelet[2407]: I0912 22:54:16.845796 2407 factory.go:221] Registration of the systemd container factory successfully Sep 12 22:54:16.845929 kubelet[2407]: I0912 22:54:16.845890 2407 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:54:16.845929 kubelet[2407]: E0912 22:54:16.845933 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:16.847382 kubelet[2407]: E0912 22:54:16.847354 2407 kubelet.go:1478] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 22:54:16.847488 kubelet[2407]: I0912 22:54:16.847382 2407 factory.go:221] Registration of the containerd container factory successfully Sep 12 22:54:16.867484 kubelet[2407]: I0912 22:54:16.867446 2407 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 22:54:16.867484 kubelet[2407]: I0912 22:54:16.867467 2407 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 22:54:16.867484 kubelet[2407]: I0912 22:54:16.867490 2407 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:54:16.871687 kubelet[2407]: I0912 22:54:16.871609 2407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 22:54:16.873324 kubelet[2407]: I0912 22:54:16.873247 2407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 22:54:16.873324 kubelet[2407]: I0912 22:54:16.873302 2407 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 22:54:16.873432 kubelet[2407]: I0912 22:54:16.873329 2407 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 22:54:16.873432 kubelet[2407]: E0912 22:54:16.873383 2407 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:54:16.883090 kubelet[2407]: W0912 22:54:16.883034 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:54:16.883228 kubelet[2407]: E0912 22:54:16.883098 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:16.944706 kubelet[2407]: E0912 22:54:16.944606 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:54:16.974142 kubelet[2407]: E0912 22:54:16.974059 2407 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 22:54:17.045659 kubelet[2407]: E0912 22:54:17.045370 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:54:17.046204 kubelet[2407]: E0912 22:54:17.045937 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Sep 12 22:54:17.146429 kubelet[2407]: E0912 22:54:17.146353 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:54:17.174713 kubelet[2407]: E0912 22:54:17.174625 2407 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 22:54:17.184202 kubelet[2407]: I0912 22:54:17.184087 2407 policy_none.go:49] "None policy: Start" Sep 12 22:54:17.186080 kubelet[2407]: I0912 22:54:17.186037 2407 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 22:54:17.186292 kubelet[2407]: I0912 22:54:17.186103 2407 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:54:17.199486 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 22:54:17.227559 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 22:54:17.232961 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 22:54:17.247586 kubelet[2407]: E0912 22:54:17.247495 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:54:17.256287 kubelet[2407]: I0912 22:54:17.256071 2407 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 22:54:17.256516 kubelet[2407]: I0912 22:54:17.256471 2407 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:54:17.256516 kubelet[2407]: I0912 22:54:17.256499 2407 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:54:17.256991 kubelet[2407]: I0912 22:54:17.256895 2407 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:54:17.259959 kubelet[2407]: E0912 22:54:17.259903 2407 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 22:54:17.360001 kubelet[2407]: I0912 22:54:17.359859 2407 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:54:17.360537 kubelet[2407]: E0912 22:54:17.360472 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 12 22:54:17.447505 kubelet[2407]: E0912 22:54:17.447398 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Sep 12 22:54:17.565787 kubelet[2407]: I0912 22:54:17.565736 2407 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:54:17.566467 kubelet[2407]: E0912 22:54:17.566397 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 12 22:54:17.588456 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 12 22:54:17.618713 systemd[1]: Created slice kubepods-burstable-pod40501deced19feea3ab27d21ffbff400.slice - libcontainer container kubepods-burstable-pod40501deced19feea3ab27d21ffbff400.slice. Sep 12 22:54:17.623023 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 12 22:54:17.649591 kubelet[2407]: I0912 22:54:17.649491 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40501deced19feea3ab27d21ffbff400-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"40501deced19feea3ab27d21ffbff400\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:54:17.649591 kubelet[2407]: I0912 22:54:17.649556 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:17.649591 kubelet[2407]: I0912 22:54:17.649598 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:17.649873 kubelet[2407]: I0912 22:54:17.649630 2407 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:17.649873 kubelet[2407]: I0912 22:54:17.649653 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:17.649873 kubelet[2407]: I0912 22:54:17.649680 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 22:54:17.649873 kubelet[2407]: I0912 22:54:17.649704 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40501deced19feea3ab27d21ffbff400-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"40501deced19feea3ab27d21ffbff400\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:54:17.649873 kubelet[2407]: I0912 22:54:17.649732 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40501deced19feea3ab27d21ffbff400-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"40501deced19feea3ab27d21ffbff400\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:54:17.650061 kubelet[2407]: I0912 22:54:17.649753 2407 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:17.674633 kubelet[2407]: W0912 22:54:17.674465 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:54:17.674633 kubelet[2407]: E0912 22:54:17.674585 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:17.915529 kubelet[2407]: E0912 22:54:17.915320 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:17.917322 containerd[1570]: time="2025-09-12T22:54:17.917223197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 22:54:17.923757 kubelet[2407]: E0912 22:54:17.923140 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:17.924688 containerd[1570]: time="2025-09-12T22:54:17.924521045Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:40501deced19feea3ab27d21ffbff400,Namespace:kube-system,Attempt:0,}" Sep 12 22:54:17.926181 kubelet[2407]: E0912 22:54:17.926110 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:17.926828 containerd[1570]: time="2025-09-12T22:54:17.926718824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 22:54:17.969472 kubelet[2407]: I0912 22:54:17.969428 2407 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:54:17.970297 kubelet[2407]: E0912 22:54:17.970215 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 12 22:54:18.001678 containerd[1570]: time="2025-09-12T22:54:18.000709680Z" level=info msg="connecting to shim b433785dcbca41cd7254804bbd69ab204a78d6b1c01a6b28dcdac993d10525ff" address="unix:///run/containerd/s/0a011c981a302badac82a981f8937dc4a1b06fe423b4642c961e537c93475a77" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:54:18.016100 containerd[1570]: time="2025-09-12T22:54:18.015845725Z" level=info msg="connecting to shim c82c37d79f142ac739e9e6717799f6a14429d27677435e1e34ff2f3565ffed2b" address="unix:///run/containerd/s/e492a7ad997976a86e717f554727a32bef9e07bc9b8197993151886e47b4f584" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:54:18.018018 containerd[1570]: time="2025-09-12T22:54:18.017970053Z" level=info msg="connecting to shim cabf89e01d47896d004966ce72f78b4e9e851fb8ee7b47dcbef47eabec2c4ee4" address="unix:///run/containerd/s/2cdc383c7d5a4e5d65aa2584e68db89649391d9abd14da8906eba9780a529341" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:54:18.068659 systemd[1]: Started 
cri-containerd-b433785dcbca41cd7254804bbd69ab204a78d6b1c01a6b28dcdac993d10525ff.scope - libcontainer container b433785dcbca41cd7254804bbd69ab204a78d6b1c01a6b28dcdac993d10525ff. Sep 12 22:54:18.074044 systemd[1]: Started cri-containerd-c82c37d79f142ac739e9e6717799f6a14429d27677435e1e34ff2f3565ffed2b.scope - libcontainer container c82c37d79f142ac739e9e6717799f6a14429d27677435e1e34ff2f3565ffed2b. Sep 12 22:54:18.083745 systemd[1]: Started cri-containerd-cabf89e01d47896d004966ce72f78b4e9e851fb8ee7b47dcbef47eabec2c4ee4.scope - libcontainer container cabf89e01d47896d004966ce72f78b4e9e851fb8ee7b47dcbef47eabec2c4ee4. Sep 12 22:54:18.183099 kubelet[2407]: W0912 22:54:18.182877 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:54:18.183099 kubelet[2407]: E0912 22:54:18.182993 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:18.249089 kubelet[2407]: E0912 22:54:18.249022 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="1.6s" Sep 12 22:54:18.256695 kubelet[2407]: W0912 22:54:18.256619 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:54:18.256695 kubelet[2407]: E0912 
22:54:18.256694 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:18.308945 containerd[1570]: time="2025-09-12T22:54:18.308887231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b433785dcbca41cd7254804bbd69ab204a78d6b1c01a6b28dcdac993d10525ff\"" Sep 12 22:54:18.310390 kubelet[2407]: E0912 22:54:18.310345 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:18.312355 containerd[1570]: time="2025-09-12T22:54:18.312317217Z" level=info msg="CreateContainer within sandbox \"b433785dcbca41cd7254804bbd69ab204a78d6b1c01a6b28dcdac993d10525ff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 22:54:18.359513 kubelet[2407]: W0912 22:54:18.359407 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:54:18.359513 kubelet[2407]: E0912 22:54:18.359487 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:18.413446 containerd[1570]: time="2025-09-12T22:54:18.413356199Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:40501deced19feea3ab27d21ffbff400,Namespace:kube-system,Attempt:0,} returns sandbox id \"cabf89e01d47896d004966ce72f78b4e9e851fb8ee7b47dcbef47eabec2c4ee4\"" Sep 12 22:54:18.414438 kubelet[2407]: E0912 22:54:18.414379 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:18.416377 containerd[1570]: time="2025-09-12T22:54:18.416344470Z" level=info msg="CreateContainer within sandbox \"cabf89e01d47896d004966ce72f78b4e9e851fb8ee7b47dcbef47eabec2c4ee4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 22:54:18.450348 containerd[1570]: time="2025-09-12T22:54:18.449470487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c82c37d79f142ac739e9e6717799f6a14429d27677435e1e34ff2f3565ffed2b\"" Sep 12 22:54:18.450474 kubelet[2407]: E0912 22:54:18.450413 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:18.452787 containerd[1570]: time="2025-09-12T22:54:18.452717086Z" level=info msg="CreateContainer within sandbox \"c82c37d79f142ac739e9e6717799f6a14429d27677435e1e34ff2f3565ffed2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 22:54:18.772366 kubelet[2407]: I0912 22:54:18.772320 2407 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:54:18.772878 kubelet[2407]: E0912 22:54:18.772832 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 12 22:54:18.874678 kubelet[2407]: E0912 22:54:18.874599 2407 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:19.097944 containerd[1570]: time="2025-09-12T22:54:19.097625745Z" level=info msg="Container d69b5af8bcaf0f5ec0005b935ae1ab3461246fc270096ade9de2a598a38dd242: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:54:19.373004 kubelet[2407]: W0912 22:54:19.372827 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 12 22:54:19.373004 kubelet[2407]: E0912 22:54:19.372895 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:54:19.434737 containerd[1570]: time="2025-09-12T22:54:19.434682096Z" level=info msg="Container 2212dd8606610b57aac43d8e28433ad3425605431c73bf1daab51e243cbac9bd: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:54:19.693365 kubelet[2407]: E0912 22:54:19.693070 2407 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864aae56660e352 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 22:54:16.82319445 +0000 UTC m=+0.707586006,LastTimestamp:2025-09-12 22:54:16.82319445 +0000 UTC m=+0.707586006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 22:54:19.850437 kubelet[2407]: E0912 22:54:19.850325 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="3.2s" Sep 12 22:54:19.861420 containerd[1570]: time="2025-09-12T22:54:19.861336374Z" level=info msg="CreateContainer within sandbox \"b433785dcbca41cd7254804bbd69ab204a78d6b1c01a6b28dcdac993d10525ff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d69b5af8bcaf0f5ec0005b935ae1ab3461246fc270096ade9de2a598a38dd242\"" Sep 12 22:54:19.862226 containerd[1570]: time="2025-09-12T22:54:19.862195899Z" level=info msg="StartContainer for \"d69b5af8bcaf0f5ec0005b935ae1ab3461246fc270096ade9de2a598a38dd242\"" Sep 12 22:54:19.863866 containerd[1570]: time="2025-09-12T22:54:19.863824056Z" level=info msg="connecting to shim d69b5af8bcaf0f5ec0005b935ae1ab3461246fc270096ade9de2a598a38dd242" address="unix:///run/containerd/s/0a011c981a302badac82a981f8937dc4a1b06fe423b4642c961e537c93475a77" protocol=ttrpc version=3 Sep 12 22:54:19.888547 systemd[1]: Started cri-containerd-d69b5af8bcaf0f5ec0005b935ae1ab3461246fc270096ade9de2a598a38dd242.scope - libcontainer container d69b5af8bcaf0f5ec0005b935ae1ab3461246fc270096ade9de2a598a38dd242. 
Sep 12 22:54:19.920382 containerd[1570]: time="2025-09-12T22:54:19.919935627Z" level=info msg="Container da307b780f1e8b100e3b7c9980e5ab1ea3d1190231639408c36ad17554b45d3a: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:54:20.127686 containerd[1570]: time="2025-09-12T22:54:20.127562660Z" level=info msg="StartContainer for \"d69b5af8bcaf0f5ec0005b935ae1ab3461246fc270096ade9de2a598a38dd242\" returns successfully" Sep 12 22:54:20.224081 containerd[1570]: time="2025-09-12T22:54:20.224024198Z" level=info msg="CreateContainer within sandbox \"c82c37d79f142ac739e9e6717799f6a14429d27677435e1e34ff2f3565ffed2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"da307b780f1e8b100e3b7c9980e5ab1ea3d1190231639408c36ad17554b45d3a\"" Sep 12 22:54:20.225008 containerd[1570]: time="2025-09-12T22:54:20.224945589Z" level=info msg="StartContainer for \"da307b780f1e8b100e3b7c9980e5ab1ea3d1190231639408c36ad17554b45d3a\"" Sep 12 22:54:20.226581 containerd[1570]: time="2025-09-12T22:54:20.226547677Z" level=info msg="connecting to shim da307b780f1e8b100e3b7c9980e5ab1ea3d1190231639408c36ad17554b45d3a" address="unix:///run/containerd/s/e492a7ad997976a86e717f554727a32bef9e07bc9b8197993151886e47b4f584" protocol=ttrpc version=3 Sep 12 22:54:20.230689 containerd[1570]: time="2025-09-12T22:54:20.230525612Z" level=info msg="CreateContainer within sandbox \"cabf89e01d47896d004966ce72f78b4e9e851fb8ee7b47dcbef47eabec2c4ee4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2212dd8606610b57aac43d8e28433ad3425605431c73bf1daab51e243cbac9bd\"" Sep 12 22:54:20.232331 containerd[1570]: time="2025-09-12T22:54:20.231457632Z" level=info msg="StartContainer for \"2212dd8606610b57aac43d8e28433ad3425605431c73bf1daab51e243cbac9bd\"" Sep 12 22:54:20.234170 containerd[1570]: time="2025-09-12T22:54:20.234088272Z" level=info msg="connecting to shim 2212dd8606610b57aac43d8e28433ad3425605431c73bf1daab51e243cbac9bd" 
address="unix:///run/containerd/s/2cdc383c7d5a4e5d65aa2584e68db89649391d9abd14da8906eba9780a529341" protocol=ttrpc version=3 Sep 12 22:54:20.257418 systemd[1]: Started cri-containerd-da307b780f1e8b100e3b7c9980e5ab1ea3d1190231639408c36ad17554b45d3a.scope - libcontainer container da307b780f1e8b100e3b7c9980e5ab1ea3d1190231639408c36ad17554b45d3a. Sep 12 22:54:20.261591 systemd[1]: Started cri-containerd-2212dd8606610b57aac43d8e28433ad3425605431c73bf1daab51e243cbac9bd.scope - libcontainer container 2212dd8606610b57aac43d8e28433ad3425605431c73bf1daab51e243cbac9bd. Sep 12 22:54:20.375981 kubelet[2407]: I0912 22:54:20.375941 2407 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:54:20.377175 kubelet[2407]: E0912 22:54:20.377025 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 12 22:54:20.659812 containerd[1570]: time="2025-09-12T22:54:20.659606810Z" level=info msg="StartContainer for \"2212dd8606610b57aac43d8e28433ad3425605431c73bf1daab51e243cbac9bd\" returns successfully" Sep 12 22:54:20.662053 containerd[1570]: time="2025-09-12T22:54:20.662023615Z" level=info msg="StartContainer for \"da307b780f1e8b100e3b7c9980e5ab1ea3d1190231639408c36ad17554b45d3a\" returns successfully" Sep 12 22:54:20.901086 kubelet[2407]: E0912 22:54:20.901042 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:20.904283 kubelet[2407]: E0912 22:54:20.904234 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:20.907837 kubelet[2407]: E0912 22:54:20.907809 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:21.909298 kubelet[2407]: E0912 22:54:21.908865 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:21.909298 kubelet[2407]: E0912 22:54:21.909012 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:21.909298 kubelet[2407]: E0912 22:54:21.909042 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:22.779013 kubelet[2407]: E0912 22:54:22.778961 2407 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 12 22:54:22.816434 kubelet[2407]: I0912 22:54:22.816335 2407 apiserver.go:52] "Watching apiserver" Sep 12 22:54:22.845974 kubelet[2407]: I0912 22:54:22.845880 2407 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 22:54:22.910110 kubelet[2407]: E0912 22:54:22.910065 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:22.910110 kubelet[2407]: E0912 22:54:22.910129 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:23.055418 kubelet[2407]: E0912 22:54:23.054903 2407 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 22:54:23.148094 
kubelet[2407]: E0912 22:54:23.148004 2407 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 12 22:54:23.578670 kubelet[2407]: I0912 22:54:23.578627 2407 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:54:23.776816 kubelet[2407]: I0912 22:54:23.776743 2407 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 22:54:25.522768 kubelet[2407]: E0912 22:54:25.522681 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:25.916254 kubelet[2407]: E0912 22:54:25.916101 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:26.905206 kubelet[2407]: I0912 22:54:26.905116 2407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.905071065 podStartE2EDuration="1.905071065s" podCreationTimestamp="2025-09-12 22:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:54:26.904984972 +0000 UTC m=+10.789376528" watchObservedRunningTime="2025-09-12 22:54:26.905071065 +0000 UTC m=+10.789462621" Sep 12 22:54:27.543421 systemd[1]: Reload requested from client PID 2684 ('systemctl') (unit session-9.scope)... Sep 12 22:54:27.543452 systemd[1]: Reloading... Sep 12 22:54:27.655759 zram_generator::config[2730]: No configuration found. Sep 12 22:54:27.979248 systemd[1]: Reloading finished in 435 ms. Sep 12 22:54:28.009580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 22:54:28.028306 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 22:54:28.028705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:54:28.028790 systemd[1]: kubelet.service: Consumed 1.522s CPU time, 131.1M memory peak. Sep 12 22:54:28.032516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:54:28.310991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:54:28.323869 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:54:28.375070 kubelet[2772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:54:28.375070 kubelet[2772]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 22:54:28.375070 kubelet[2772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 22:54:28.377296 kubelet[2772]: I0912 22:54:28.377054 2772 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:54:28.385155 kubelet[2772]: I0912 22:54:28.385078 2772 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 22:54:28.385155 kubelet[2772]: I0912 22:54:28.385119 2772 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:54:28.385510 kubelet[2772]: I0912 22:54:28.385476 2772 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 22:54:28.387180 kubelet[2772]: I0912 22:54:28.387136 2772 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 22:54:28.389665 kubelet[2772]: I0912 22:54:28.389590 2772 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:54:28.397037 kubelet[2772]: I0912 22:54:28.396980 2772 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:54:28.402721 kubelet[2772]: I0912 22:54:28.402658 2772 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 22:54:28.402946 kubelet[2772]: I0912 22:54:28.402913 2772 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 22:54:28.403148 kubelet[2772]: I0912 22:54:28.403100 2772 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:54:28.403446 kubelet[2772]: I0912 22:54:28.403139 2772 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 12 22:54:28.403566 kubelet[2772]: I0912 22:54:28.403453 2772 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:54:28.403566 kubelet[2772]: I0912 22:54:28.403468 2772 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 22:54:28.403566 kubelet[2772]: I0912 22:54:28.403512 2772 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:54:28.403715 kubelet[2772]: I0912 22:54:28.403686 2772 kubelet.go:408] "Attempting to sync node with API server" Sep 12 22:54:28.403715 kubelet[2772]: I0912 22:54:28.403709 2772 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:54:28.403768 kubelet[2772]: I0912 22:54:28.403756 2772 kubelet.go:314] "Adding apiserver pod source" Sep 12 22:54:28.403793 kubelet[2772]: I0912 22:54:28.403771 2772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:54:28.404622 kubelet[2772]: I0912 22:54:28.404595 2772 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:54:28.405143 kubelet[2772]: I0912 22:54:28.405089 2772 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 22:54:28.405732 kubelet[2772]: I0912 22:54:28.405690 2772 server.go:1274] "Started kubelet" Sep 12 22:54:28.408367 kubelet[2772]: I0912 22:54:28.408321 2772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:54:28.410289 kubelet[2772]: I0912 22:54:28.408727 2772 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:54:28.410289 kubelet[2772]: I0912 22:54:28.408808 2772 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 22:54:28.410289 kubelet[2772]: I0912 22:54:28.410071 2772 server.go:449] "Adding debug handlers to kubelet server" Sep 12 22:54:28.412434 
kubelet[2772]: I0912 22:54:28.412398 2772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:54:28.412592 kubelet[2772]: I0912 22:54:28.412559 2772 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:54:28.415347 kubelet[2772]: I0912 22:54:28.415304 2772 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 22:54:28.415465 kubelet[2772]: I0912 22:54:28.415418 2772 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 22:54:28.415611 kubelet[2772]: I0912 22:54:28.415581 2772 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:54:28.419094 kubelet[2772]: E0912 22:54:28.419027 2772 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 22:54:28.421692 kubelet[2772]: I0912 22:54:28.419814 2772 factory.go:221] Registration of the systemd container factory successfully Sep 12 22:54:28.421692 kubelet[2772]: I0912 22:54:28.419957 2772 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:54:28.421692 kubelet[2772]: E0912 22:54:28.420440 2772 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 22:54:28.426503 kubelet[2772]: I0912 22:54:28.426461 2772 factory.go:221] Registration of the containerd container factory successfully Sep 12 22:54:28.433491 kubelet[2772]: I0912 22:54:28.433406 2772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 22:54:28.435288 kubelet[2772]: I0912 22:54:28.435139 2772 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 22:54:28.435288 kubelet[2772]: I0912 22:54:28.435175 2772 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 22:54:28.435409 kubelet[2772]: I0912 22:54:28.435363 2772 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 22:54:28.435453 kubelet[2772]: E0912 22:54:28.435432 2772 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:54:28.468235 kubelet[2772]: I0912 22:54:28.468184 2772 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 22:54:28.468235 kubelet[2772]: I0912 22:54:28.468206 2772 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 22:54:28.468235 kubelet[2772]: I0912 22:54:28.468228 2772 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:54:28.468572 kubelet[2772]: I0912 22:54:28.468453 2772 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 22:54:28.468572 kubelet[2772]: I0912 22:54:28.468467 2772 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 22:54:28.468572 kubelet[2772]: I0912 22:54:28.468488 2772 policy_none.go:49] "None policy: Start" Sep 12 22:54:28.469578 kubelet[2772]: I0912 22:54:28.469546 2772 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 22:54:28.469578 kubelet[2772]: I0912 22:54:28.469581 2772 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:54:28.469766 kubelet[2772]: I0912 22:54:28.469742 2772 state_mem.go:75] "Updated machine memory state" Sep 12 22:54:28.475995 kubelet[2772]: I0912 22:54:28.475928 2772 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 22:54:28.476259 kubelet[2772]: I0912 22:54:28.476231 2772 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:54:28.476344 kubelet[2772]: I0912 22:54:28.476250 2772 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:54:28.477101 kubelet[2772]: I0912 22:54:28.476774 2772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:54:28.582757 kubelet[2772]: I0912 22:54:28.582609 2772 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 22:54:28.616783 kubelet[2772]: I0912 22:54:28.616707 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 22:54:28.616783 kubelet[2772]: I0912 22:54:28.616763 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40501deced19feea3ab27d21ffbff400-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"40501deced19feea3ab27d21ffbff400\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:54:28.616783 kubelet[2772]: I0912 22:54:28.616795 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:28.617073 kubelet[2772]: I0912 22:54:28.616816 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:28.617073 kubelet[2772]: I0912 22:54:28.616840 2772 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40501deced19feea3ab27d21ffbff400-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"40501deced19feea3ab27d21ffbff400\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:54:28.617073 kubelet[2772]: I0912 22:54:28.616859 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40501deced19feea3ab27d21ffbff400-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"40501deced19feea3ab27d21ffbff400\") " pod="kube-system/kube-apiserver-localhost" Sep 12 22:54:28.617073 kubelet[2772]: I0912 22:54:28.616877 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:28.617073 kubelet[2772]: I0912 22:54:28.616900 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:28.617189 kubelet[2772]: I0912 22:54:28.616918 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:28.723089 kubelet[2772]: E0912 22:54:28.723032 2772 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 22:54:28.723391 kubelet[2772]: E0912 22:54:28.723343 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:28.858802 kubelet[2772]: I0912 22:54:28.858597 2772 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 22:54:28.858802 kubelet[2772]: I0912 22:54:28.858734 2772 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 22:54:28.974227 kubelet[2772]: E0912 22:54:28.974092 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:28.974227 kubelet[2772]: E0912 22:54:28.974089 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:29.405127 kubelet[2772]: I0912 22:54:29.405044 2772 apiserver.go:52] "Watching apiserver" Sep 12 22:54:29.416504 kubelet[2772]: I0912 22:54:29.416418 2772 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 22:54:29.451510 kubelet[2772]: E0912 22:54:29.451453 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:29.473213 kubelet[2772]: E0912 22:54:29.473122 2772 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 22:54:29.473419 kubelet[2772]: E0912 22:54:29.473379 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:29.473883 kubelet[2772]: E0912 22:54:29.473857 2772 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 22:54:29.474015 kubelet[2772]: E0912 22:54:29.473991 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:29.489288 kubelet[2772]: I0912 22:54:29.489173 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.48914539 podStartE2EDuration="1.48914539s" podCreationTimestamp="2025-09-12 22:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:54:29.474235753 +0000 UTC m=+1.145263083" watchObservedRunningTime="2025-09-12 22:54:29.48914539 +0000 UTC m=+1.160172720" Sep 12 22:54:29.514307 kubelet[2772]: I0912 22:54:29.513468 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5134442369999999 podStartE2EDuration="1.513444237s" podCreationTimestamp="2025-09-12 22:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:54:29.512602071 +0000 UTC m=+1.183629401" watchObservedRunningTime="2025-09-12 22:54:29.513444237 +0000 UTC m=+1.184471567" Sep 12 22:54:30.453723 kubelet[2772]: E0912 22:54:30.453655 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:30.454996 kubelet[2772]: E0912 22:54:30.454957 2772 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:32.025252 kubelet[2772]: I0912 22:54:32.025171 2772 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 22:54:32.026257 containerd[1570]: time="2025-09-12T22:54:32.025713889Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 22:54:32.026789 kubelet[2772]: I0912 22:54:32.026734 2772 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 22:54:33.600826 systemd[1]: Created slice kubepods-besteffort-podc3faed44_9386_4bc3_be58_674a1c3072be.slice - libcontainer container kubepods-besteffort-podc3faed44_9386_4bc3_be58_674a1c3072be.slice. Sep 12 22:54:33.764553 kubelet[2772]: I0912 22:54:33.764447 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3faed44-9386-4bc3-be58-674a1c3072be-kube-proxy\") pod \"kube-proxy-6242t\" (UID: \"c3faed44-9386-4bc3-be58-674a1c3072be\") " pod="kube-system/kube-proxy-6242t" Sep 12 22:54:33.764553 kubelet[2772]: I0912 22:54:33.764502 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3faed44-9386-4bc3-be58-674a1c3072be-lib-modules\") pod \"kube-proxy-6242t\" (UID: \"c3faed44-9386-4bc3-be58-674a1c3072be\") " pod="kube-system/kube-proxy-6242t" Sep 12 22:54:33.765113 kubelet[2772]: I0912 22:54:33.764597 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3faed44-9386-4bc3-be58-674a1c3072be-xtables-lock\") pod \"kube-proxy-6242t\" (UID: \"c3faed44-9386-4bc3-be58-674a1c3072be\") " 
pod="kube-system/kube-proxy-6242t" Sep 12 22:54:33.765113 kubelet[2772]: I0912 22:54:33.764626 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlgb8\" (UniqueName: \"kubernetes.io/projected/c3faed44-9386-4bc3-be58-674a1c3072be-kube-api-access-xlgb8\") pod \"kube-proxy-6242t\" (UID: \"c3faed44-9386-4bc3-be58-674a1c3072be\") " pod="kube-system/kube-proxy-6242t" Sep 12 22:54:34.159840 systemd[1]: Created slice kubepods-besteffort-podedc997f4_11dd_4d53_ab90_2e7350453f71.slice - libcontainer container kubepods-besteffort-podedc997f4_11dd_4d53_ab90_2e7350453f71.slice. Sep 12 22:54:34.167042 kubelet[2772]: I0912 22:54:34.166975 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78w2w\" (UniqueName: \"kubernetes.io/projected/edc997f4-11dd-4d53-ab90-2e7350453f71-kube-api-access-78w2w\") pod \"tigera-operator-58fc44c59b-bs9f5\" (UID: \"edc997f4-11dd-4d53-ab90-2e7350453f71\") " pod="tigera-operator/tigera-operator-58fc44c59b-bs9f5" Sep 12 22:54:34.167042 kubelet[2772]: I0912 22:54:34.167037 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/edc997f4-11dd-4d53-ab90-2e7350453f71-var-lib-calico\") pod \"tigera-operator-58fc44c59b-bs9f5\" (UID: \"edc997f4-11dd-4d53-ab90-2e7350453f71\") " pod="tigera-operator/tigera-operator-58fc44c59b-bs9f5" Sep 12 22:54:34.175847 kubelet[2772]: E0912 22:54:34.174639 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:34.217461 kubelet[2772]: E0912 22:54:34.217395 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:34.218658 
containerd[1570]: time="2025-09-12T22:54:34.218588170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6242t,Uid:c3faed44-9386-4bc3-be58-674a1c3072be,Namespace:kube-system,Attempt:0,}" Sep 12 22:54:34.461878 kubelet[2772]: E0912 22:54:34.461693 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:34.470208 containerd[1570]: time="2025-09-12T22:54:34.470115969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-bs9f5,Uid:edc997f4-11dd-4d53-ab90-2e7350453f71,Namespace:tigera-operator,Attempt:0,}" Sep 12 22:54:34.921558 containerd[1570]: time="2025-09-12T22:54:34.921506366Z" level=info msg="connecting to shim 218a4b7fc9d9984020e91e6b5a231b269effe5bcf9c1c70181e741bcfa045a9d" address="unix:///run/containerd/s/64114ec357a4a00a7b12db2c95c0fc580db4858383a331cb6a76d32485ac71f5" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:54:34.968664 systemd[1]: Started cri-containerd-218a4b7fc9d9984020e91e6b5a231b269effe5bcf9c1c70181e741bcfa045a9d.scope - libcontainer container 218a4b7fc9d9984020e91e6b5a231b269effe5bcf9c1c70181e741bcfa045a9d. 
Sep 12 22:54:35.067528 containerd[1570]: time="2025-09-12T22:54:35.067437537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6242t,Uid:c3faed44-9386-4bc3-be58-674a1c3072be,Namespace:kube-system,Attempt:0,} returns sandbox id \"218a4b7fc9d9984020e91e6b5a231b269effe5bcf9c1c70181e741bcfa045a9d\"" Sep 12 22:54:35.073809 kubelet[2772]: E0912 22:54:35.069746 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:35.075441 containerd[1570]: time="2025-09-12T22:54:35.075373151Z" level=info msg="CreateContainer within sandbox \"218a4b7fc9d9984020e91e6b5a231b269effe5bcf9c1c70181e741bcfa045a9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 22:54:35.531879 containerd[1570]: time="2025-09-12T22:54:35.531807836Z" level=info msg="connecting to shim 1bd2fa3d1286020d986f65e7f9e9a70b85c7b5b5073173f5abc7b1e96ad2a2dd" address="unix:///run/containerd/s/2b0859734c794def68810ed1260975e1b7ed875b08f5722737ceccf331e5360e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:54:35.562478 systemd[1]: Started cri-containerd-1bd2fa3d1286020d986f65e7f9e9a70b85c7b5b5073173f5abc7b1e96ad2a2dd.scope - libcontainer container 1bd2fa3d1286020d986f65e7f9e9a70b85c7b5b5073173f5abc7b1e96ad2a2dd. 
Sep 12 22:54:35.773768 containerd[1570]: time="2025-09-12T22:54:35.773696109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-bs9f5,Uid:edc997f4-11dd-4d53-ab90-2e7350453f71,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1bd2fa3d1286020d986f65e7f9e9a70b85c7b5b5073173f5abc7b1e96ad2a2dd\"" Sep 12 22:54:35.776138 containerd[1570]: time="2025-09-12T22:54:35.776077227Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 22:54:36.263682 containerd[1570]: time="2025-09-12T22:54:36.263581917Z" level=info msg="Container dba74dd801b9603fc8f663216adef9727f99b52fbdb62f5e5af0bb0e58d30f76: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:54:36.267174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189137464.mount: Deactivated successfully. Sep 12 22:54:36.388635 kubelet[2772]: E0912 22:54:36.388517 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:36.468550 kubelet[2772]: E0912 22:54:36.467529 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:36.651057 containerd[1570]: time="2025-09-12T22:54:36.650989800Z" level=info msg="CreateContainer within sandbox \"218a4b7fc9d9984020e91e6b5a231b269effe5bcf9c1c70181e741bcfa045a9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dba74dd801b9603fc8f663216adef9727f99b52fbdb62f5e5af0bb0e58d30f76\"" Sep 12 22:54:36.651956 containerd[1570]: time="2025-09-12T22:54:36.651917575Z" level=info msg="StartContainer for \"dba74dd801b9603fc8f663216adef9727f99b52fbdb62f5e5af0bb0e58d30f76\"" Sep 12 22:54:36.654006 containerd[1570]: time="2025-09-12T22:54:36.653956850Z" level=info msg="connecting to shim dba74dd801b9603fc8f663216adef9727f99b52fbdb62f5e5af0bb0e58d30f76" 
address="unix:///run/containerd/s/64114ec357a4a00a7b12db2c95c0fc580db4858383a331cb6a76d32485ac71f5" protocol=ttrpc version=3 Sep 12 22:54:36.677410 kubelet[2772]: E0912 22:54:36.677363 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:36.680478 systemd[1]: Started cri-containerd-dba74dd801b9603fc8f663216adef9727f99b52fbdb62f5e5af0bb0e58d30f76.scope - libcontainer container dba74dd801b9603fc8f663216adef9727f99b52fbdb62f5e5af0bb0e58d30f76. Sep 12 22:54:36.785780 containerd[1570]: time="2025-09-12T22:54:36.785721848Z" level=info msg="StartContainer for \"dba74dd801b9603fc8f663216adef9727f99b52fbdb62f5e5af0bb0e58d30f76\" returns successfully" Sep 12 22:54:37.473766 kubelet[2772]: E0912 22:54:37.472528 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:37.473766 kubelet[2772]: E0912 22:54:37.472653 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:38.374026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1152606421.mount: Deactivated successfully. 
Sep 12 22:54:38.475840 kubelet[2772]: E0912 22:54:38.475805 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:38.797119 containerd[1570]: time="2025-09-12T22:54:38.797028615Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:38.798202 containerd[1570]: time="2025-09-12T22:54:38.798163959Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 22:54:38.800157 containerd[1570]: time="2025-09-12T22:54:38.800065082Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:38.802928 containerd[1570]: time="2025-09-12T22:54:38.802872679Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:38.803611 containerd[1570]: time="2025-09-12T22:54:38.803556164Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.027427912s" Sep 12 22:54:38.803661 containerd[1570]: time="2025-09-12T22:54:38.803611960Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 22:54:38.806652 containerd[1570]: time="2025-09-12T22:54:38.806597861Z" level=info msg="CreateContainer within sandbox 
\"1bd2fa3d1286020d986f65e7f9e9a70b85c7b5b5073173f5abc7b1e96ad2a2dd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 22:54:38.819210 containerd[1570]: time="2025-09-12T22:54:38.819133255Z" level=info msg="Container 192744fed22216204b5983926b96df980722b9c8094e0397c51cdd604b41f040: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:54:38.827295 containerd[1570]: time="2025-09-12T22:54:38.827209345Z" level=info msg="CreateContainer within sandbox \"1bd2fa3d1286020d986f65e7f9e9a70b85c7b5b5073173f5abc7b1e96ad2a2dd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"192744fed22216204b5983926b96df980722b9c8094e0397c51cdd604b41f040\"" Sep 12 22:54:38.827887 containerd[1570]: time="2025-09-12T22:54:38.827840802Z" level=info msg="StartContainer for \"192744fed22216204b5983926b96df980722b9c8094e0397c51cdd604b41f040\"" Sep 12 22:54:38.829052 containerd[1570]: time="2025-09-12T22:54:38.829005983Z" level=info msg="connecting to shim 192744fed22216204b5983926b96df980722b9c8094e0397c51cdd604b41f040" address="unix:///run/containerd/s/2b0859734c794def68810ed1260975e1b7ed875b08f5722737ceccf331e5360e" protocol=ttrpc version=3 Sep 12 22:54:38.907565 systemd[1]: Started cri-containerd-192744fed22216204b5983926b96df980722b9c8094e0397c51cdd604b41f040.scope - libcontainer container 192744fed22216204b5983926b96df980722b9c8094e0397c51cdd604b41f040. 
Sep 12 22:54:38.958155 containerd[1570]: time="2025-09-12T22:54:38.958107545Z" level=info msg="StartContainer for \"192744fed22216204b5983926b96df980722b9c8094e0397c51cdd604b41f040\" returns successfully" Sep 12 22:54:39.490516 kubelet[2772]: I0912 22:54:39.490429 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6242t" podStartSLOduration=6.490402168 podStartE2EDuration="6.490402168s" podCreationTimestamp="2025-09-12 22:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:54:37.531368916 +0000 UTC m=+9.202396256" watchObservedRunningTime="2025-09-12 22:54:39.490402168 +0000 UTC m=+11.161429508" Sep 12 22:54:44.729211 sudo[1811]: pam_unix(sudo:session): session closed for user root Sep 12 22:54:44.731745 sshd[1810]: Connection closed by 10.0.0.1 port 55952 Sep 12 22:54:44.732836 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Sep 12 22:54:44.746996 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:55952.service: Deactivated successfully. Sep 12 22:54:44.754307 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 22:54:44.754679 systemd[1]: session-9.scope: Consumed 7.294s CPU time, 225.8M memory peak. Sep 12 22:54:44.757186 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Sep 12 22:54:44.762063 systemd-logind[1550]: Removed session 9. 
Sep 12 22:54:48.445996 kubelet[2772]: I0912 22:54:48.445878 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-bs9f5" podStartSLOduration=12.416704868 podStartE2EDuration="15.445849497s" podCreationTimestamp="2025-09-12 22:54:33 +0000 UTC" firstStartedPulling="2025-09-12 22:54:35.775539736 +0000 UTC m=+7.446567066" lastFinishedPulling="2025-09-12 22:54:38.804684365 +0000 UTC m=+10.475711695" observedRunningTime="2025-09-12 22:54:39.490683447 +0000 UTC m=+11.161710777" watchObservedRunningTime="2025-09-12 22:54:48.445849497 +0000 UTC m=+20.116876837" Sep 12 22:54:48.461565 kubelet[2772]: I0912 22:54:48.461494 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjv2q\" (UniqueName: \"kubernetes.io/projected/a050dd1f-0f76-473d-8096-3d9409ed5ec6-kube-api-access-zjv2q\") pod \"calico-typha-658689bbc9-5tmfq\" (UID: \"a050dd1f-0f76-473d-8096-3d9409ed5ec6\") " pod="calico-system/calico-typha-658689bbc9-5tmfq" Sep 12 22:54:48.461565 kubelet[2772]: I0912 22:54:48.461571 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a050dd1f-0f76-473d-8096-3d9409ed5ec6-tigera-ca-bundle\") pod \"calico-typha-658689bbc9-5tmfq\" (UID: \"a050dd1f-0f76-473d-8096-3d9409ed5ec6\") " pod="calico-system/calico-typha-658689bbc9-5tmfq" Sep 12 22:54:48.461741 kubelet[2772]: I0912 22:54:48.461598 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a050dd1f-0f76-473d-8096-3d9409ed5ec6-typha-certs\") pod \"calico-typha-658689bbc9-5tmfq\" (UID: \"a050dd1f-0f76-473d-8096-3d9409ed5ec6\") " pod="calico-system/calico-typha-658689bbc9-5tmfq" Sep 12 22:54:48.463249 systemd[1]: Created slice kubepods-besteffort-poda050dd1f_0f76_473d_8096_3d9409ed5ec6.slice - libcontainer 
container kubepods-besteffort-poda050dd1f_0f76_473d_8096_3d9409ed5ec6.slice. Sep 12 22:54:48.769682 kubelet[2772]: E0912 22:54:48.769616 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:48.770416 containerd[1570]: time="2025-09-12T22:54:48.770360711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-658689bbc9-5tmfq,Uid:a050dd1f-0f76-473d-8096-3d9409ed5ec6,Namespace:calico-system,Attempt:0,}" Sep 12 22:54:48.877922 systemd[1]: Created slice kubepods-besteffort-pod0b1aafd4_ef70_4848_b9d2_da45001c09cc.slice - libcontainer container kubepods-besteffort-pod0b1aafd4_ef70_4848_b9d2_da45001c09cc.slice. Sep 12 22:54:48.887973 containerd[1570]: time="2025-09-12T22:54:48.887422771Z" level=info msg="connecting to shim a7e97f2127331f7045b85f7c805df2bc14ef8811d21991473930f081be6baa52" address="unix:///run/containerd/s/b6c1e9d6682a171cc3e36424a0df1609afc9ed0239d56af1c53efdb49a0c176a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:54:48.923441 systemd[1]: Started cri-containerd-a7e97f2127331f7045b85f7c805df2bc14ef8811d21991473930f081be6baa52.scope - libcontainer container a7e97f2127331f7045b85f7c805df2bc14ef8811d21991473930f081be6baa52. 
Sep 12 22:54:49.015494 containerd[1570]: time="2025-09-12T22:54:49.015378963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-658689bbc9-5tmfq,Uid:a050dd1f-0f76-473d-8096-3d9409ed5ec6,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7e97f2127331f7045b85f7c805df2bc14ef8811d21991473930f081be6baa52\"" Sep 12 22:54:49.016257 kubelet[2772]: E0912 22:54:49.016227 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:49.017066 containerd[1570]: time="2025-09-12T22:54:49.017033099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 22:54:49.065919 kubelet[2772]: I0912 22:54:49.065747 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0b1aafd4-ef70-4848-b9d2-da45001c09cc-node-certs\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.065919 kubelet[2772]: I0912 22:54:49.065813 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b1aafd4-ef70-4848-b9d2-da45001c09cc-xtables-lock\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.065919 kubelet[2772]: I0912 22:54:49.065843 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj4xt\" (UniqueName: \"kubernetes.io/projected/0b1aafd4-ef70-4848-b9d2-da45001c09cc-kube-api-access-wj4xt\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.065919 kubelet[2772]: I0912 22:54:49.065904 2772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b1aafd4-ef70-4848-b9d2-da45001c09cc-lib-modules\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.066186 kubelet[2772]: I0912 22:54:49.065942 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0b1aafd4-ef70-4848-b9d2-da45001c09cc-var-lib-calico\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.066186 kubelet[2772]: I0912 22:54:49.065964 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0b1aafd4-ef70-4848-b9d2-da45001c09cc-cni-net-dir\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.066186 kubelet[2772]: I0912 22:54:49.065985 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b1aafd4-ef70-4848-b9d2-da45001c09cc-tigera-ca-bundle\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.066186 kubelet[2772]: I0912 22:54:49.066006 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0b1aafd4-ef70-4848-b9d2-da45001c09cc-var-run-calico\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.066186 kubelet[2772]: I0912 22:54:49.066024 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0b1aafd4-ef70-4848-b9d2-da45001c09cc-cni-log-dir\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.066382 kubelet[2772]: I0912 22:54:49.066042 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0b1aafd4-ef70-4848-b9d2-da45001c09cc-flexvol-driver-host\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.066382 kubelet[2772]: I0912 22:54:49.066063 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0b1aafd4-ef70-4848-b9d2-da45001c09cc-cni-bin-dir\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.066382 kubelet[2772]: I0912 22:54:49.066088 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0b1aafd4-ef70-4848-b9d2-da45001c09cc-policysync\") pod \"calico-node-j2fbt\" (UID: \"0b1aafd4-ef70-4848-b9d2-da45001c09cc\") " pod="calico-system/calico-node-j2fbt" Sep 12 22:54:49.106670 kubelet[2772]: E0912 22:54:49.106600 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxg4j" podUID="ab10c388-eebf-432c-927b-a19629315019" Sep 12 22:54:49.177295 kubelet[2772]: E0912 22:54:49.176991 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.177295 kubelet[2772]: W0912 22:54:49.177024 
2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.177295 kubelet[2772]: E0912 22:54:49.177050 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.181326 kubelet[2772]: E0912 22:54:49.181257 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.181326 kubelet[2772]: W0912 22:54:49.181314 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.181326 kubelet[2772]: E0912 22:54:49.181334 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.269237 kubelet[2772]: E0912 22:54:49.268946 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.269237 kubelet[2772]: W0912 22:54:49.268973 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.269237 kubelet[2772]: E0912 22:54:49.268997 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.269237 kubelet[2772]: I0912 22:54:49.269021 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab10c388-eebf-432c-927b-a19629315019-kubelet-dir\") pod \"csi-node-driver-nxg4j\" (UID: \"ab10c388-eebf-432c-927b-a19629315019\") " pod="calico-system/csi-node-driver-nxg4j" Sep 12 22:54:49.271600 kubelet[2772]: E0912 22:54:49.271540 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.271600 kubelet[2772]: W0912 22:54:49.271576 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.271782 kubelet[2772]: E0912 22:54:49.271619 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.271985 kubelet[2772]: E0912 22:54:49.271961 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.271985 kubelet[2772]: W0912 22:54:49.271977 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.272163 kubelet[2772]: I0912 22:54:49.272001 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95f6\" (UniqueName: \"kubernetes.io/projected/ab10c388-eebf-432c-927b-a19629315019-kube-api-access-j95f6\") pod \"csi-node-driver-nxg4j\" (UID: \"ab10c388-eebf-432c-927b-a19629315019\") " pod="calico-system/csi-node-driver-nxg4j" Sep 12 22:54:49.272163 kubelet[2772]: E0912 22:54:49.272101 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.272431 kubelet[2772]: E0912 22:54:49.272397 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.272431 kubelet[2772]: W0912 22:54:49.272408 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.272431 kubelet[2772]: E0912 22:54:49.272424 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.272770 kubelet[2772]: E0912 22:54:49.272742 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.272770 kubelet[2772]: W0912 22:54:49.272766 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.272907 kubelet[2772]: E0912 22:54:49.272790 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.272979 kubelet[2772]: E0912 22:54:49.272954 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.272979 kubelet[2772]: W0912 22:54:49.272966 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.272979 kubelet[2772]: E0912 22:54:49.272974 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.273105 kubelet[2772]: I0912 22:54:49.273037 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ab10c388-eebf-432c-927b-a19629315019-registration-dir\") pod \"csi-node-driver-nxg4j\" (UID: \"ab10c388-eebf-432c-927b-a19629315019\") " pod="calico-system/csi-node-driver-nxg4j" Sep 12 22:54:49.273217 kubelet[2772]: E0912 22:54:49.273197 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.273217 kubelet[2772]: W0912 22:54:49.273209 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.273365 kubelet[2772]: E0912 22:54:49.273223 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.273550 kubelet[2772]: E0912 22:54:49.273484 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.273550 kubelet[2772]: W0912 22:54:49.273503 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.273550 kubelet[2772]: E0912 22:54:49.273524 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.273550 kubelet[2772]: I0912 22:54:49.273558 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ab10c388-eebf-432c-927b-a19629315019-socket-dir\") pod \"csi-node-driver-nxg4j\" (UID: \"ab10c388-eebf-432c-927b-a19629315019\") " pod="calico-system/csi-node-driver-nxg4j" Sep 12 22:54:49.273778 kubelet[2772]: E0912 22:54:49.273711 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.273778 kubelet[2772]: W0912 22:54:49.273721 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.273778 kubelet[2772]: E0912 22:54:49.273730 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.274667 kubelet[2772]: E0912 22:54:49.273975 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.274667 kubelet[2772]: W0912 22:54:49.274035 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.274667 kubelet[2772]: E0912 22:54:49.274226 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.274667 kubelet[2772]: E0912 22:54:49.274417 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.274667 kubelet[2772]: W0912 22:54:49.274457 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.274667 kubelet[2772]: E0912 22:54:49.274478 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.274667 kubelet[2772]: I0912 22:54:49.274503 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ab10c388-eebf-432c-927b-a19629315019-varrun\") pod \"csi-node-driver-nxg4j\" (UID: \"ab10c388-eebf-432c-927b-a19629315019\") " pod="calico-system/csi-node-driver-nxg4j" Sep 12 22:54:49.274858 kubelet[2772]: E0912 22:54:49.274765 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.274858 kubelet[2772]: W0912 22:54:49.274789 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.274858 kubelet[2772]: E0912 22:54:49.274815 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.275201 kubelet[2772]: E0912 22:54:49.275159 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.275201 kubelet[2772]: W0912 22:54:49.275175 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.275201 kubelet[2772]: E0912 22:54:49.275211 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.275644 kubelet[2772]: E0912 22:54:49.275581 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.275707 kubelet[2772]: W0912 22:54:49.275644 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.275707 kubelet[2772]: E0912 22:54:49.275659 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.276139 kubelet[2772]: E0912 22:54:49.276107 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.276139 kubelet[2772]: W0912 22:54:49.276137 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.276202 kubelet[2772]: E0912 22:54:49.276153 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.376899 kubelet[2772]: E0912 22:54:49.376726 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.376899 kubelet[2772]: W0912 22:54:49.376770 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.376899 kubelet[2772]: E0912 22:54:49.376809 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.378626 kubelet[2772]: E0912 22:54:49.378602 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.378626 kubelet[2772]: W0912 22:54:49.378617 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.378725 kubelet[2772]: E0912 22:54:49.378637 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.379583 kubelet[2772]: E0912 22:54:49.379312 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.379583 kubelet[2772]: W0912 22:54:49.379344 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.379583 kubelet[2772]: E0912 22:54:49.379390 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.380025 kubelet[2772]: E0912 22:54:49.380000 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.380025 kubelet[2772]: W0912 22:54:49.380017 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.380148 kubelet[2772]: E0912 22:54:49.380054 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.381006 kubelet[2772]: E0912 22:54:49.380365 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.381006 kubelet[2772]: W0912 22:54:49.380379 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.381006 kubelet[2772]: E0912 22:54:49.380518 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.381006 kubelet[2772]: E0912 22:54:49.380652 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.381006 kubelet[2772]: W0912 22:54:49.380662 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.381006 kubelet[2772]: E0912 22:54:49.380709 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.381006 kubelet[2772]: E0912 22:54:49.380869 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.381444 kubelet[2772]: W0912 22:54:49.381021 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.381444 kubelet[2772]: E0912 22:54:49.381332 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.381444 kubelet[2772]: E0912 22:54:49.381432 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.381444 kubelet[2772]: W0912 22:54:49.381444 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.381583 kubelet[2772]: E0912 22:54:49.381464 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.381743 kubelet[2772]: E0912 22:54:49.381714 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.381743 kubelet[2772]: W0912 22:54:49.381739 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.381840 kubelet[2772]: E0912 22:54:49.381758 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.382085 kubelet[2772]: E0912 22:54:49.382047 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.382085 kubelet[2772]: W0912 22:54:49.382068 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.382188 kubelet[2772]: E0912 22:54:49.382089 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.382526 kubelet[2772]: E0912 22:54:49.382506 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.382526 kubelet[2772]: W0912 22:54:49.382519 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.382648 kubelet[2772]: E0912 22:54:49.382536 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.382842 kubelet[2772]: E0912 22:54:49.382808 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.382842 kubelet[2772]: W0912 22:54:49.382822 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.383026 kubelet[2772]: E0912 22:54:49.382927 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.383026 kubelet[2772]: E0912 22:54:49.382993 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.383026 kubelet[2772]: W0912 22:54:49.383005 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.383188 kubelet[2772]: E0912 22:54:49.383167 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.383299 kubelet[2772]: E0912 22:54:49.383256 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.383299 kubelet[2772]: W0912 22:54:49.383293 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.383475 kubelet[2772]: E0912 22:54:49.383422 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.383682 kubelet[2772]: E0912 22:54:49.383663 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.383682 kubelet[2772]: W0912 22:54:49.383677 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.383802 kubelet[2772]: E0912 22:54:49.383755 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.383924 kubelet[2772]: E0912 22:54:49.383904 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.383924 kubelet[2772]: W0912 22:54:49.383918 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.384034 kubelet[2772]: E0912 22:54:49.383937 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.384213 kubelet[2772]: E0912 22:54:49.384191 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.384213 kubelet[2772]: W0912 22:54:49.384206 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.384336 kubelet[2772]: E0912 22:54:49.384219 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.384555 kubelet[2772]: E0912 22:54:49.384526 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.384555 kubelet[2772]: W0912 22:54:49.384544 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.384613 kubelet[2772]: E0912 22:54:49.384599 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.384769 kubelet[2772]: E0912 22:54:49.384748 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.384769 kubelet[2772]: W0912 22:54:49.384763 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.384949 kubelet[2772]: E0912 22:54:49.384871 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.384949 kubelet[2772]: E0912 22:54:49.384926 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.384949 kubelet[2772]: W0912 22:54:49.384937 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.385257 kubelet[2772]: E0912 22:54:49.385214 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.385257 kubelet[2772]: W0912 22:54:49.385240 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.385257 kubelet[2772]: E0912 22:54:49.385274 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.385408 kubelet[2772]: E0912 22:54:49.385288 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.385591 kubelet[2772]: E0912 22:54:49.385573 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.385634 kubelet[2772]: W0912 22:54:49.385601 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.385634 kubelet[2772]: E0912 22:54:49.385641 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.385858 kubelet[2772]: E0912 22:54:49.385844 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.385858 kubelet[2772]: W0912 22:54:49.385856 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.385929 kubelet[2772]: E0912 22:54:49.385879 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.386459 kubelet[2772]: E0912 22:54:49.386420 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.386459 kubelet[2772]: W0912 22:54:49.386432 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.386573 kubelet[2772]: E0912 22:54:49.386472 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.386888 kubelet[2772]: E0912 22:54:49.386869 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.386888 kubelet[2772]: W0912 22:54:49.386885 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.386987 kubelet[2772]: E0912 22:54:49.386896 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:49.399224 kubelet[2772]: E0912 22:54:49.399169 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:49.399224 kubelet[2772]: W0912 22:54:49.399202 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:49.399224 kubelet[2772]: E0912 22:54:49.399232 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:49.481927 containerd[1570]: time="2025-09-12T22:54:49.481868144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j2fbt,Uid:0b1aafd4-ef70-4848-b9d2-da45001c09cc,Namespace:calico-system,Attempt:0,}" Sep 12 22:54:49.509571 containerd[1570]: time="2025-09-12T22:54:49.509485495Z" level=info msg="connecting to shim 7f5d99d17c56cf216b31ba8b04c8cbff3e6dd6c0938ee68a227c3711ec1b3ea6" address="unix:///run/containerd/s/0feae42623b6deae018cbf65797d0f2ec42751b191d79b1f168a8c1100215d4d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:54:49.555662 systemd[1]: Started cri-containerd-7f5d99d17c56cf216b31ba8b04c8cbff3e6dd6c0938ee68a227c3711ec1b3ea6.scope - libcontainer container 7f5d99d17c56cf216b31ba8b04c8cbff3e6dd6c0938ee68a227c3711ec1b3ea6. 
Sep 12 22:54:49.595741 containerd[1570]: time="2025-09-12T22:54:49.595691226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j2fbt,Uid:0b1aafd4-ef70-4848-b9d2-da45001c09cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f5d99d17c56cf216b31ba8b04c8cbff3e6dd6c0938ee68a227c3711ec1b3ea6\"" Sep 12 22:54:50.436239 kubelet[2772]: E0912 22:54:50.436144 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxg4j" podUID="ab10c388-eebf-432c-927b-a19629315019" Sep 12 22:54:50.581701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287435393.mount: Deactivated successfully. Sep 12 22:54:52.435936 kubelet[2772]: E0912 22:54:52.435853 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxg4j" podUID="ab10c388-eebf-432c-927b-a19629315019" Sep 12 22:54:52.453377 containerd[1570]: time="2025-09-12T22:54:52.453297240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:52.454318 containerd[1570]: time="2025-09-12T22:54:52.454254547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 22:54:52.455984 containerd[1570]: time="2025-09-12T22:54:52.455944449Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:52.458888 containerd[1570]: time="2025-09-12T22:54:52.458848481Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:52.459450 containerd[1570]: time="2025-09-12T22:54:52.459401509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.442330549s" Sep 12 22:54:52.459450 containerd[1570]: time="2025-09-12T22:54:52.459437997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 22:54:52.460554 containerd[1570]: time="2025-09-12T22:54:52.460510591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 22:54:52.475697 containerd[1570]: time="2025-09-12T22:54:52.475639649Z" level=info msg="CreateContainer within sandbox \"a7e97f2127331f7045b85f7c805df2bc14ef8811d21991473930f081be6baa52\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 22:54:52.509305 containerd[1570]: time="2025-09-12T22:54:52.509224882Z" level=info msg="Container f2f3c1e9a708a87ed20933cfa56b4606b284438bfb3de9ce9974c1f8f58f4f78: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:54:52.549875 containerd[1570]: time="2025-09-12T22:54:52.549803133Z" level=info msg="CreateContainer within sandbox \"a7e97f2127331f7045b85f7c805df2bc14ef8811d21991473930f081be6baa52\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f2f3c1e9a708a87ed20933cfa56b4606b284438bfb3de9ce9974c1f8f58f4f78\"" Sep 12 22:54:52.550646 containerd[1570]: time="2025-09-12T22:54:52.550520319Z" level=info msg="StartContainer for 
\"f2f3c1e9a708a87ed20933cfa56b4606b284438bfb3de9ce9974c1f8f58f4f78\"" Sep 12 22:54:52.551645 containerd[1570]: time="2025-09-12T22:54:52.551615605Z" level=info msg="connecting to shim f2f3c1e9a708a87ed20933cfa56b4606b284438bfb3de9ce9974c1f8f58f4f78" address="unix:///run/containerd/s/b6c1e9d6682a171cc3e36424a0df1609afc9ed0239d56af1c53efdb49a0c176a" protocol=ttrpc version=3 Sep 12 22:54:52.577602 systemd[1]: Started cri-containerd-f2f3c1e9a708a87ed20933cfa56b4606b284438bfb3de9ce9974c1f8f58f4f78.scope - libcontainer container f2f3c1e9a708a87ed20933cfa56b4606b284438bfb3de9ce9974c1f8f58f4f78. Sep 12 22:54:52.643548 containerd[1570]: time="2025-09-12T22:54:52.643489173Z" level=info msg="StartContainer for \"f2f3c1e9a708a87ed20933cfa56b4606b284438bfb3de9ce9974c1f8f58f4f78\" returns successfully" Sep 12 22:54:53.518493 kubelet[2772]: E0912 22:54:53.518450 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:53.528145 kubelet[2772]: I0912 22:54:53.527996 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-658689bbc9-5tmfq" podStartSLOduration=2.084303603 podStartE2EDuration="5.527976972s" podCreationTimestamp="2025-09-12 22:54:48 +0000 UTC" firstStartedPulling="2025-09-12 22:54:49.016729068 +0000 UTC m=+20.687756398" lastFinishedPulling="2025-09-12 22:54:52.460402437 +0000 UTC m=+24.131429767" observedRunningTime="2025-09-12 22:54:53.527830728 +0000 UTC m=+25.198858078" watchObservedRunningTime="2025-09-12 22:54:53.527976972 +0000 UTC m=+25.199004302" Sep 12 22:54:53.605301 kubelet[2772]: E0912 22:54:53.605204 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.605301 kubelet[2772]: W0912 22:54:53.605236 2772 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.605301 kubelet[2772]: E0912 22:54:53.605284 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.605631 kubelet[2772]: E0912 22:54:53.605612 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.605631 kubelet[2772]: W0912 22:54:53.605628 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.605695 kubelet[2772]: E0912 22:54:53.605640 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.605883 kubelet[2772]: E0912 22:54:53.605846 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.605883 kubelet[2772]: W0912 22:54:53.605861 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.605883 kubelet[2772]: E0912 22:54:53.605872 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.606087 kubelet[2772]: E0912 22:54:53.606070 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.606087 kubelet[2772]: W0912 22:54:53.606083 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.606149 kubelet[2772]: E0912 22:54:53.606094 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.606315 kubelet[2772]: E0912 22:54:53.606297 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.606315 kubelet[2772]: W0912 22:54:53.606312 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.606374 kubelet[2772]: E0912 22:54:53.606324 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.606544 kubelet[2772]: E0912 22:54:53.606527 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.606544 kubelet[2772]: W0912 22:54:53.606541 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.606600 kubelet[2772]: E0912 22:54:53.606552 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.606761 kubelet[2772]: E0912 22:54:53.606743 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.606761 kubelet[2772]: W0912 22:54:53.606757 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.606817 kubelet[2772]: E0912 22:54:53.606769 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.607071 kubelet[2772]: E0912 22:54:53.607038 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.607071 kubelet[2772]: W0912 22:54:53.607060 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.607071 kubelet[2772]: E0912 22:54:53.607070 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.607309 kubelet[2772]: E0912 22:54:53.607291 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.607309 kubelet[2772]: W0912 22:54:53.607306 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.607381 kubelet[2772]: E0912 22:54:53.607318 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.607556 kubelet[2772]: E0912 22:54:53.607538 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.607556 kubelet[2772]: W0912 22:54:53.607553 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.607610 kubelet[2772]: E0912 22:54:53.607563 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.607767 kubelet[2772]: E0912 22:54:53.607750 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.607767 kubelet[2772]: W0912 22:54:53.607764 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.607809 kubelet[2772]: E0912 22:54:53.607774 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.607969 kubelet[2772]: E0912 22:54:53.607954 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.607969 kubelet[2772]: W0912 22:54:53.607967 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.608037 kubelet[2772]: E0912 22:54:53.607977 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.608195 kubelet[2772]: E0912 22:54:53.608178 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.608195 kubelet[2772]: W0912 22:54:53.608192 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.608244 kubelet[2772]: E0912 22:54:53.608202 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.608433 kubelet[2772]: E0912 22:54:53.608396 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.608433 kubelet[2772]: W0912 22:54:53.608409 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.608433 kubelet[2772]: E0912 22:54:53.608419 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.608605 kubelet[2772]: E0912 22:54:53.608589 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.608605 kubelet[2772]: W0912 22:54:53.608602 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.608660 kubelet[2772]: E0912 22:54:53.608613 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.611055 kubelet[2772]: E0912 22:54:53.611022 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.611055 kubelet[2772]: W0912 22:54:53.611041 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.611055 kubelet[2772]: E0912 22:54:53.611052 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.611316 kubelet[2772]: E0912 22:54:53.611298 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.611316 kubelet[2772]: W0912 22:54:53.611312 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.611376 kubelet[2772]: E0912 22:54:53.611330 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.611736 kubelet[2772]: E0912 22:54:53.611695 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.611736 kubelet[2772]: W0912 22:54:53.611733 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.611816 kubelet[2772]: E0912 22:54:53.611765 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.611974 kubelet[2772]: E0912 22:54:53.611959 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.611974 kubelet[2772]: W0912 22:54:53.611970 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.612045 kubelet[2772]: E0912 22:54:53.611983 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.612203 kubelet[2772]: E0912 22:54:53.612171 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.612203 kubelet[2772]: W0912 22:54:53.612184 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.612203 kubelet[2772]: E0912 22:54:53.612198 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.612671 kubelet[2772]: E0912 22:54:53.612414 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.612671 kubelet[2772]: W0912 22:54:53.612423 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.612671 kubelet[2772]: E0912 22:54:53.612436 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.612795 kubelet[2772]: E0912 22:54:53.612742 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.612795 kubelet[2772]: W0912 22:54:53.612760 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.612795 kubelet[2772]: E0912 22:54:53.612788 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.613052 kubelet[2772]: E0912 22:54:53.613031 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.613052 kubelet[2772]: W0912 22:54:53.613047 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.613125 kubelet[2772]: E0912 22:54:53.613084 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.613368 kubelet[2772]: E0912 22:54:53.613341 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.613368 kubelet[2772]: W0912 22:54:53.613358 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.613487 kubelet[2772]: E0912 22:54:53.613444 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.613628 kubelet[2772]: E0912 22:54:53.613609 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.613628 kubelet[2772]: W0912 22:54:53.613624 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.613717 kubelet[2772]: E0912 22:54:53.613643 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.613914 kubelet[2772]: E0912 22:54:53.613882 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.613914 kubelet[2772]: W0912 22:54:53.613898 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.613914 kubelet[2772]: E0912 22:54:53.613918 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.614168 kubelet[2772]: E0912 22:54:53.614150 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.614168 kubelet[2772]: W0912 22:54:53.614166 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.614218 kubelet[2772]: E0912 22:54:53.614182 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.614579 kubelet[2772]: E0912 22:54:53.614547 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.614579 kubelet[2772]: W0912 22:54:53.614564 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.614684 kubelet[2772]: E0912 22:54:53.614598 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.614934 kubelet[2772]: E0912 22:54:53.614907 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.614934 kubelet[2772]: W0912 22:54:53.614923 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.615051 kubelet[2772]: E0912 22:54:53.614942 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.615228 kubelet[2772]: E0912 22:54:53.615184 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.615228 kubelet[2772]: W0912 22:54:53.615197 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.615228 kubelet[2772]: E0912 22:54:53.615226 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.615523 kubelet[2772]: E0912 22:54:53.615497 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.615523 kubelet[2772]: W0912 22:54:53.615522 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.615615 kubelet[2772]: E0912 22:54:53.615535 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.616064 kubelet[2772]: E0912 22:54:53.616031 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.616064 kubelet[2772]: W0912 22:54:53.616059 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.616184 kubelet[2772]: E0912 22:54:53.616091 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 22:54:53.616350 kubelet[2772]: E0912 22:54:53.616330 2772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 22:54:53.616350 kubelet[2772]: W0912 22:54:53.616342 2772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 22:54:53.616350 kubelet[2772]: E0912 22:54:53.616351 2772 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 22:54:53.867899 containerd[1570]: time="2025-09-12T22:54:53.867763815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:53.868957 containerd[1570]: time="2025-09-12T22:54:53.868911569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 22:54:53.870710 containerd[1570]: time="2025-09-12T22:54:53.870645293Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:53.872830 containerd[1570]: time="2025-09-12T22:54:53.872783827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:54:53.873543 containerd[1570]: time="2025-09-12T22:54:53.873482037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.412935479s" Sep 12 22:54:53.873543 containerd[1570]: time="2025-09-12T22:54:53.873525629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 22:54:53.875845 containerd[1570]: time="2025-09-12T22:54:53.875800139Z" level=info msg="CreateContainer within sandbox \"7f5d99d17c56cf216b31ba8b04c8cbff3e6dd6c0938ee68a227c3711ec1b3ea6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 22:54:53.886223 containerd[1570]: time="2025-09-12T22:54:53.886153652Z" level=info msg="Container 4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:54:53.898916 containerd[1570]: time="2025-09-12T22:54:53.898841688Z" level=info msg="CreateContainer within sandbox \"7f5d99d17c56cf216b31ba8b04c8cbff3e6dd6c0938ee68a227c3711ec1b3ea6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be\"" Sep 12 22:54:53.899581 containerd[1570]: time="2025-09-12T22:54:53.899529600Z" level=info msg="StartContainer for \"4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be\"" Sep 12 22:54:53.901184 containerd[1570]: time="2025-09-12T22:54:53.901151223Z" level=info msg="connecting to shim 4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be" address="unix:///run/containerd/s/0feae42623b6deae018cbf65797d0f2ec42751b191d79b1f168a8c1100215d4d" protocol=ttrpc version=3 Sep 12 22:54:53.929513 systemd[1]: Started cri-containerd-4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be.scope - libcontainer container 4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be. Sep 12 22:54:53.988764 systemd[1]: cri-containerd-4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be.scope: Deactivated successfully. 
Sep 12 22:54:53.990583 containerd[1570]: time="2025-09-12T22:54:53.990549945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be\" id:\"4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be\" pid:3427 exited_at:{seconds:1757717693 nanos:990042272}" Sep 12 22:54:54.007079 containerd[1570]: time="2025-09-12T22:54:54.006976077Z" level=info msg="received exit event container_id:\"4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be\" id:\"4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be\" pid:3427 exited_at:{seconds:1757717693 nanos:990042272}" Sep 12 22:54:54.011858 containerd[1570]: time="2025-09-12T22:54:54.011740870Z" level=info msg="StartContainer for \"4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be\" returns successfully" Sep 12 22:54:54.041216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4aaf9faea02136c9cfcd3bec9f4358095cb3ec80270bc1168a44523f463a83be-rootfs.mount: Deactivated successfully. 
Sep 12 22:54:54.437080 kubelet[2772]: E0912 22:54:54.436932 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxg4j" podUID="ab10c388-eebf-432c-927b-a19629315019" Sep 12 22:54:54.522794 kubelet[2772]: E0912 22:54:54.522615 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:54.523594 containerd[1570]: time="2025-09-12T22:54:54.523394204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 22:54:55.524543 kubelet[2772]: E0912 22:54:55.524456 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:54:56.436424 kubelet[2772]: E0912 22:54:56.436333 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxg4j" podUID="ab10c388-eebf-432c-927b-a19629315019" Sep 12 22:54:58.445403 kubelet[2772]: E0912 22:54:58.444632 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxg4j" podUID="ab10c388-eebf-432c-927b-a19629315019" Sep 12 22:55:00.145642 containerd[1570]: time="2025-09-12T22:55:00.145558289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 
22:55:00.147165 containerd[1570]: time="2025-09-12T22:55:00.147114990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 22:55:00.151346 containerd[1570]: time="2025-09-12T22:55:00.151257253Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:00.154005 containerd[1570]: time="2025-09-12T22:55:00.153952069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:00.154771 containerd[1570]: time="2025-09-12T22:55:00.154724558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.631278748s" Sep 12 22:55:00.154771 containerd[1570]: time="2025-09-12T22:55:00.154761378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 22:55:00.157150 containerd[1570]: time="2025-09-12T22:55:00.157096380Z" level=info msg="CreateContainer within sandbox \"7f5d99d17c56cf216b31ba8b04c8cbff3e6dd6c0938ee68a227c3711ec1b3ea6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 22:55:00.170722 containerd[1570]: time="2025-09-12T22:55:00.170197222Z" level=info msg="Container 7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:00.185752 containerd[1570]: time="2025-09-12T22:55:00.185684823Z" level=info msg="CreateContainer within sandbox 
\"7f5d99d17c56cf216b31ba8b04c8cbff3e6dd6c0938ee68a227c3711ec1b3ea6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62\"" Sep 12 22:55:00.186405 containerd[1570]: time="2025-09-12T22:55:00.186350802Z" level=info msg="StartContainer for \"7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62\"" Sep 12 22:55:00.188470 containerd[1570]: time="2025-09-12T22:55:00.188440454Z" level=info msg="connecting to shim 7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62" address="unix:///run/containerd/s/0feae42623b6deae018cbf65797d0f2ec42751b191d79b1f168a8c1100215d4d" protocol=ttrpc version=3 Sep 12 22:55:00.225581 systemd[1]: Started cri-containerd-7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62.scope - libcontainer container 7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62. Sep 12 22:55:00.278459 containerd[1570]: time="2025-09-12T22:55:00.278373674Z" level=info msg="StartContainer for \"7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62\" returns successfully" Sep 12 22:55:00.436920 kubelet[2772]: E0912 22:55:00.436671 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nxg4j" podUID="ab10c388-eebf-432c-927b-a19629315019" Sep 12 22:55:01.487691 systemd[1]: cri-containerd-7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62.scope: Deactivated successfully. Sep 12 22:55:01.488131 systemd[1]: cri-containerd-7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62.scope: Consumed 738ms CPU time, 182.3M memory peak, 4.1M read from disk, 171.3M written to disk. 
Sep 12 22:55:01.488674 containerd[1570]: time="2025-09-12T22:55:01.488630497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62\" id:\"7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62\" pid:3487 exited_at:{seconds:1757717701 nanos:488326256}" Sep 12 22:55:01.488674 containerd[1570]: time="2025-09-12T22:55:01.488632460Z" level=info msg="received exit event container_id:\"7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62\" id:\"7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62\" pid:3487 exited_at:{seconds:1757717701 nanos:488326256}" Sep 12 22:55:01.513732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c2684d549a53ef848c2bcb88c0e37e500f0b80af0fca5d8689af3c398911f62-rootfs.mount: Deactivated successfully. Sep 12 22:55:01.587542 kubelet[2772]: I0912 22:55:01.587498 2772 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 22:55:01.721244 systemd[1]: Created slice kubepods-burstable-pod7f891f46_21f6_47ed_a3cd_51bb989207a3.slice - libcontainer container kubepods-burstable-pod7f891f46_21f6_47ed_a3cd_51bb989207a3.slice. Sep 12 22:55:01.731034 systemd[1]: Created slice kubepods-besteffort-podf208b672_cedd_4dca_8408_008a6df49113.slice - libcontainer container kubepods-besteffort-podf208b672_cedd_4dca_8408_008a6df49113.slice. Sep 12 22:55:01.739365 systemd[1]: Created slice kubepods-besteffort-pod86a73699_e6f2_448d_94a2_1a063ab9c5b3.slice - libcontainer container kubepods-besteffort-pod86a73699_e6f2_448d_94a2_1a063ab9c5b3.slice. Sep 12 22:55:01.747607 systemd[1]: Created slice kubepods-burstable-podd80c8427_d9fd_47c5_9ec6_d52eec68bfb1.slice - libcontainer container kubepods-burstable-podd80c8427_d9fd_47c5_9ec6_d52eec68bfb1.slice. 
Sep 12 22:55:01.754706 systemd[1]: Created slice kubepods-besteffort-pod61d0fc56_9ef5_4e48_adc3_53c35a13a60c.slice - libcontainer container kubepods-besteffort-pod61d0fc56_9ef5_4e48_adc3_53c35a13a60c.slice. Sep 12 22:55:01.762572 systemd[1]: Created slice kubepods-besteffort-pod77f1da1f_7be6_435c_a995_9d53554099dc.slice - libcontainer container kubepods-besteffort-pod77f1da1f_7be6_435c_a995_9d53554099dc.slice. Sep 12 22:55:01.769288 systemd[1]: Created slice kubepods-besteffort-pod4e009d42_d75c_4538_aede_350e51b801c4.slice - libcontainer container kubepods-besteffort-pod4e009d42_d75c_4538_aede_350e51b801c4.slice. Sep 12 22:55:01.774890 kubelet[2772]: I0912 22:55:01.774819 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/86a73699-e6f2-448d-94a2-1a063ab9c5b3-calico-apiserver-certs\") pod \"calico-apiserver-7cdcfcc5d6-clg8x\" (UID: \"86a73699-e6f2-448d-94a2-1a063ab9c5b3\") " pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-clg8x" Sep 12 22:55:01.775223 kubelet[2772]: I0912 22:55:01.775081 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhrcq\" (UniqueName: \"kubernetes.io/projected/86a73699-e6f2-448d-94a2-1a063ab9c5b3-kube-api-access-xhrcq\") pod \"calico-apiserver-7cdcfcc5d6-clg8x\" (UID: \"86a73699-e6f2-448d-94a2-1a063ab9c5b3\") " pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-clg8x" Sep 12 22:55:01.775223 kubelet[2772]: I0912 22:55:01.775168 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/77f1da1f-7be6-435c-a995-9d53554099dc-goldmane-key-pair\") pod \"goldmane-7988f88666-49m2v\" (UID: \"77f1da1f-7be6-435c-a995-9d53554099dc\") " pod="calico-system/goldmane-7988f88666-49m2v" Sep 12 22:55:01.775440 kubelet[2772]: I0912 22:55:01.775201 2772 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77f1da1f-7be6-435c-a995-9d53554099dc-goldmane-ca-bundle\") pod \"goldmane-7988f88666-49m2v\" (UID: \"77f1da1f-7be6-435c-a995-9d53554099dc\") " pod="calico-system/goldmane-7988f88666-49m2v" Sep 12 22:55:01.775530 kubelet[2772]: I0912 22:55:01.775419 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-whisker-backend-key-pair\") pod \"whisker-7cc689bb6c-6jv7s\" (UID: \"61d0fc56-9ef5-4e48-adc3-53c35a13a60c\") " pod="calico-system/whisker-7cc689bb6c-6jv7s" Sep 12 22:55:01.775693 kubelet[2772]: I0912 22:55:01.775672 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77f1da1f-7be6-435c-a995-9d53554099dc-config\") pod \"goldmane-7988f88666-49m2v\" (UID: \"77f1da1f-7be6-435c-a995-9d53554099dc\") " pod="calico-system/goldmane-7988f88666-49m2v" Sep 12 22:55:01.775884 kubelet[2772]: I0912 22:55:01.775838 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-whisker-ca-bundle\") pod \"whisker-7cc689bb6c-6jv7s\" (UID: \"61d0fc56-9ef5-4e48-adc3-53c35a13a60c\") " pod="calico-system/whisker-7cc689bb6c-6jv7s" Sep 12 22:55:01.776028 kubelet[2772]: I0912 22:55:01.775980 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnf5m\" (UniqueName: \"kubernetes.io/projected/77f1da1f-7be6-435c-a995-9d53554099dc-kube-api-access-mnf5m\") pod \"goldmane-7988f88666-49m2v\" (UID: \"77f1da1f-7be6-435c-a995-9d53554099dc\") " pod="calico-system/goldmane-7988f88666-49m2v" Sep 12 22:55:01.876693 kubelet[2772]: I0912 
22:55:01.876479 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zh6z\" (UniqueName: \"kubernetes.io/projected/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-kube-api-access-8zh6z\") pod \"whisker-7cc689bb6c-6jv7s\" (UID: \"61d0fc56-9ef5-4e48-adc3-53c35a13a60c\") " pod="calico-system/whisker-7cc689bb6c-6jv7s" Sep 12 22:55:01.876693 kubelet[2772]: I0912 22:55:01.876569 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w9sl\" (UniqueName: \"kubernetes.io/projected/4e009d42-d75c-4538-aede-350e51b801c4-kube-api-access-9w9sl\") pod \"calico-kube-controllers-7d58869f5c-5rkng\" (UID: \"4e009d42-d75c-4538-aede-350e51b801c4\") " pod="calico-system/calico-kube-controllers-7d58869f5c-5rkng" Sep 12 22:55:01.876693 kubelet[2772]: I0912 22:55:01.876597 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f891f46-21f6-47ed-a3cd-51bb989207a3-config-volume\") pod \"coredns-7c65d6cfc9-gqvfv\" (UID: \"7f891f46-21f6-47ed-a3cd-51bb989207a3\") " pod="kube-system/coredns-7c65d6cfc9-gqvfv" Sep 12 22:55:01.876693 kubelet[2772]: I0912 22:55:01.876617 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw65b\" (UniqueName: \"kubernetes.io/projected/7f891f46-21f6-47ed-a3cd-51bb989207a3-kube-api-access-nw65b\") pod \"coredns-7c65d6cfc9-gqvfv\" (UID: \"7f891f46-21f6-47ed-a3cd-51bb989207a3\") " pod="kube-system/coredns-7c65d6cfc9-gqvfv" Sep 12 22:55:01.876994 kubelet[2772]: I0912 22:55:01.876726 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d80c8427-d9fd-47c5-9ec6-d52eec68bfb1-config-volume\") pod \"coredns-7c65d6cfc9-fbmsj\" (UID: \"d80c8427-d9fd-47c5-9ec6-d52eec68bfb1\") " 
pod="kube-system/coredns-7c65d6cfc9-fbmsj" Sep 12 22:55:01.876994 kubelet[2772]: I0912 22:55:01.876827 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfrmh\" (UniqueName: \"kubernetes.io/projected/d80c8427-d9fd-47c5-9ec6-d52eec68bfb1-kube-api-access-hfrmh\") pod \"coredns-7c65d6cfc9-fbmsj\" (UID: \"d80c8427-d9fd-47c5-9ec6-d52eec68bfb1\") " pod="kube-system/coredns-7c65d6cfc9-fbmsj" Sep 12 22:55:01.876994 kubelet[2772]: I0912 22:55:01.876874 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f208b672-cedd-4dca-8408-008a6df49113-calico-apiserver-certs\") pod \"calico-apiserver-7cdcfcc5d6-v9xlm\" (UID: \"f208b672-cedd-4dca-8408-008a6df49113\") " pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-v9xlm" Sep 12 22:55:01.876994 kubelet[2772]: I0912 22:55:01.876896 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e009d42-d75c-4538-aede-350e51b801c4-tigera-ca-bundle\") pod \"calico-kube-controllers-7d58869f5c-5rkng\" (UID: \"4e009d42-d75c-4538-aede-350e51b801c4\") " pod="calico-system/calico-kube-controllers-7d58869f5c-5rkng" Sep 12 22:55:01.876994 kubelet[2772]: I0912 22:55:01.876965 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wh5z\" (UniqueName: \"kubernetes.io/projected/f208b672-cedd-4dca-8408-008a6df49113-kube-api-access-7wh5z\") pod \"calico-apiserver-7cdcfcc5d6-v9xlm\" (UID: \"f208b672-cedd-4dca-8408-008a6df49113\") " pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-v9xlm" Sep 12 22:55:02.027067 kubelet[2772]: E0912 22:55:02.026999 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 12 22:55:02.027814 containerd[1570]: time="2025-09-12T22:55:02.027689064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gqvfv,Uid:7f891f46-21f6-47ed-a3cd-51bb989207a3,Namespace:kube-system,Attempt:0,}" Sep 12 22:55:02.036900 containerd[1570]: time="2025-09-12T22:55:02.036842297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdcfcc5d6-v9xlm,Uid:f208b672-cedd-4dca-8408-008a6df49113,Namespace:calico-apiserver,Attempt:0,}" Sep 12 22:55:02.043419 containerd[1570]: time="2025-09-12T22:55:02.043375946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdcfcc5d6-clg8x,Uid:86a73699-e6f2-448d-94a2-1a063ab9c5b3,Namespace:calico-apiserver,Attempt:0,}" Sep 12 22:55:02.052005 kubelet[2772]: E0912 22:55:02.051949 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:02.052838 containerd[1570]: time="2025-09-12T22:55:02.052735408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fbmsj,Uid:d80c8427-d9fd-47c5-9ec6-d52eec68bfb1,Namespace:kube-system,Attempt:0,}" Sep 12 22:55:02.060305 containerd[1570]: time="2025-09-12T22:55:02.060193802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cc689bb6c-6jv7s,Uid:61d0fc56-9ef5-4e48-adc3-53c35a13a60c,Namespace:calico-system,Attempt:0,}" Sep 12 22:55:02.067354 containerd[1570]: time="2025-09-12T22:55:02.067223100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-49m2v,Uid:77f1da1f-7be6-435c-a995-9d53554099dc,Namespace:calico-system,Attempt:0,}" Sep 12 22:55:02.074450 containerd[1570]: time="2025-09-12T22:55:02.074389166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d58869f5c-5rkng,Uid:4e009d42-d75c-4538-aede-350e51b801c4,Namespace:calico-system,Attempt:0,}" Sep 12 22:55:02.296123 
containerd[1570]: time="2025-09-12T22:55:02.295690099Z" level=error msg="Failed to destroy network for sandbox \"843fbe6b5b87a08b76732049696556a64f7d61eca51ac70708a8a2852311d091\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.312473 containerd[1570]: time="2025-09-12T22:55:02.312394360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gqvfv,Uid:7f891f46-21f6-47ed-a3cd-51bb989207a3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"843fbe6b5b87a08b76732049696556a64f7d61eca51ac70708a8a2852311d091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.313840 kubelet[2772]: E0912 22:55:02.313730 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843fbe6b5b87a08b76732049696556a64f7d61eca51ac70708a8a2852311d091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.313964 kubelet[2772]: E0912 22:55:02.313904 2772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843fbe6b5b87a08b76732049696556a64f7d61eca51ac70708a8a2852311d091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gqvfv" Sep 12 22:55:02.314025 kubelet[2772]: E0912 22:55:02.313940 2772 kuberuntime_manager.go:1170] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843fbe6b5b87a08b76732049696556a64f7d61eca51ac70708a8a2852311d091\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gqvfv" Sep 12 22:55:02.314428 kubelet[2772]: E0912 22:55:02.314099 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gqvfv_kube-system(7f891f46-21f6-47ed-a3cd-51bb989207a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gqvfv_kube-system(7f891f46-21f6-47ed-a3cd-51bb989207a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"843fbe6b5b87a08b76732049696556a64f7d61eca51ac70708a8a2852311d091\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gqvfv" podUID="7f891f46-21f6-47ed-a3cd-51bb989207a3" Sep 12 22:55:02.393209 containerd[1570]: time="2025-09-12T22:55:02.392324031Z" level=error msg="Failed to destroy network for sandbox \"08644710c44408fcb64d0836cf0c6108799c30ae74a5a26b0d5db16dea98dc58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.395097 containerd[1570]: time="2025-09-12T22:55:02.394844089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d58869f5c-5rkng,Uid:4e009d42-d75c-4538-aede-350e51b801c4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08644710c44408fcb64d0836cf0c6108799c30ae74a5a26b0d5db16dea98dc58\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.395880 kubelet[2772]: E0912 22:55:02.395812 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08644710c44408fcb64d0836cf0c6108799c30ae74a5a26b0d5db16dea98dc58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.396072 kubelet[2772]: E0912 22:55:02.395979 2772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08644710c44408fcb64d0836cf0c6108799c30ae74a5a26b0d5db16dea98dc58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d58869f5c-5rkng" Sep 12 22:55:02.396342 kubelet[2772]: E0912 22:55:02.396005 2772 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08644710c44408fcb64d0836cf0c6108799c30ae74a5a26b0d5db16dea98dc58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d58869f5c-5rkng" Sep 12 22:55:02.396491 kubelet[2772]: E0912 22:55:02.396444 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d58869f5c-5rkng_calico-system(4e009d42-d75c-4538-aede-350e51b801c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7d58869f5c-5rkng_calico-system(4e009d42-d75c-4538-aede-350e51b801c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08644710c44408fcb64d0836cf0c6108799c30ae74a5a26b0d5db16dea98dc58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d58869f5c-5rkng" podUID="4e009d42-d75c-4538-aede-350e51b801c4" Sep 12 22:55:02.411902 containerd[1570]: time="2025-09-12T22:55:02.411840700Z" level=error msg="Failed to destroy network for sandbox \"65099fb70e411b596dc7cbd8a0821524206267cb38b2e15cfc63be6294367ef9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.413430 containerd[1570]: time="2025-09-12T22:55:02.413345374Z" level=error msg="Failed to destroy network for sandbox \"4628bd9abccb21c55f5221fd71ba0d7578b4a20e0ae0017f8f4654bd5a308c8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.414304 containerd[1570]: time="2025-09-12T22:55:02.414243077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fbmsj,Uid:d80c8427-d9fd-47c5-9ec6-d52eec68bfb1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"65099fb70e411b596dc7cbd8a0821524206267cb38b2e15cfc63be6294367ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.415065 kubelet[2772]: E0912 22:55:02.414930 2772 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65099fb70e411b596dc7cbd8a0821524206267cb38b2e15cfc63be6294367ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.415065 kubelet[2772]: E0912 22:55:02.415020 2772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65099fb70e411b596dc7cbd8a0821524206267cb38b2e15cfc63be6294367ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fbmsj" Sep 12 22:55:02.415065 kubelet[2772]: E0912 22:55:02.415052 2772 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65099fb70e411b596dc7cbd8a0821524206267cb38b2e15cfc63be6294367ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fbmsj" Sep 12 22:55:02.415256 kubelet[2772]: E0912 22:55:02.415115 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fbmsj_kube-system(d80c8427-d9fd-47c5-9ec6-d52eec68bfb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fbmsj_kube-system(d80c8427-d9fd-47c5-9ec6-d52eec68bfb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65099fb70e411b596dc7cbd8a0821524206267cb38b2e15cfc63be6294367ef9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fbmsj" podUID="d80c8427-d9fd-47c5-9ec6-d52eec68bfb1" Sep 12 22:55:02.417040 containerd[1570]: time="2025-09-12T22:55:02.415932618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdcfcc5d6-clg8x,Uid:86a73699-e6f2-448d-94a2-1a063ab9c5b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4628bd9abccb21c55f5221fd71ba0d7578b4a20e0ae0017f8f4654bd5a308c8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.417536 containerd[1570]: time="2025-09-12T22:55:02.416912035Z" level=error msg="Failed to destroy network for sandbox \"eea93cc4c6e258337cf64469d89391d5895c64dbd83ea6e1b876dfac48b39390\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.418499 kubelet[2772]: E0912 22:55:02.418446 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4628bd9abccb21c55f5221fd71ba0d7578b4a20e0ae0017f8f4654bd5a308c8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.418567 kubelet[2772]: E0912 22:55:02.418503 2772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4628bd9abccb21c55f5221fd71ba0d7578b4a20e0ae0017f8f4654bd5a308c8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-clg8x" Sep 12 22:55:02.418567 kubelet[2772]: E0912 22:55:02.418528 2772 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4628bd9abccb21c55f5221fd71ba0d7578b4a20e0ae0017f8f4654bd5a308c8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-clg8x" Sep 12 22:55:02.418630 kubelet[2772]: E0912 22:55:02.418583 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cdcfcc5d6-clg8x_calico-apiserver(86a73699-e6f2-448d-94a2-1a063ab9c5b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cdcfcc5d6-clg8x_calico-apiserver(86a73699-e6f2-448d-94a2-1a063ab9c5b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4628bd9abccb21c55f5221fd71ba0d7578b4a20e0ae0017f8f4654bd5a308c8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-clg8x" podUID="86a73699-e6f2-448d-94a2-1a063ab9c5b3" Sep 12 22:55:02.421427 containerd[1570]: time="2025-09-12T22:55:02.421371312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdcfcc5d6-v9xlm,Uid:f208b672-cedd-4dca-8408-008a6df49113,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea93cc4c6e258337cf64469d89391d5895c64dbd83ea6e1b876dfac48b39390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 
22:55:02.422909 kubelet[2772]: E0912 22:55:02.422230 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea93cc4c6e258337cf64469d89391d5895c64dbd83ea6e1b876dfac48b39390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.422909 kubelet[2772]: E0912 22:55:02.422342 2772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea93cc4c6e258337cf64469d89391d5895c64dbd83ea6e1b876dfac48b39390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-v9xlm" Sep 12 22:55:02.422909 kubelet[2772]: E0912 22:55:02.422370 2772 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea93cc4c6e258337cf64469d89391d5895c64dbd83ea6e1b876dfac48b39390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-v9xlm" Sep 12 22:55:02.423071 kubelet[2772]: E0912 22:55:02.422482 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cdcfcc5d6-v9xlm_calico-apiserver(f208b672-cedd-4dca-8408-008a6df49113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cdcfcc5d6-v9xlm_calico-apiserver(f208b672-cedd-4dca-8408-008a6df49113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eea93cc4c6e258337cf64469d89391d5895c64dbd83ea6e1b876dfac48b39390\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-v9xlm" podUID="f208b672-cedd-4dca-8408-008a6df49113" Sep 12 22:55:02.425091 containerd[1570]: time="2025-09-12T22:55:02.424938905Z" level=error msg="Failed to destroy network for sandbox \"67b3f7db107bc0f60f54c8c044d1e6c0f770e8bea6770126d403922a0fc9c3a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.426424 containerd[1570]: time="2025-09-12T22:55:02.426350625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-49m2v,Uid:77f1da1f-7be6-435c-a995-9d53554099dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67b3f7db107bc0f60f54c8c044d1e6c0f770e8bea6770126d403922a0fc9c3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.427611 kubelet[2772]: E0912 22:55:02.427490 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67b3f7db107bc0f60f54c8c044d1e6c0f770e8bea6770126d403922a0fc9c3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.427792 kubelet[2772]: E0912 22:55:02.427711 2772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67b3f7db107bc0f60f54c8c044d1e6c0f770e8bea6770126d403922a0fc9c3a8\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-49m2v" Sep 12 22:55:02.427792 kubelet[2772]: E0912 22:55:02.427737 2772 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67b3f7db107bc0f60f54c8c044d1e6c0f770e8bea6770126d403922a0fc9c3a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-49m2v" Sep 12 22:55:02.427992 kubelet[2772]: E0912 22:55:02.427946 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-49m2v_calico-system(77f1da1f-7be6-435c-a995-9d53554099dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-49m2v_calico-system(77f1da1f-7be6-435c-a995-9d53554099dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67b3f7db107bc0f60f54c8c044d1e6c0f770e8bea6770126d403922a0fc9c3a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-49m2v" podUID="77f1da1f-7be6-435c-a995-9d53554099dc" Sep 12 22:55:02.434826 containerd[1570]: time="2025-09-12T22:55:02.434760343Z" level=error msg="Failed to destroy network for sandbox \"148390cf6eaff1fe39f20d58e2fccd2799709c3d60bbf3e4f72876ae3701c8f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.436339 containerd[1570]: time="2025-09-12T22:55:02.436227497Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7cc689bb6c-6jv7s,Uid:61d0fc56-9ef5-4e48-adc3-53c35a13a60c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"148390cf6eaff1fe39f20d58e2fccd2799709c3d60bbf3e4f72876ae3701c8f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.436998 kubelet[2772]: E0912 22:55:02.436934 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148390cf6eaff1fe39f20d58e2fccd2799709c3d60bbf3e4f72876ae3701c8f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.436998 kubelet[2772]: E0912 22:55:02.436990 2772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148390cf6eaff1fe39f20d58e2fccd2799709c3d60bbf3e4f72876ae3701c8f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cc689bb6c-6jv7s" Sep 12 22:55:02.437143 kubelet[2772]: E0912 22:55:02.437013 2772 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148390cf6eaff1fe39f20d58e2fccd2799709c3d60bbf3e4f72876ae3701c8f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cc689bb6c-6jv7s" Sep 12 22:55:02.437143 kubelet[2772]: E0912 22:55:02.437054 2772 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7cc689bb6c-6jv7s_calico-system(61d0fc56-9ef5-4e48-adc3-53c35a13a60c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7cc689bb6c-6jv7s_calico-system(61d0fc56-9ef5-4e48-adc3-53c35a13a60c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"148390cf6eaff1fe39f20d58e2fccd2799709c3d60bbf3e4f72876ae3701c8f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cc689bb6c-6jv7s" podUID="61d0fc56-9ef5-4e48-adc3-53c35a13a60c" Sep 12 22:55:02.447088 systemd[1]: Created slice kubepods-besteffort-podab10c388_eebf_432c_927b_a19629315019.slice - libcontainer container kubepods-besteffort-podab10c388_eebf_432c_927b_a19629315019.slice. Sep 12 22:55:02.450490 containerd[1570]: time="2025-09-12T22:55:02.450447958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxg4j,Uid:ab10c388-eebf-432c-927b-a19629315019,Namespace:calico-system,Attempt:0,}" Sep 12 22:55:02.512030 containerd[1570]: time="2025-09-12T22:55:02.511860255Z" level=error msg="Failed to destroy network for sandbox \"c155e4372b0c2cb973fc8d76a65d4104cd9eb91d7759c9d6ca54d11953dcb022\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.517316 containerd[1570]: time="2025-09-12T22:55:02.517256760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxg4j,Uid:ab10c388-eebf-432c-927b-a19629315019,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c155e4372b0c2cb973fc8d76a65d4104cd9eb91d7759c9d6ca54d11953dcb022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.517775 kubelet[2772]: E0912 22:55:02.517701 2772 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c155e4372b0c2cb973fc8d76a65d4104cd9eb91d7759c9d6ca54d11953dcb022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 22:55:02.517854 kubelet[2772]: E0912 22:55:02.517809 2772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c155e4372b0c2cb973fc8d76a65d4104cd9eb91d7759c9d6ca54d11953dcb022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nxg4j" Sep 12 22:55:02.517854 kubelet[2772]: E0912 22:55:02.517831 2772 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c155e4372b0c2cb973fc8d76a65d4104cd9eb91d7759c9d6ca54d11953dcb022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nxg4j" Sep 12 22:55:02.517933 kubelet[2772]: E0912 22:55:02.517876 2772 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nxg4j_calico-system(ab10c388-eebf-432c-927b-a19629315019)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nxg4j_calico-system(ab10c388-eebf-432c-927b-a19629315019)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c155e4372b0c2cb973fc8d76a65d4104cd9eb91d7759c9d6ca54d11953dcb022\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nxg4j" podUID="ab10c388-eebf-432c-927b-a19629315019" Sep 12 22:55:02.550178 containerd[1570]: time="2025-09-12T22:55:02.549715251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 22:55:09.907415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399679991.mount: Deactivated successfully. Sep 12 22:55:11.521879 kubelet[2772]: E0912 22:55:11.521805 2772 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.086s" Sep 12 22:55:11.568514 containerd[1570]: time="2025-09-12T22:55:11.568428668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:11.572094 containerd[1570]: time="2025-09-12T22:55:11.571993388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 22:55:11.583563 containerd[1570]: time="2025-09-12T22:55:11.583490614Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:11.584220 containerd[1570]: time="2025-09-12T22:55:11.584181692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.034389716s" Sep 12 22:55:11.584280 containerd[1570]: time="2025-09-12T22:55:11.584220317Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 22:55:11.584799 containerd[1570]: time="2025-09-12T22:55:11.584757387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:11.594207 containerd[1570]: time="2025-09-12T22:55:11.594150250Z" level=info msg="CreateContainer within sandbox \"7f5d99d17c56cf216b31ba8b04c8cbff3e6dd6c0938ee68a227c3711ec1b3ea6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 22:55:11.620297 containerd[1570]: time="2025-09-12T22:55:11.618826317Z" level=info msg="Container 7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:11.633110 containerd[1570]: time="2025-09-12T22:55:11.633042127Z" level=info msg="CreateContainer within sandbox \"7f5d99d17c56cf216b31ba8b04c8cbff3e6dd6c0938ee68a227c3711ec1b3ea6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6\"" Sep 12 22:55:11.633996 containerd[1570]: time="2025-09-12T22:55:11.633702715Z" level=info msg="StartContainer for \"7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6\"" Sep 12 22:55:11.635662 containerd[1570]: time="2025-09-12T22:55:11.635612172Z" level=info msg="connecting to shim 7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6" address="unix:///run/containerd/s/0feae42623b6deae018cbf65797d0f2ec42751b191d79b1f168a8c1100215d4d" protocol=ttrpc version=3 Sep 12 22:55:11.667638 systemd[1]: Started cri-containerd-7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6.scope - libcontainer container 7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6. 
Sep 12 22:55:11.734578 containerd[1570]: time="2025-09-12T22:55:11.734500011Z" level=info msg="StartContainer for \"7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6\" returns successfully" Sep 12 22:55:11.814107 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 22:55:11.815524 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 22:55:11.943135 kubelet[2772]: I0912 22:55:11.942585 2772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-whisker-ca-bundle\") pod \"61d0fc56-9ef5-4e48-adc3-53c35a13a60c\" (UID: \"61d0fc56-9ef5-4e48-adc3-53c35a13a60c\") " Sep 12 22:55:11.943135 kubelet[2772]: I0912 22:55:11.942634 2772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zh6z\" (UniqueName: \"kubernetes.io/projected/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-kube-api-access-8zh6z\") pod \"61d0fc56-9ef5-4e48-adc3-53c35a13a60c\" (UID: \"61d0fc56-9ef5-4e48-adc3-53c35a13a60c\") " Sep 12 22:55:11.943135 kubelet[2772]: I0912 22:55:11.942667 2772 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-whisker-backend-key-pair\") pod \"61d0fc56-9ef5-4e48-adc3-53c35a13a60c\" (UID: \"61d0fc56-9ef5-4e48-adc3-53c35a13a60c\") " Sep 12 22:55:11.944249 kubelet[2772]: I0912 22:55:11.943946 2772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "61d0fc56-9ef5-4e48-adc3-53c35a13a60c" (UID: "61d0fc56-9ef5-4e48-adc3-53c35a13a60c"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 22:55:11.949382 kubelet[2772]: I0912 22:55:11.949297 2772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-kube-api-access-8zh6z" (OuterVolumeSpecName: "kube-api-access-8zh6z") pod "61d0fc56-9ef5-4e48-adc3-53c35a13a60c" (UID: "61d0fc56-9ef5-4e48-adc3-53c35a13a60c"). InnerVolumeSpecName "kube-api-access-8zh6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 22:55:11.950076 kubelet[2772]: I0912 22:55:11.949954 2772 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "61d0fc56-9ef5-4e48-adc3-53c35a13a60c" (UID: "61d0fc56-9ef5-4e48-adc3-53c35a13a60c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 22:55:12.043502 kubelet[2772]: I0912 22:55:12.043354 2772 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 22:55:12.043502 kubelet[2772]: I0912 22:55:12.043497 2772 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zh6z\" (UniqueName: \"kubernetes.io/projected/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-kube-api-access-8zh6z\") on node \"localhost\" DevicePath \"\"" Sep 12 22:55:12.043502 kubelet[2772]: I0912 22:55:12.043512 2772 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61d0fc56-9ef5-4e48-adc3-53c35a13a60c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 22:55:12.445552 systemd[1]: Removed slice kubepods-besteffort-pod61d0fc56_9ef5_4e48_adc3_53c35a13a60c.slice - libcontainer container 
kubepods-besteffort-pod61d0fc56_9ef5_4e48_adc3_53c35a13a60c.slice. Sep 12 22:55:12.592032 systemd[1]: var-lib-kubelet-pods-61d0fc56\x2d9ef5\x2d4e48\x2dadc3\x2d53c35a13a60c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8zh6z.mount: Deactivated successfully. Sep 12 22:55:12.592193 systemd[1]: var-lib-kubelet-pods-61d0fc56\x2d9ef5\x2d4e48\x2dadc3\x2d53c35a13a60c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 22:55:12.614650 kubelet[2772]: I0912 22:55:12.614559 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j2fbt" podStartSLOduration=2.62719245 podStartE2EDuration="24.614537475s" podCreationTimestamp="2025-09-12 22:54:48 +0000 UTC" firstStartedPulling="2025-09-12 22:54:49.597523457 +0000 UTC m=+21.268550787" lastFinishedPulling="2025-09-12 22:55:11.584868472 +0000 UTC m=+43.255895812" observedRunningTime="2025-09-12 22:55:12.605016812 +0000 UTC m=+44.276044152" watchObservedRunningTime="2025-09-12 22:55:12.614537475 +0000 UTC m=+44.285564805" Sep 12 22:55:12.655286 systemd[1]: Created slice kubepods-besteffort-pod38c39c19_e3f3_43e1_971d_0c3c5e52b302.slice - libcontainer container kubepods-besteffort-pod38c39c19_e3f3_43e1_971d_0c3c5e52b302.slice. 
Sep 12 22:55:12.748875 kubelet[2772]: I0912 22:55:12.748645 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38c39c19-e3f3-43e1-971d-0c3c5e52b302-whisker-backend-key-pair\") pod \"whisker-bd89899db-787v5\" (UID: \"38c39c19-e3f3-43e1-971d-0c3c5e52b302\") " pod="calico-system/whisker-bd89899db-787v5" Sep 12 22:55:12.748875 kubelet[2772]: I0912 22:55:12.748714 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38c39c19-e3f3-43e1-971d-0c3c5e52b302-whisker-ca-bundle\") pod \"whisker-bd89899db-787v5\" (UID: \"38c39c19-e3f3-43e1-971d-0c3c5e52b302\") " pod="calico-system/whisker-bd89899db-787v5" Sep 12 22:55:12.748875 kubelet[2772]: I0912 22:55:12.748771 2772 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdqss\" (UniqueName: \"kubernetes.io/projected/38c39c19-e3f3-43e1-971d-0c3c5e52b302-kube-api-access-rdqss\") pod \"whisker-bd89899db-787v5\" (UID: \"38c39c19-e3f3-43e1-971d-0c3c5e52b302\") " pod="calico-system/whisker-bd89899db-787v5" Sep 12 22:55:12.960824 containerd[1570]: time="2025-09-12T22:55:12.960756731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd89899db-787v5,Uid:38c39c19-e3f3-43e1-971d-0c3c5e52b302,Namespace:calico-system,Attempt:0,}" Sep 12 22:55:13.112570 systemd-networkd[1471]: cali2267d195540: Link UP Sep 12 22:55:13.112865 systemd-networkd[1471]: cali2267d195540: Gained carrier Sep 12 22:55:13.141003 containerd[1570]: 2025-09-12 22:55:12.985 [INFO][3863] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 22:55:13.141003 containerd[1570]: 2025-09-12 22:55:13.003 [INFO][3863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--bd89899db--787v5-eth0 
whisker-bd89899db- calico-system 38c39c19-e3f3-43e1-971d-0c3c5e52b302 901 0 2025-09-12 22:55:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bd89899db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-bd89899db-787v5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2267d195540 [] [] }} ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Namespace="calico-system" Pod="whisker-bd89899db-787v5" WorkloadEndpoint="localhost-k8s-whisker--bd89899db--787v5-" Sep 12 22:55:13.141003 containerd[1570]: 2025-09-12 22:55:13.003 [INFO][3863] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Namespace="calico-system" Pod="whisker-bd89899db-787v5" WorkloadEndpoint="localhost-k8s-whisker--bd89899db--787v5-eth0" Sep 12 22:55:13.141003 containerd[1570]: 2025-09-12 22:55:13.067 [INFO][3876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" HandleID="k8s-pod-network.a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Workload="localhost-k8s-whisker--bd89899db--787v5-eth0" Sep 12 22:55:13.141331 containerd[1570]: 2025-09-12 22:55:13.068 [INFO][3876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" HandleID="k8s-pod-network.a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Workload="localhost-k8s-whisker--bd89899db--787v5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000e55e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-bd89899db-787v5", "timestamp":"2025-09-12 22:55:13.067771186 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:55:13.141331 containerd[1570]: 2025-09-12 22:55:13.068 [INFO][3876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:55:13.141331 containerd[1570]: 2025-09-12 22:55:13.068 [INFO][3876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 22:55:13.141331 containerd[1570]: 2025-09-12 22:55:13.069 [INFO][3876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 22:55:13.141331 containerd[1570]: 2025-09-12 22:55:13.076 [INFO][3876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" host="localhost" Sep 12 22:55:13.141331 containerd[1570]: 2025-09-12 22:55:13.082 [INFO][3876] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 22:55:13.141331 containerd[1570]: 2025-09-12 22:55:13.087 [INFO][3876] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 22:55:13.141331 containerd[1570]: 2025-09-12 22:55:13.089 [INFO][3876] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 22:55:13.141331 containerd[1570]: 2025-09-12 22:55:13.091 [INFO][3876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 22:55:13.141331 containerd[1570]: 2025-09-12 22:55:13.091 [INFO][3876] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" host="localhost" Sep 12 22:55:13.141637 containerd[1570]: 2025-09-12 22:55:13.092 [INFO][3876] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0 Sep 12 22:55:13.141637 
containerd[1570]: 2025-09-12 22:55:13.095 [INFO][3876] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" host="localhost" Sep 12 22:55:13.141637 containerd[1570]: 2025-09-12 22:55:13.101 [INFO][3876] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" host="localhost" Sep 12 22:55:13.141637 containerd[1570]: 2025-09-12 22:55:13.101 [INFO][3876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" host="localhost" Sep 12 22:55:13.141637 containerd[1570]: 2025-09-12 22:55:13.101 [INFO][3876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:55:13.141637 containerd[1570]: 2025-09-12 22:55:13.101 [INFO][3876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" HandleID="k8s-pod-network.a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Workload="localhost-k8s-whisker--bd89899db--787v5-eth0" Sep 12 22:55:13.141804 containerd[1570]: 2025-09-12 22:55:13.104 [INFO][3863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Namespace="calico-system" Pod="whisker-bd89899db-787v5" WorkloadEndpoint="localhost-k8s-whisker--bd89899db--787v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bd89899db--787v5-eth0", GenerateName:"whisker-bd89899db-", Namespace:"calico-system", SelfLink:"", UID:"38c39c19-e3f3-43e1-971d-0c3c5e52b302", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, 
time.September, 12, 22, 55, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bd89899db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-bd89899db-787v5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2267d195540", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:55:13.141804 containerd[1570]: 2025-09-12 22:55:13.104 [INFO][3863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Namespace="calico-system" Pod="whisker-bd89899db-787v5" WorkloadEndpoint="localhost-k8s-whisker--bd89899db--787v5-eth0" Sep 12 22:55:13.141929 containerd[1570]: 2025-09-12 22:55:13.104 [INFO][3863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2267d195540 ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Namespace="calico-system" Pod="whisker-bd89899db-787v5" WorkloadEndpoint="localhost-k8s-whisker--bd89899db--787v5-eth0" Sep 12 22:55:13.141929 containerd[1570]: 2025-09-12 22:55:13.113 [INFO][3863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Namespace="calico-system" Pod="whisker-bd89899db-787v5" 
WorkloadEndpoint="localhost-k8s-whisker--bd89899db--787v5-eth0" Sep 12 22:55:13.142000 containerd[1570]: 2025-09-12 22:55:13.113 [INFO][3863] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Namespace="calico-system" Pod="whisker-bd89899db-787v5" WorkloadEndpoint="localhost-k8s-whisker--bd89899db--787v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bd89899db--787v5-eth0", GenerateName:"whisker-bd89899db-", Namespace:"calico-system", SelfLink:"", UID:"38c39c19-e3f3-43e1-971d-0c3c5e52b302", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 55, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bd89899db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0", Pod:"whisker-bd89899db-787v5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2267d195540", MAC:"e6:9c:b0:af:54:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:55:13.142069 containerd[1570]: 2025-09-12 22:55:13.132 [INFO][3863] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" Namespace="calico-system" Pod="whisker-bd89899db-787v5" WorkloadEndpoint="localhost-k8s-whisker--bd89899db--787v5-eth0" Sep 12 22:55:13.437454 containerd[1570]: time="2025-09-12T22:55:13.437288815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d58869f5c-5rkng,Uid:4e009d42-d75c-4538-aede-350e51b801c4,Namespace:calico-system,Attempt:0,}" Sep 12 22:55:13.639762 systemd-networkd[1471]: calic8d8e24461e: Link UP Sep 12 22:55:13.640007 systemd-networkd[1471]: calic8d8e24461e: Gained carrier Sep 12 22:55:13.656964 containerd[1570]: 2025-09-12 22:55:13.539 [INFO][4024] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0 calico-kube-controllers-7d58869f5c- calico-system 4e009d42-d75c-4538-aede-350e51b801c4 831 0 2025-09-12 22:54:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d58869f5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7d58869f5c-5rkng eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic8d8e24461e [] [] }} ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Namespace="calico-system" Pod="calico-kube-controllers-7d58869f5c-5rkng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-" Sep 12 22:55:13.656964 containerd[1570]: 2025-09-12 22:55:13.540 [INFO][4024] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Namespace="calico-system" Pod="calico-kube-controllers-7d58869f5c-5rkng" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0" Sep 12 22:55:13.656964 containerd[1570]: 2025-09-12 22:55:13.586 [INFO][4038] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" HandleID="k8s-pod-network.a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Workload="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0" Sep 12 22:55:13.657306 containerd[1570]: 2025-09-12 22:55:13.586 [INFO][4038] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" HandleID="k8s-pod-network.a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Workload="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7d58869f5c-5rkng", "timestamp":"2025-09-12 22:55:13.586027422 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:55:13.657306 containerd[1570]: 2025-09-12 22:55:13.586 [INFO][4038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:55:13.657306 containerd[1570]: 2025-09-12 22:55:13.586 [INFO][4038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:55:13.657306 containerd[1570]: 2025-09-12 22:55:13.586 [INFO][4038] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 22:55:13.657306 containerd[1570]: 2025-09-12 22:55:13.598 [INFO][4038] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" host="localhost" Sep 12 22:55:13.657306 containerd[1570]: 2025-09-12 22:55:13.603 [INFO][4038] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 22:55:13.657306 containerd[1570]: 2025-09-12 22:55:13.611 [INFO][4038] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 22:55:13.657306 containerd[1570]: 2025-09-12 22:55:13.613 [INFO][4038] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 22:55:13.657306 containerd[1570]: 2025-09-12 22:55:13.615 [INFO][4038] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 22:55:13.657306 containerd[1570]: 2025-09-12 22:55:13.615 [INFO][4038] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" host="localhost" Sep 12 22:55:13.657621 containerd[1570]: 2025-09-12 22:55:13.617 [INFO][4038] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048 Sep 12 22:55:13.657621 containerd[1570]: 2025-09-12 22:55:13.623 [INFO][4038] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" host="localhost" Sep 12 22:55:13.657621 containerd[1570]: 2025-09-12 22:55:13.630 [INFO][4038] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" host="localhost" Sep 12 22:55:13.657621 containerd[1570]: 2025-09-12 22:55:13.630 [INFO][4038] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" host="localhost" Sep 12 22:55:13.657621 containerd[1570]: 2025-09-12 22:55:13.630 [INFO][4038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:55:13.657621 containerd[1570]: 2025-09-12 22:55:13.630 [INFO][4038] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" HandleID="k8s-pod-network.a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Workload="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0" Sep 12 22:55:13.657803 containerd[1570]: 2025-09-12 22:55:13.635 [INFO][4024] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Namespace="calico-system" Pod="calico-kube-controllers-7d58869f5c-5rkng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0", GenerateName:"calico-kube-controllers-7d58869f5c-", Namespace:"calico-system", SelfLink:"", UID:"4e009d42-d75c-4538-aede-350e51b801c4", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d58869f5c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7d58869f5c-5rkng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic8d8e24461e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:55:13.657883 containerd[1570]: 2025-09-12 22:55:13.635 [INFO][4024] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Namespace="calico-system" Pod="calico-kube-controllers-7d58869f5c-5rkng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0" Sep 12 22:55:13.657883 containerd[1570]: 2025-09-12 22:55:13.635 [INFO][4024] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8d8e24461e ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Namespace="calico-system" Pod="calico-kube-controllers-7d58869f5c-5rkng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0" Sep 12 22:55:13.657883 containerd[1570]: 2025-09-12 22:55:13.638 [INFO][4024] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Namespace="calico-system" Pod="calico-kube-controllers-7d58869f5c-5rkng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0" Sep 12 22:55:13.658006 containerd[1570]: 
2025-09-12 22:55:13.639 [INFO][4024] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Namespace="calico-system" Pod="calico-kube-controllers-7d58869f5c-5rkng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0", GenerateName:"calico-kube-controllers-7d58869f5c-", Namespace:"calico-system", SelfLink:"", UID:"4e009d42-d75c-4538-aede-350e51b801c4", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d58869f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048", Pod:"calico-kube-controllers-7d58869f5c-5rkng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic8d8e24461e", MAC:"2a:a2:ee:44:90:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:55:13.658086 
containerd[1570]: 2025-09-12 22:55:13.652 [INFO][4024] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" Namespace="calico-system" Pod="calico-kube-controllers-7d58869f5c-5rkng" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58869f5c--5rkng-eth0" Sep 12 22:55:13.709479 containerd[1570]: time="2025-09-12T22:55:13.708087863Z" level=info msg="connecting to shim a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0" address="unix:///run/containerd/s/6b0f379fe054d55929acb1dad6b25e37b85462e7300a1a2cc74b0d50d25e20a3" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:55:13.718148 systemd-networkd[1471]: vxlan.calico: Link UP Sep 12 22:55:13.718162 systemd-networkd[1471]: vxlan.calico: Gained carrier Sep 12 22:55:13.744535 containerd[1570]: time="2025-09-12T22:55:13.744489160Z" level=info msg="connecting to shim a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048" address="unix:///run/containerd/s/9a0562a94248533923bcb2c33472eda3be9f50661bc983b27d02c824e7988487" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:55:13.776440 systemd[1]: Started cri-containerd-a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0.scope - libcontainer container a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0. Sep 12 22:55:13.804166 systemd[1]: Started cri-containerd-a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048.scope - libcontainer container a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048. 
Sep 12 22:55:13.811321 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:55:13.821704 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:55:13.980656 containerd[1570]: time="2025-09-12T22:55:13.980491420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd89899db-787v5,Uid:38c39c19-e3f3-43e1-971d-0c3c5e52b302,Namespace:calico-system,Attempt:0,} returns sandbox id \"a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0\"" Sep 12 22:55:13.982645 containerd[1570]: time="2025-09-12T22:55:13.982469464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 22:55:14.013856 containerd[1570]: time="2025-09-12T22:55:14.013790533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d58869f5c-5rkng,Uid:4e009d42-d75c-4538-aede-350e51b801c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048\"" Sep 12 22:55:14.148416 systemd-networkd[1471]: cali2267d195540: Gained IPv6LL Sep 12 22:55:14.437301 containerd[1570]: time="2025-09-12T22:55:14.436982366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdcfcc5d6-clg8x,Uid:86a73699-e6f2-448d-94a2-1a063ab9c5b3,Namespace:calico-apiserver,Attempt:0,}" Sep 12 22:55:14.438965 kubelet[2772]: I0912 22:55:14.438884 2772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61d0fc56-9ef5-4e48-adc3-53c35a13a60c" path="/var/lib/kubelet/pods/61d0fc56-9ef5-4e48-adc3-53c35a13a60c/volumes" Sep 12 22:55:14.554349 systemd-networkd[1471]: cali07464103bb1: Link UP Sep 12 22:55:14.555189 systemd-networkd[1471]: cali07464103bb1: Gained carrier Sep 12 22:55:14.573695 containerd[1570]: 2025-09-12 22:55:14.485 [INFO][4218] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0 calico-apiserver-7cdcfcc5d6- calico-apiserver 86a73699-e6f2-448d-94a2-1a063ab9c5b3 830 0 2025-09-12 22:54:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cdcfcc5d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cdcfcc5d6-clg8x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07464103bb1 [] [] }} ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-clg8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-" Sep 12 22:55:14.573695 containerd[1570]: 2025-09-12 22:55:14.486 [INFO][4218] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-clg8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0" Sep 12 22:55:14.573695 containerd[1570]: 2025-09-12 22:55:14.514 [INFO][4233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" HandleID="k8s-pod-network.1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Workload="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0" Sep 12 22:55:14.573955 containerd[1570]: 2025-09-12 22:55:14.514 [INFO][4233] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" HandleID="k8s-pod-network.1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Workload="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00004e4b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cdcfcc5d6-clg8x", "timestamp":"2025-09-12 22:55:14.514526927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:55:14.573955 containerd[1570]: 2025-09-12 22:55:14.514 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:55:14.573955 containerd[1570]: 2025-09-12 22:55:14.514 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 22:55:14.573955 containerd[1570]: 2025-09-12 22:55:14.515 [INFO][4233] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 22:55:14.573955 containerd[1570]: 2025-09-12 22:55:14.522 [INFO][4233] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" host="localhost" Sep 12 22:55:14.573955 containerd[1570]: 2025-09-12 22:55:14.527 [INFO][4233] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 22:55:14.573955 containerd[1570]: 2025-09-12 22:55:14.531 [INFO][4233] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 22:55:14.573955 containerd[1570]: 2025-09-12 22:55:14.533 [INFO][4233] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 22:55:14.573955 containerd[1570]: 2025-09-12 22:55:14.535 [INFO][4233] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 22:55:14.573955 containerd[1570]: 2025-09-12 22:55:14.536 [INFO][4233] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" host="localhost" Sep 12 22:55:14.574190 containerd[1570]: 2025-09-12 22:55:14.537 [INFO][4233] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1 Sep 12 22:55:14.574190 containerd[1570]: 2025-09-12 22:55:14.541 [INFO][4233] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" host="localhost" Sep 12 22:55:14.574190 containerd[1570]: 2025-09-12 22:55:14.547 [INFO][4233] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" host="localhost" Sep 12 22:55:14.574190 containerd[1570]: 2025-09-12 22:55:14.547 [INFO][4233] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" host="localhost" Sep 12 22:55:14.574190 containerd[1570]: 2025-09-12 22:55:14.547 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 22:55:14.574190 containerd[1570]: 2025-09-12 22:55:14.547 [INFO][4233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" HandleID="k8s-pod-network.1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Workload="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0" Sep 12 22:55:14.574675 containerd[1570]: 2025-09-12 22:55:14.551 [INFO][4218] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-clg8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0", GenerateName:"calico-apiserver-7cdcfcc5d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"86a73699-e6f2-448d-94a2-1a063ab9c5b3", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdcfcc5d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cdcfcc5d6-clg8x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07464103bb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:55:14.574770 containerd[1570]: 2025-09-12 22:55:14.551 [INFO][4218] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-clg8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0" Sep 12 22:55:14.574770 containerd[1570]: 2025-09-12 22:55:14.551 [INFO][4218] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07464103bb1 ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-clg8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0" Sep 12 22:55:14.574770 containerd[1570]: 2025-09-12 22:55:14.555 [INFO][4218] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-clg8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0" Sep 12 22:55:14.574923 containerd[1570]: 2025-09-12 22:55:14.555 [INFO][4218] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-clg8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0", 
GenerateName:"calico-apiserver-7cdcfcc5d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"86a73699-e6f2-448d-94a2-1a063ab9c5b3", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdcfcc5d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1", Pod:"calico-apiserver-7cdcfcc5d6-clg8x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07464103bb1", MAC:"5e:3c:29:ef:de:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:55:14.574994 containerd[1570]: 2025-09-12 22:55:14.568 [INFO][4218] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-clg8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--clg8x-eth0" Sep 12 22:55:14.613568 containerd[1570]: time="2025-09-12T22:55:14.613513031Z" level=info msg="connecting to shim 1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1" 
address="unix:///run/containerd/s/de56b504b086eb1592ed50731297d6c95138bd67378893fee18a18755db0de68" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:55:14.649489 systemd[1]: Started cri-containerd-1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1.scope - libcontainer container 1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1. Sep 12 22:55:14.666215 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:55:14.703370 containerd[1570]: time="2025-09-12T22:55:14.703189718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdcfcc5d6-clg8x,Uid:86a73699-e6f2-448d-94a2-1a063ab9c5b3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1\"" Sep 12 22:55:14.788511 systemd-networkd[1471]: calic8d8e24461e: Gained IPv6LL Sep 12 22:55:15.364539 systemd-networkd[1471]: vxlan.calico: Gained IPv6LL Sep 12 22:55:15.437448 containerd[1570]: time="2025-09-12T22:55:15.437347558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxg4j,Uid:ab10c388-eebf-432c-927b-a19629315019,Namespace:calico-system,Attempt:0,}" Sep 12 22:55:15.940618 systemd-networkd[1471]: cali07464103bb1: Gained IPv6LL Sep 12 22:55:16.109197 systemd-networkd[1471]: cali39d27931be7: Link UP Sep 12 22:55:16.109803 systemd-networkd[1471]: cali39d27931be7: Gained carrier Sep 12 22:55:16.184710 containerd[1570]: 2025-09-12 22:55:15.986 [INFO][4298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nxg4j-eth0 csi-node-driver- calico-system ab10c388-eebf-432c-927b-a19629315019 706 0 2025-09-12 22:54:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nxg4j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali39d27931be7 [] [] }} ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Namespace="calico-system" Pod="csi-node-driver-nxg4j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxg4j-"
Sep 12 22:55:16.184710 containerd[1570]: 2025-09-12 22:55:15.986 [INFO][4298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Namespace="calico-system" Pod="csi-node-driver-nxg4j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxg4j-eth0"
Sep 12 22:55:16.184710 containerd[1570]: 2025-09-12 22:55:16.013 [INFO][4314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" HandleID="k8s-pod-network.4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Workload="localhost-k8s-csi--node--driver--nxg4j-eth0"
Sep 12 22:55:16.185031 containerd[1570]: 2025-09-12 22:55:16.013 [INFO][4314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" HandleID="k8s-pod-network.4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Workload="localhost-k8s-csi--node--driver--nxg4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nxg4j", "timestamp":"2025-09-12 22:55:16.01379616 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 22:55:16.185031 containerd[1570]: 2025-09-12 22:55:16.014 [INFO][4314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 22:55:16.185031 containerd[1570]: 2025-09-12 22:55:16.014 [INFO][4314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 22:55:16.185031 containerd[1570]: 2025-09-12 22:55:16.014 [INFO][4314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 22:55:16.185031 containerd[1570]: 2025-09-12 22:55:16.020 [INFO][4314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" host="localhost"
Sep 12 22:55:16.185031 containerd[1570]: 2025-09-12 22:55:16.025 [INFO][4314] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 22:55:16.185031 containerd[1570]: 2025-09-12 22:55:16.029 [INFO][4314] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 22:55:16.185031 containerd[1570]: 2025-09-12 22:55:16.031 [INFO][4314] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 22:55:16.185031 containerd[1570]: 2025-09-12 22:55:16.033 [INFO][4314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 22:55:16.185031 containerd[1570]: 2025-09-12 22:55:16.033 [INFO][4314] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" host="localhost"
Sep 12 22:55:16.185362 containerd[1570]: 2025-09-12 22:55:16.034 [INFO][4314] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7
Sep 12 22:55:16.185362 containerd[1570]: 2025-09-12 22:55:16.087 [INFO][4314] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" host="localhost"
Sep 12 22:55:16.185362 containerd[1570]: 2025-09-12 22:55:16.102 [INFO][4314] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" host="localhost"
Sep 12 22:55:16.185362 containerd[1570]: 2025-09-12 22:55:16.102 [INFO][4314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" host="localhost"
Sep 12 22:55:16.185362 containerd[1570]: 2025-09-12 22:55:16.102 [INFO][4314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 22:55:16.185362 containerd[1570]: 2025-09-12 22:55:16.102 [INFO][4314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" HandleID="k8s-pod-network.4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Workload="localhost-k8s-csi--node--driver--nxg4j-eth0"
Sep 12 22:55:16.185540 containerd[1570]: 2025-09-12 22:55:16.105 [INFO][4298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Namespace="calico-system" Pod="csi-node-driver-nxg4j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxg4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxg4j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ab10c388-eebf-432c-927b-a19629315019", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nxg4j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali39d27931be7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:55:16.185620 containerd[1570]: 2025-09-12 22:55:16.106 [INFO][4298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Namespace="calico-system" Pod="csi-node-driver-nxg4j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxg4j-eth0"
Sep 12 22:55:16.185620 containerd[1570]: 2025-09-12 22:55:16.106 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39d27931be7 ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Namespace="calico-system" Pod="csi-node-driver-nxg4j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxg4j-eth0"
Sep 12 22:55:16.185620 containerd[1570]: 2025-09-12 22:55:16.109 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Namespace="calico-system" Pod="csi-node-driver-nxg4j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxg4j-eth0"
Sep 12 22:55:16.185719 containerd[1570]: 2025-09-12 22:55:16.110 [INFO][4298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Namespace="calico-system" Pod="csi-node-driver-nxg4j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxg4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nxg4j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ab10c388-eebf-432c-927b-a19629315019", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7", Pod:"csi-node-driver-nxg4j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali39d27931be7", MAC:"46:ae:52:54:41:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:55:16.185794 containerd[1570]: 2025-09-12 22:55:16.174 [INFO][4298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" Namespace="calico-system" Pod="csi-node-driver-nxg4j" WorkloadEndpoint="localhost-k8s-csi--node--driver--nxg4j-eth0"
Sep 12 22:55:16.436568 kubelet[2772]: E0912 22:55:16.436195 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:16.436568 kubelet[2772]: E0912 22:55:16.436438 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:16.437220 containerd[1570]: time="2025-09-12T22:55:16.437013134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gqvfv,Uid:7f891f46-21f6-47ed-a3cd-51bb989207a3,Namespace:kube-system,Attempt:0,}"
Sep 12 22:55:16.437336 containerd[1570]: time="2025-09-12T22:55:16.437243738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fbmsj,Uid:d80c8427-d9fd-47c5-9ec6-d52eec68bfb1,Namespace:kube-system,Attempt:0,}"
Sep 12 22:55:16.437378 containerd[1570]: time="2025-09-12T22:55:16.437356596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdcfcc5d6-v9xlm,Uid:f208b672-cedd-4dca-8408-008a6df49113,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 22:55:17.000623 containerd[1570]: time="2025-09-12T22:55:17.000545895Z" level=info msg="connecting to shim 4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7" address="unix:///run/containerd/s/d6c17ec5593cce795788121f79ce3fcccb8318e0e83b3d7bdc48c22a30704b8e" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:55:17.051825 systemd[1]: Started cri-containerd-4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7.scope - libcontainer container 4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7.
Sep 12 22:55:17.115726 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 22:55:17.145130 containerd[1570]: time="2025-09-12T22:55:17.145067241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nxg4j,Uid:ab10c388-eebf-432c-927b-a19629315019,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7\""
Sep 12 22:55:17.171528 systemd-networkd[1471]: cali09cda4d7056: Link UP
Sep 12 22:55:17.173716 systemd-networkd[1471]: cali09cda4d7056: Gained carrier
Sep 12 22:55:17.202331 containerd[1570]: 2025-09-12 22:55:16.985 [INFO][4338] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0 coredns-7c65d6cfc9- kube-system d80c8427-d9fd-47c5-9ec6-d52eec68bfb1 828 0 2025-09-12 22:54:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-fbmsj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09cda4d7056 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fbmsj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fbmsj-"
Sep 12 22:55:17.202331 containerd[1570]: 2025-09-12 22:55:16.988 [INFO][4338] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fbmsj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0"
Sep 12 22:55:17.202331 containerd[1570]: 2025-09-12 22:55:17.042 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" HandleID="k8s-pod-network.b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Workload="localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0"
Sep 12 22:55:17.202631 containerd[1570]: 2025-09-12 22:55:17.043 [INFO][4411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" HandleID="k8s-pod-network.b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Workload="localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0830), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-fbmsj", "timestamp":"2025-09-12 22:55:17.042713619 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 22:55:17.202631 containerd[1570]: 2025-09-12 22:55:17.043 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 22:55:17.202631 containerd[1570]: 2025-09-12 22:55:17.043 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 22:55:17.202631 containerd[1570]: 2025-09-12 22:55:17.043 [INFO][4411] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 22:55:17.202631 containerd[1570]: 2025-09-12 22:55:17.083 [INFO][4411] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" host="localhost"
Sep 12 22:55:17.202631 containerd[1570]: 2025-09-12 22:55:17.108 [INFO][4411] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 22:55:17.202631 containerd[1570]: 2025-09-12 22:55:17.120 [INFO][4411] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 22:55:17.202631 containerd[1570]: 2025-09-12 22:55:17.132 [INFO][4411] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 22:55:17.202631 containerd[1570]: 2025-09-12 22:55:17.139 [INFO][4411] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 22:55:17.202631 containerd[1570]: 2025-09-12 22:55:17.140 [INFO][4411] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" host="localhost"
Sep 12 22:55:17.203518 containerd[1570]: 2025-09-12 22:55:17.143 [INFO][4411] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d
Sep 12 22:55:17.203518 containerd[1570]: 2025-09-12 22:55:17.150 [INFO][4411] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" host="localhost"
Sep 12 22:55:17.203518 containerd[1570]: 2025-09-12 22:55:17.159 [INFO][4411] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" host="localhost"
Sep 12 22:55:17.203518 containerd[1570]: 2025-09-12 22:55:17.159 [INFO][4411] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" host="localhost"
Sep 12 22:55:17.203518 containerd[1570]: 2025-09-12 22:55:17.159 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 22:55:17.203518 containerd[1570]: 2025-09-12 22:55:17.159 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" HandleID="k8s-pod-network.b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Workload="localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0"
Sep 12 22:55:17.203643 containerd[1570]: 2025-09-12 22:55:17.166 [INFO][4338] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fbmsj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d80c8427-d9fd-47c5-9ec6-d52eec68bfb1", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-fbmsj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09cda4d7056", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:55:17.203713 containerd[1570]: 2025-09-12 22:55:17.166 [INFO][4338] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fbmsj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0"
Sep 12 22:55:17.203713 containerd[1570]: 2025-09-12 22:55:17.166 [INFO][4338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09cda4d7056 ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fbmsj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0"
Sep 12 22:55:17.203713 containerd[1570]: 2025-09-12 22:55:17.173 [INFO][4338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fbmsj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0"
Sep 12 22:55:17.203785 containerd[1570]: 2025-09-12 22:55:17.174 [INFO][4338] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fbmsj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d80c8427-d9fd-47c5-9ec6-d52eec68bfb1", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d", Pod:"coredns-7c65d6cfc9-fbmsj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09cda4d7056", MAC:"fa:c0:ff:fd:72:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:55:17.203785 containerd[1570]: 2025-09-12 22:55:17.196 [INFO][4338] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fbmsj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fbmsj-eth0"
Sep 12 22:55:17.242296 containerd[1570]: time="2025-09-12T22:55:17.242213900Z" level=info msg="connecting to shim b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d" address="unix:///run/containerd/s/83ac711a92160c00405610ff232843a9e49016122d9dd79fe128e568ecb75740" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:55:17.255707 systemd-networkd[1471]: cali1ed36eb8d94: Link UP
Sep 12 22:55:17.257244 systemd-networkd[1471]: cali1ed36eb8d94: Gained carrier
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:16.985 [INFO][4333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0 coredns-7c65d6cfc9- kube-system 7f891f46-21f6-47ed-a3cd-51bb989207a3 819 0 2025-09-12 22:54:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-gqvfv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ed36eb8d94 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gqvfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gqvfv-"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:16.988 [INFO][4333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gqvfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.086 [INFO][4391] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" HandleID="k8s-pod-network.34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Workload="localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.086 [INFO][4391] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" HandleID="k8s-pod-network.34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Workload="localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e470), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-gqvfv", "timestamp":"2025-09-12 22:55:17.086177229 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.086 [INFO][4391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.159 [INFO][4391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.159 [INFO][4391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.179 [INFO][4391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" host="localhost"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.200 [INFO][4391] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.217 [INFO][4391] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.220 [INFO][4391] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.223 [INFO][4391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.223 [INFO][4391] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" host="localhost"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.225 [INFO][4391] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.237 [INFO][4391] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" host="localhost"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.247 [INFO][4391] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" host="localhost"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.247 [INFO][4391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" host="localhost"
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.247 [INFO][4391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 22:55:17.286849 containerd[1570]: 2025-09-12 22:55:17.247 [INFO][4391] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" HandleID="k8s-pod-network.34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Workload="localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0"
Sep 12 22:55:17.287551 containerd[1570]: 2025-09-12 22:55:17.251 [INFO][4333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gqvfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7f891f46-21f6-47ed-a3cd-51bb989207a3", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-gqvfv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ed36eb8d94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:55:17.287551 containerd[1570]: 2025-09-12 22:55:17.251 [INFO][4333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gqvfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0"
Sep 12 22:55:17.287551 containerd[1570]: 2025-09-12 22:55:17.251 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ed36eb8d94 ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gqvfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0"
Sep 12 22:55:17.287551 containerd[1570]: 2025-09-12 22:55:17.258 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gqvfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0"
Sep 12 22:55:17.287551 containerd[1570]: 2025-09-12 22:55:17.259 [INFO][4333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gqvfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7f891f46-21f6-47ed-a3cd-51bb989207a3", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1", Pod:"coredns-7c65d6cfc9-gqvfv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ed36eb8d94", MAC:"aa:6e:0d:00:a0:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 22:55:17.287551 containerd[1570]: 2025-09-12 22:55:17.277 [INFO][4333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gqvfv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gqvfv-eth0"
Sep 12 22:55:17.289545 systemd[1]: Started cri-containerd-b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d.scope - libcontainer container b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d.
Sep 12 22:55:17.307537 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 22:55:17.347333 containerd[1570]: time="2025-09-12T22:55:17.347135719Z" level=info msg="connecting to shim 34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1" address="unix:///run/containerd/s/36f633884e37a9b3e0d55a37a297c09644c432427f622893d915ff7a5c0cfdb8" namespace=k8s.io protocol=ttrpc version=3
Sep 12 22:55:17.348072 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:51132.service - OpenSSH per-connection server daemon (10.0.0.1:51132).
Sep 12 22:55:17.373518 containerd[1570]: time="2025-09-12T22:55:17.373375475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fbmsj,Uid:d80c8427-d9fd-47c5-9ec6-d52eec68bfb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d\"" Sep 12 22:55:17.376760 kubelet[2772]: E0912 22:55:17.376713 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:17.381146 containerd[1570]: time="2025-09-12T22:55:17.380372696Z" level=info msg="CreateContainer within sandbox \"b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:55:17.402489 systemd[1]: Started cri-containerd-34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1.scope - libcontainer container 34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1. 
Sep 12 22:55:17.402600 systemd-networkd[1471]: caliec7ea1c17f2: Link UP Sep 12 22:55:17.409800 systemd-networkd[1471]: caliec7ea1c17f2: Gained carrier Sep 12 22:55:17.437370 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:55:17.437538 containerd[1570]: time="2025-09-12T22:55:17.437453792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-49m2v,Uid:77f1da1f-7be6-435c-a995-9d53554099dc,Namespace:calico-system,Attempt:0,}" Sep 12 22:55:17.440201 containerd[1570]: time="2025-09-12T22:55:17.439874135Z" level=info msg="Container db87b2e769f8868e073b19a029b030d4492ad0eae39434eb6401e5622203f5f6: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:16.987 [INFO][4346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0 calico-apiserver-7cdcfcc5d6- calico-apiserver f208b672-cedd-4dca-8408-008a6df49113 829 0 2025-09-12 22:54:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cdcfcc5d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cdcfcc5d6-v9xlm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliec7ea1c17f2 [] [] }} ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-v9xlm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:16.988 [INFO][4346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Namespace="calico-apiserver" 
Pod="calico-apiserver-7cdcfcc5d6-v9xlm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.087 [INFO][4393] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" HandleID="k8s-pod-network.1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Workload="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.089 [INFO][4393] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" HandleID="k8s-pod-network.1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Workload="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004f8ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cdcfcc5d6-v9xlm", "timestamp":"2025-09-12 22:55:17.087478647 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.090 [INFO][4393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.248 [INFO][4393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.248 [INFO][4393] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.284 [INFO][4393] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" host="localhost" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.301 [INFO][4393] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.324 [INFO][4393] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.330 [INFO][4393] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.333 [INFO][4393] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.333 [INFO][4393] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" host="localhost" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.335 [INFO][4393] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.344 [INFO][4393] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" host="localhost" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.374 [INFO][4393] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" host="localhost" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.374 [INFO][4393] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" host="localhost" Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.374 [INFO][4393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 22:55:17.443163 containerd[1570]: 2025-09-12 22:55:17.374 [INFO][4393] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" HandleID="k8s-pod-network.1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Workload="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0" Sep 12 22:55:17.445751 containerd[1570]: 2025-09-12 22:55:17.390 [INFO][4346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-v9xlm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0", GenerateName:"calico-apiserver-7cdcfcc5d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f208b672-cedd-4dca-8408-008a6df49113", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdcfcc5d6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cdcfcc5d6-v9xlm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec7ea1c17f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:55:17.445751 containerd[1570]: 2025-09-12 22:55:17.390 [INFO][4346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-v9xlm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0" Sep 12 22:55:17.445751 containerd[1570]: 2025-09-12 22:55:17.390 [INFO][4346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec7ea1c17f2 ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-v9xlm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0" Sep 12 22:55:17.445751 containerd[1570]: 2025-09-12 22:55:17.412 [INFO][4346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-v9xlm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0" Sep 12 22:55:17.445751 containerd[1570]: 2025-09-12 22:55:17.412 [INFO][4346] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-v9xlm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0", GenerateName:"calico-apiserver-7cdcfcc5d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f208b672-cedd-4dca-8408-008a6df49113", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdcfcc5d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d", Pod:"calico-apiserver-7cdcfcc5d6-v9xlm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec7ea1c17f2", MAC:"ea:4b:5d:13:46:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:55:17.445751 containerd[1570]: 2025-09-12 22:55:17.428 [INFO][4346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" Namespace="calico-apiserver" Pod="calico-apiserver-7cdcfcc5d6-v9xlm" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdcfcc5d6--v9xlm-eth0" Sep 12 22:55:17.445989 sshd[4524]: Accepted publickey for core from 10.0.0.1 port 51132 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:55:17.449006 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:55:17.459327 systemd-logind[1550]: New session 10 of user core. Sep 12 22:55:17.465635 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 22:55:17.474365 containerd[1570]: time="2025-09-12T22:55:17.474296696Z" level=info msg="CreateContainer within sandbox \"b0bb4369b1762c719994c818a17abd811df90bdc0e269b0f5a69fea2b542054d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db87b2e769f8868e073b19a029b030d4492ad0eae39434eb6401e5622203f5f6\"" Sep 12 22:55:17.478712 containerd[1570]: time="2025-09-12T22:55:17.477098645Z" level=info msg="StartContainer for \"db87b2e769f8868e073b19a029b030d4492ad0eae39434eb6401e5622203f5f6\"" Sep 12 22:55:17.479675 containerd[1570]: time="2025-09-12T22:55:17.479613449Z" level=info msg="connecting to shim db87b2e769f8868e073b19a029b030d4492ad0eae39434eb6401e5622203f5f6" address="unix:///run/containerd/s/83ac711a92160c00405610ff232843a9e49016122d9dd79fe128e568ecb75740" protocol=ttrpc version=3 Sep 12 22:55:17.498636 containerd[1570]: time="2025-09-12T22:55:17.498561577Z" level=info msg="connecting to shim 1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d" address="unix:///run/containerd/s/858c43a449116801044c437391a673cc7535a7152845f16d83d58e59e57f6b12" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:55:17.516718 systemd[1]: Started cri-containerd-db87b2e769f8868e073b19a029b030d4492ad0eae39434eb6401e5622203f5f6.scope - libcontainer container 
db87b2e769f8868e073b19a029b030d4492ad0eae39434eb6401e5622203f5f6. Sep 12 22:55:17.537662 containerd[1570]: time="2025-09-12T22:55:17.537570724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gqvfv,Uid:7f891f46-21f6-47ed-a3cd-51bb989207a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1\"" Sep 12 22:55:17.540672 kubelet[2772]: E0912 22:55:17.539969 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:17.544129 containerd[1570]: time="2025-09-12T22:55:17.544082050Z" level=info msg="CreateContainer within sandbox \"34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:55:17.545720 systemd[1]: Started cri-containerd-1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d.scope - libcontainer container 1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d. 
Sep 12 22:55:17.564500 containerd[1570]: time="2025-09-12T22:55:17.564214228Z" level=info msg="Container 528e9dccfdd001fbc012097f695f87e5c9a69f69641d3dad0e73c9fdac732b13: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:17.579979 containerd[1570]: time="2025-09-12T22:55:17.579623236Z" level=info msg="CreateContainer within sandbox \"34868638078a7ea5cc1acb907e99ea5538dbf76d631f258bae7b3e7c8e3b2ee1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"528e9dccfdd001fbc012097f695f87e5c9a69f69641d3dad0e73c9fdac732b13\"" Sep 12 22:55:17.581864 containerd[1570]: time="2025-09-12T22:55:17.581752909Z" level=info msg="StartContainer for \"528e9dccfdd001fbc012097f695f87e5c9a69f69641d3dad0e73c9fdac732b13\"" Sep 12 22:55:17.587984 containerd[1570]: time="2025-09-12T22:55:17.586890837Z" level=info msg="connecting to shim 528e9dccfdd001fbc012097f695f87e5c9a69f69641d3dad0e73c9fdac732b13" address="unix:///run/containerd/s/36f633884e37a9b3e0d55a37a297c09644c432427f622893d915ff7a5c0cfdb8" protocol=ttrpc version=3 Sep 12 22:55:17.630857 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:55:17.656051 containerd[1570]: time="2025-09-12T22:55:17.655977896Z" level=info msg="StartContainer for \"db87b2e769f8868e073b19a029b030d4492ad0eae39434eb6401e5622203f5f6\" returns successfully" Sep 12 22:55:17.659943 systemd[1]: Started cri-containerd-528e9dccfdd001fbc012097f695f87e5c9a69f69641d3dad0e73c9fdac732b13.scope - libcontainer container 528e9dccfdd001fbc012097f695f87e5c9a69f69641d3dad0e73c9fdac732b13. 
Sep 12 22:55:17.666986 kubelet[2772]: E0912 22:55:17.666790 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:17.737552 containerd[1570]: time="2025-09-12T22:55:17.726667361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdcfcc5d6-v9xlm,Uid:f208b672-cedd-4dca-8408-008a6df49113,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d\"" Sep 12 22:55:17.779559 sshd[4584]: Connection closed by 10.0.0.1 port 51132 Sep 12 22:55:17.782131 sshd-session[4524]: pam_unix(sshd:session): session closed for user core Sep 12 22:55:17.791484 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit. Sep 12 22:55:17.791961 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:51132.service: Deactivated successfully. Sep 12 22:55:17.797442 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 22:55:17.797708 systemd-networkd[1471]: cali5413d06cd98: Link UP Sep 12 22:55:17.799786 systemd-networkd[1471]: cali5413d06cd98: Gained carrier Sep 12 22:55:17.802577 systemd-logind[1550]: Removed session 10. 
Sep 12 22:55:17.821860 containerd[1570]: time="2025-09-12T22:55:17.820817347Z" level=info msg="StartContainer for \"528e9dccfdd001fbc012097f695f87e5c9a69f69641d3dad0e73c9fdac732b13\" returns successfully" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.561 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--49m2v-eth0 goldmane-7988f88666- calico-system 77f1da1f-7be6-435c-a995-9d53554099dc 826 0 2025-09-12 22:54:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-49m2v eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5413d06cd98 [] [] }} ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" Namespace="calico-system" Pod="goldmane-7988f88666-49m2v" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--49m2v-" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.561 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" Namespace="calico-system" Pod="goldmane-7988f88666-49m2v" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--49m2v-eth0" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.642 [INFO][4670] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" HandleID="k8s-pod-network.b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" Workload="localhost-k8s-goldmane--7988f88666--49m2v-eth0" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.643 [INFO][4670] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" HandleID="k8s-pod-network.b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" Workload="localhost-k8s-goldmane--7988f88666--49m2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335900), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-49m2v", "timestamp":"2025-09-12 22:55:17.642448388 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.644 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.644 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.644 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.658 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" host="localhost" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.712 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.741 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.748 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.755 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.755 [INFO][4670] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" host="localhost" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.757 [INFO][4670] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87 Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.764 [INFO][4670] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" host="localhost" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.777 [INFO][4670] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" host="localhost" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.777 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" host="localhost" Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.777 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 22:55:17.829629 containerd[1570]: 2025-09-12 22:55:17.777 [INFO][4670] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" HandleID="k8s-pod-network.b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" Workload="localhost-k8s-goldmane--7988f88666--49m2v-eth0" Sep 12 22:55:17.830908 kubelet[2772]: I0912 22:55:17.829514 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fbmsj" podStartSLOduration=44.829489444000004 podStartE2EDuration="44.829489444s" podCreationTimestamp="2025-09-12 22:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:55:17.69436263 +0000 UTC m=+49.365389980" watchObservedRunningTime="2025-09-12 22:55:17.829489444 +0000 UTC m=+49.500516774" Sep 12 22:55:17.831045 containerd[1570]: 2025-09-12 22:55:17.784 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" Namespace="calico-system" Pod="goldmane-7988f88666-49m2v" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--49m2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--49m2v-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"77f1da1f-7be6-435c-a995-9d53554099dc", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-49m2v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5413d06cd98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:55:17.831045 containerd[1570]: 2025-09-12 22:55:17.787 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" Namespace="calico-system" Pod="goldmane-7988f88666-49m2v" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--49m2v-eth0" Sep 12 22:55:17.831045 containerd[1570]: 2025-09-12 22:55:17.787 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5413d06cd98 ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" Namespace="calico-system" Pod="goldmane-7988f88666-49m2v" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--49m2v-eth0" Sep 12 22:55:17.831045 containerd[1570]: 2025-09-12 22:55:17.803 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" Namespace="calico-system" Pod="goldmane-7988f88666-49m2v" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--49m2v-eth0" Sep 12 22:55:17.831045 containerd[1570]: 2025-09-12 22:55:17.804 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" 
Namespace="calico-system" Pod="goldmane-7988f88666-49m2v" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--49m2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--49m2v-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"77f1da1f-7be6-435c-a995-9d53554099dc", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 22, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87", Pod:"goldmane-7988f88666-49m2v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5413d06cd98", MAC:"82:07:df:f0:ab:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 22:55:17.831045 containerd[1570]: 2025-09-12 22:55:17.821 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" Namespace="calico-system" Pod="goldmane-7988f88666-49m2v" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--49m2v-eth0" Sep 12 22:55:17.890926 containerd[1570]: 
time="2025-09-12T22:55:17.890858983Z" level=info msg="connecting to shim b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87" address="unix:///run/containerd/s/21d91dff18e56ec040731cd7918be70ffbe2975526dcaf707ecdf7f9339f1e05" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:55:17.902293 containerd[1570]: time="2025-09-12T22:55:17.899077959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:17.903129 containerd[1570]: time="2025-09-12T22:55:17.902990867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 22:55:17.906382 containerd[1570]: time="2025-09-12T22:55:17.906174421Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:17.913285 containerd[1570]: time="2025-09-12T22:55:17.913134200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:17.914982 containerd[1570]: time="2025-09-12T22:55:17.914942915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.932440328s" Sep 12 22:55:17.914982 containerd[1570]: time="2025-09-12T22:55:17.914978624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 22:55:17.916434 containerd[1570]: 
time="2025-09-12T22:55:17.916404632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 22:55:17.918011 containerd[1570]: time="2025-09-12T22:55:17.917967252Z" level=info msg="CreateContainer within sandbox \"a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 22:55:17.937337 containerd[1570]: time="2025-09-12T22:55:17.937292345Z" level=info msg="Container 4f5b4ef8e6ac5ff99223150aa12379ec327afdf22bb547943cca98c57ab2a071: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:17.941601 systemd[1]: Started cri-containerd-b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87.scope - libcontainer container b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87. Sep 12 22:55:17.954753 containerd[1570]: time="2025-09-12T22:55:17.954706085Z" level=info msg="CreateContainer within sandbox \"a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4f5b4ef8e6ac5ff99223150aa12379ec327afdf22bb547943cca98c57ab2a071\"" Sep 12 22:55:17.955845 containerd[1570]: time="2025-09-12T22:55:17.955490206Z" level=info msg="StartContainer for \"4f5b4ef8e6ac5ff99223150aa12379ec327afdf22bb547943cca98c57ab2a071\"" Sep 12 22:55:17.957192 containerd[1570]: time="2025-09-12T22:55:17.957149313Z" level=info msg="connecting to shim 4f5b4ef8e6ac5ff99223150aa12379ec327afdf22bb547943cca98c57ab2a071" address="unix:///run/containerd/s/6b0f379fe054d55929acb1dad6b25e37b85462e7300a1a2cc74b0d50d25e20a3" protocol=ttrpc version=3 Sep 12 22:55:17.966351 systemd-resolved[1407]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 22:55:17.988528 systemd-networkd[1471]: cali39d27931be7: Gained IPv6LL Sep 12 22:55:18.028325 containerd[1570]: time="2025-09-12T22:55:18.028223182Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-49m2v,Uid:77f1da1f-7be6-435c-a995-9d53554099dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87\"" Sep 12 22:55:18.039374 kubelet[2772]: I0912 22:55:18.039036 2772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:55:18.044424 systemd[1]: Started cri-containerd-4f5b4ef8e6ac5ff99223150aa12379ec327afdf22bb547943cca98c57ab2a071.scope - libcontainer container 4f5b4ef8e6ac5ff99223150aa12379ec327afdf22bb547943cca98c57ab2a071. Sep 12 22:55:18.107525 containerd[1570]: time="2025-09-12T22:55:18.107411253Z" level=info msg="StartContainer for \"4f5b4ef8e6ac5ff99223150aa12379ec327afdf22bb547943cca98c57ab2a071\" returns successfully" Sep 12 22:55:18.218609 containerd[1570]: time="2025-09-12T22:55:18.218529820Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6\" id:\"ffb10c059f98f6cf53f8a94fe9c730ce8d3f344882a5694ba4d6de52811904db\" pid:4853 exited_at:{seconds:1757717718 nanos:218125852}" Sep 12 22:55:18.332192 containerd[1570]: time="2025-09-12T22:55:18.332014763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6\" id:\"da3c7b7f1cebebefb0c5515a93d53fe562f3d1aa02fda74f03e75096c6e0e202\" pid:4880 exited_at:{seconds:1757717718 nanos:331497817}" Sep 12 22:55:18.372619 systemd-networkd[1471]: cali1ed36eb8d94: Gained IPv6LL Sep 12 22:55:18.671712 kubelet[2772]: E0912 22:55:18.671562 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:18.675169 kubelet[2772]: E0912 22:55:18.675097 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:18.712861 kubelet[2772]: I0912 22:55:18.712772 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gqvfv" podStartSLOduration=45.712749343 podStartE2EDuration="45.712749343s" podCreationTimestamp="2025-09-12 22:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:55:18.71230106 +0000 UTC m=+50.383328400" watchObservedRunningTime="2025-09-12 22:55:18.712749343 +0000 UTC m=+50.383776673" Sep 12 22:55:18.884546 systemd-networkd[1471]: cali5413d06cd98: Gained IPv6LL Sep 12 22:55:18.885253 systemd-networkd[1471]: caliec7ea1c17f2: Gained IPv6LL Sep 12 22:55:19.013454 systemd-networkd[1471]: cali09cda4d7056: Gained IPv6LL Sep 12 22:55:19.680673 kubelet[2772]: E0912 22:55:19.680631 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:19.681323 kubelet[2772]: E0912 22:55:19.680744 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:20.683600 kubelet[2772]: E0912 22:55:20.683525 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:20.685773 kubelet[2772]: E0912 22:55:20.685703 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 22:55:21.712610 containerd[1570]: time="2025-09-12T22:55:21.712522046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Sep 12 22:55:21.714621 containerd[1570]: time="2025-09-12T22:55:21.714310101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 22:55:21.716448 containerd[1570]: time="2025-09-12T22:55:21.716406099Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:21.719952 containerd[1570]: time="2025-09-12T22:55:21.719892427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:21.721139 containerd[1570]: time="2025-09-12T22:55:21.720677936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.804241083s" Sep 12 22:55:21.721139 containerd[1570]: time="2025-09-12T22:55:21.720719376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 22:55:21.722114 containerd[1570]: time="2025-09-12T22:55:21.722067217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 22:55:21.734307 containerd[1570]: time="2025-09-12T22:55:21.734149152Z" level=info msg="CreateContainer within sandbox \"a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 22:55:21.750518 containerd[1570]: time="2025-09-12T22:55:21.750440985Z" level=info 
msg="Container 6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:21.768424 containerd[1570]: time="2025-09-12T22:55:21.768353885Z" level=info msg="CreateContainer within sandbox \"a1cad1d852be13155c6ebcbce26702b4bb77e2102661655440c2706595b35048\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab\"" Sep 12 22:55:21.769015 containerd[1570]: time="2025-09-12T22:55:21.768985558Z" level=info msg="StartContainer for \"6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab\"" Sep 12 22:55:21.770807 containerd[1570]: time="2025-09-12T22:55:21.770706856Z" level=info msg="connecting to shim 6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab" address="unix:///run/containerd/s/9a0562a94248533923bcb2c33472eda3be9f50661bc983b27d02c824e7988487" protocol=ttrpc version=3 Sep 12 22:55:21.805512 systemd[1]: Started cri-containerd-6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab.scope - libcontainer container 6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab. 
Sep 12 22:55:21.869547 containerd[1570]: time="2025-09-12T22:55:21.869446631Z" level=info msg="StartContainer for \"6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab\" returns successfully" Sep 12 22:55:22.705330 kubelet[2772]: I0912 22:55:22.705234 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d58869f5c-5rkng" podStartSLOduration=25.998596295 podStartE2EDuration="33.70521471s" podCreationTimestamp="2025-09-12 22:54:49 +0000 UTC" firstStartedPulling="2025-09-12 22:55:14.015211957 +0000 UTC m=+45.686239287" lastFinishedPulling="2025-09-12 22:55:21.721830372 +0000 UTC m=+53.392857702" observedRunningTime="2025-09-12 22:55:22.704779674 +0000 UTC m=+54.375807004" watchObservedRunningTime="2025-09-12 22:55:22.70521471 +0000 UTC m=+54.376242040" Sep 12 22:55:22.741198 containerd[1570]: time="2025-09-12T22:55:22.741143363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab\" id:\"777651bdd20508760ca48ae1bc066015d8716d22ac7bc9c2a3eb3cb69a330864\" pid:4966 exited_at:{seconds:1757717722 nanos:740632672}" Sep 12 22:55:22.800962 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:45336.service - OpenSSH per-connection server daemon (10.0.0.1:45336). Sep 12 22:55:22.886429 sshd[4977]: Accepted publickey for core from 10.0.0.1 port 45336 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:55:22.889174 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:55:22.897944 systemd-logind[1550]: New session 11 of user core. Sep 12 22:55:22.907675 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 12 22:55:23.366100 sshd[4982]: Connection closed by 10.0.0.1 port 45336 Sep 12 22:55:23.366525 sshd-session[4977]: pam_unix(sshd:session): session closed for user core Sep 12 22:55:23.371084 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:45336.service: Deactivated successfully. Sep 12 22:55:23.373847 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 22:55:23.375342 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit. Sep 12 22:55:23.377343 systemd-logind[1550]: Removed session 11. Sep 12 22:55:26.439678 kernel: hrtimer: interrupt took 7402752 ns Sep 12 22:55:26.765235 containerd[1570]: time="2025-09-12T22:55:26.763316556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:26.790327 containerd[1570]: time="2025-09-12T22:55:26.790198767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 22:55:26.820309 containerd[1570]: time="2025-09-12T22:55:26.818201436Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:26.848555 containerd[1570]: time="2025-09-12T22:55:26.848475105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:26.849515 containerd[1570]: time="2025-09-12T22:55:26.849463398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 
5.127356016s" Sep 12 22:55:26.849515 containerd[1570]: time="2025-09-12T22:55:26.849505830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 22:55:26.855079 containerd[1570]: time="2025-09-12T22:55:26.853820887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 22:55:26.871591 containerd[1570]: time="2025-09-12T22:55:26.868404957Z" level=info msg="CreateContainer within sandbox \"1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 22:55:27.140313 containerd[1570]: time="2025-09-12T22:55:27.138584497Z" level=info msg="Container 2664bedd6911c515bf79d81c0522893366d0f0b9a197ce7bc29d0d22743f826f: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:27.163386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803841041.mount: Deactivated successfully. 
Sep 12 22:55:27.940773 containerd[1570]: time="2025-09-12T22:55:27.940685899Z" level=info msg="CreateContainer within sandbox \"1cd7760a1787aa71049e18a0d954ab8cbee8547ad4d2973990f7e5fa4fab16a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2664bedd6911c515bf79d81c0522893366d0f0b9a197ce7bc29d0d22743f826f\"" Sep 12 22:55:27.943376 containerd[1570]: time="2025-09-12T22:55:27.942922461Z" level=info msg="StartContainer for \"2664bedd6911c515bf79d81c0522893366d0f0b9a197ce7bc29d0d22743f826f\"" Sep 12 22:55:27.945090 containerd[1570]: time="2025-09-12T22:55:27.945046047Z" level=info msg="connecting to shim 2664bedd6911c515bf79d81c0522893366d0f0b9a197ce7bc29d0d22743f826f" address="unix:///run/containerd/s/de56b504b086eb1592ed50731297d6c95138bd67378893fee18a18755db0de68" protocol=ttrpc version=3 Sep 12 22:55:27.986589 systemd[1]: Started cri-containerd-2664bedd6911c515bf79d81c0522893366d0f0b9a197ce7bc29d0d22743f826f.scope - libcontainer container 2664bedd6911c515bf79d81c0522893366d0f0b9a197ce7bc29d0d22743f826f. Sep 12 22:55:28.245431 containerd[1570]: time="2025-09-12T22:55:28.245057882Z" level=info msg="StartContainer for \"2664bedd6911c515bf79d81c0522893366d0f0b9a197ce7bc29d0d22743f826f\" returns successfully" Sep 12 22:55:28.381601 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:45348.service - OpenSSH per-connection server daemon (10.0.0.1:45348). Sep 12 22:55:28.592129 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 45348 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:55:28.594905 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:55:28.601935 systemd-logind[1550]: New session 12 of user core. Sep 12 22:55:28.613613 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 22:55:28.840241 sshd[5054]: Connection closed by 10.0.0.1 port 45348 Sep 12 22:55:28.840068 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Sep 12 22:55:28.845975 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:45348.service: Deactivated successfully. Sep 12 22:55:28.848850 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 22:55:28.851233 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit. Sep 12 22:55:28.854071 systemd-logind[1550]: Removed session 12. Sep 12 22:55:29.620742 containerd[1570]: time="2025-09-12T22:55:29.620346331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:29.624989 containerd[1570]: time="2025-09-12T22:55:29.624471883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 22:55:29.628291 containerd[1570]: time="2025-09-12T22:55:29.626464665Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:29.633999 containerd[1570]: time="2025-09-12T22:55:29.633918182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:29.635046 containerd[1570]: time="2025-09-12T22:55:29.634998809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.7811296s" Sep 12 22:55:29.635046 containerd[1570]: time="2025-09-12T22:55:29.635039317Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 22:55:29.637638 containerd[1570]: time="2025-09-12T22:55:29.637316112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 22:55:29.639604 containerd[1570]: time="2025-09-12T22:55:29.639567629Z" level=info msg="CreateContainer within sandbox \"4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 22:55:29.691103 containerd[1570]: time="2025-09-12T22:55:29.690960787Z" level=info msg="Container 27ae8196b55781814ebb133a402412e19ed41ae5900fca4ccaacb8bafef670d5: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:29.714862 kubelet[2772]: I0912 22:55:29.714793 2772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:55:29.743915 containerd[1570]: time="2025-09-12T22:55:29.743814639Z" level=info msg="CreateContainer within sandbox \"4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"27ae8196b55781814ebb133a402412e19ed41ae5900fca4ccaacb8bafef670d5\"" Sep 12 22:55:29.744696 containerd[1570]: time="2025-09-12T22:55:29.744635289Z" level=info msg="StartContainer for \"27ae8196b55781814ebb133a402412e19ed41ae5900fca4ccaacb8bafef670d5\"" Sep 12 22:55:29.747051 containerd[1570]: time="2025-09-12T22:55:29.747000854Z" level=info msg="connecting to shim 27ae8196b55781814ebb133a402412e19ed41ae5900fca4ccaacb8bafef670d5" address="unix:///run/containerd/s/d6c17ec5593cce795788121f79ce3fcccb8318e0e83b3d7bdc48c22a30704b8e" protocol=ttrpc version=3 Sep 12 22:55:29.778565 systemd[1]: Started cri-containerd-27ae8196b55781814ebb133a402412e19ed41ae5900fca4ccaacb8bafef670d5.scope - libcontainer container 27ae8196b55781814ebb133a402412e19ed41ae5900fca4ccaacb8bafef670d5. 
Sep 12 22:55:29.842234 containerd[1570]: time="2025-09-12T22:55:29.842133589Z" level=info msg="StartContainer for \"27ae8196b55781814ebb133a402412e19ed41ae5900fca4ccaacb8bafef670d5\" returns successfully" Sep 12 22:55:30.021728 containerd[1570]: time="2025-09-12T22:55:30.021620874Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:30.024244 containerd[1570]: time="2025-09-12T22:55:30.024121445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 22:55:30.028976 containerd[1570]: time="2025-09-12T22:55:30.027471510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 389.984061ms" Sep 12 22:55:30.028976 containerd[1570]: time="2025-09-12T22:55:30.028228117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 22:55:30.032094 containerd[1570]: time="2025-09-12T22:55:30.032050865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 22:55:30.032476 containerd[1570]: time="2025-09-12T22:55:30.032227984Z" level=info msg="CreateContainer within sandbox \"1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 22:55:30.049082 containerd[1570]: time="2025-09-12T22:55:30.048363234Z" level=info msg="Container 8fd132fd9206d0e77566e4318879c461ac7066f2c2f972424b518fb0284b8e8f: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:30.064154 containerd[1570]: 
time="2025-09-12T22:55:30.063986945Z" level=info msg="CreateContainer within sandbox \"1b3ed47ed6e78e703fa377a9d9a09129bd84e0bb172ac7ce2842fe3107b8d57d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8fd132fd9206d0e77566e4318879c461ac7066f2c2f972424b518fb0284b8e8f\"" Sep 12 22:55:30.065189 containerd[1570]: time="2025-09-12T22:55:30.065099412Z" level=info msg="StartContainer for \"8fd132fd9206d0e77566e4318879c461ac7066f2c2f972424b518fb0284b8e8f\"" Sep 12 22:55:30.066736 containerd[1570]: time="2025-09-12T22:55:30.066680084Z" level=info msg="connecting to shim 8fd132fd9206d0e77566e4318879c461ac7066f2c2f972424b518fb0284b8e8f" address="unix:///run/containerd/s/858c43a449116801044c437391a673cc7535a7152845f16d83d58e59e57f6b12" protocol=ttrpc version=3 Sep 12 22:55:30.099677 systemd[1]: Started cri-containerd-8fd132fd9206d0e77566e4318879c461ac7066f2c2f972424b518fb0284b8e8f.scope - libcontainer container 8fd132fd9206d0e77566e4318879c461ac7066f2c2f972424b518fb0284b8e8f. 
Sep 12 22:55:30.185466 containerd[1570]: time="2025-09-12T22:55:30.185401130Z" level=info msg="StartContainer for \"8fd132fd9206d0e77566e4318879c461ac7066f2c2f972424b518fb0284b8e8f\" returns successfully" Sep 12 22:55:30.738299 kubelet[2772]: I0912 22:55:30.738162 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-clg8x" podStartSLOduration=32.591850537 podStartE2EDuration="44.738140799s" podCreationTimestamp="2025-09-12 22:54:46 +0000 UTC" firstStartedPulling="2025-09-12 22:55:14.70515837 +0000 UTC m=+46.376185700" lastFinishedPulling="2025-09-12 22:55:26.851448632 +0000 UTC m=+58.522475962" observedRunningTime="2025-09-12 22:55:28.822036385 +0000 UTC m=+60.493063725" watchObservedRunningTime="2025-09-12 22:55:30.738140799 +0000 UTC m=+62.409168130" Sep 12 22:55:31.724482 kubelet[2772]: I0912 22:55:31.724434 2772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 22:55:32.123821 containerd[1570]: time="2025-09-12T22:55:32.123769535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab\" id:\"0c274b276318eaabe39a83bd636455441598b8e5a595a899b96feb851c8a34d1\" pid:5155 exited_at:{seconds:1757717732 nanos:123490682}" Sep 12 22:55:33.734953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767320534.mount: Deactivated successfully. Sep 12 22:55:33.866928 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:38710.service - OpenSSH per-connection server daemon (10.0.0.1:38710). Sep 12 22:55:33.965623 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 38710 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:55:33.972356 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:55:33.994513 systemd-logind[1550]: New session 13 of user core. 
Sep 12 22:55:34.006558 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 22:55:34.176896 sshd[5174]: Connection closed by 10.0.0.1 port 38710 Sep 12 22:55:34.177414 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Sep 12 22:55:34.189582 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:38710.service: Deactivated successfully. Sep 12 22:55:34.192098 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 22:55:34.193551 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit. Sep 12 22:55:34.197742 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:38716.service - OpenSSH per-connection server daemon (10.0.0.1:38716). Sep 12 22:55:34.198776 systemd-logind[1550]: Removed session 13. Sep 12 22:55:34.262651 sshd[5194]: Accepted publickey for core from 10.0.0.1 port 38716 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:55:34.264915 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:55:34.271870 systemd-logind[1550]: New session 14 of user core. Sep 12 22:55:34.283626 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 22:55:34.510105 sshd[5197]: Connection closed by 10.0.0.1 port 38716 Sep 12 22:55:34.511566 sshd-session[5194]: pam_unix(sshd:session): session closed for user core Sep 12 22:55:34.525468 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:38716.service: Deactivated successfully. Sep 12 22:55:34.534446 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 22:55:34.536928 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit. Sep 12 22:55:34.542745 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:38722.service - OpenSSH per-connection server daemon (10.0.0.1:38722). Sep 12 22:55:34.548316 systemd-logind[1550]: Removed session 14. 
Sep 12 22:55:34.622141 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 38722 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE Sep 12 22:55:34.624579 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:55:34.632514 systemd-logind[1550]: New session 15 of user core. Sep 12 22:55:34.641637 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 22:55:34.802478 sshd[5216]: Connection closed by 10.0.0.1 port 38722 Sep 12 22:55:34.802784 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Sep 12 22:55:34.810512 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit. Sep 12 22:55:34.810887 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:38722.service: Deactivated successfully. Sep 12 22:55:34.814954 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 22:55:34.819039 systemd-logind[1550]: Removed session 15. Sep 12 22:55:35.156128 containerd[1570]: time="2025-09-12T22:55:35.155961999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:35.157356 containerd[1570]: time="2025-09-12T22:55:35.157329539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 22:55:35.158811 containerd[1570]: time="2025-09-12T22:55:35.158758505Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:35.161427 containerd[1570]: time="2025-09-12T22:55:35.161394423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:55:35.162283 containerd[1570]: time="2025-09-12T22:55:35.162213947Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.130119559s" Sep 12 22:55:35.162361 containerd[1570]: time="2025-09-12T22:55:35.162287537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 22:55:35.163484 containerd[1570]: time="2025-09-12T22:55:35.163456086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 22:55:35.164527 containerd[1570]: time="2025-09-12T22:55:35.164498575Z" level=info msg="CreateContainer within sandbox \"b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 22:55:35.175528 containerd[1570]: time="2025-09-12T22:55:35.174979609Z" level=info msg="Container 19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:55:35.186510 containerd[1570]: time="2025-09-12T22:55:35.186437838Z" level=info msg="CreateContainer within sandbox \"b3243351637ef54b0f3b7a0b917ada2d656442df72ac54a278c4dc8a99bb3a87\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585\"" Sep 12 22:55:35.187727 containerd[1570]: time="2025-09-12T22:55:35.187195373Z" level=info msg="StartContainer for \"19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585\"" Sep 12 22:55:35.188622 containerd[1570]: time="2025-09-12T22:55:35.188590504Z" level=info msg="connecting to shim 19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585" 
address="unix:///run/containerd/s/21d91dff18e56ec040731cd7918be70ffbe2975526dcaf707ecdf7f9339f1e05" protocol=ttrpc version=3
Sep 12 22:55:35.250491 systemd[1]: Started cri-containerd-19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585.scope - libcontainer container 19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585.
Sep 12 22:55:35.311295 containerd[1570]: time="2025-09-12T22:55:35.310838316Z" level=info msg="StartContainer for \"19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585\" returns successfully"
Sep 12 22:55:35.812303 kubelet[2772]: I0912 22:55:35.812132 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cdcfcc5d6-v9xlm" podStartSLOduration=37.513374775 podStartE2EDuration="49.812109289s" podCreationTimestamp="2025-09-12 22:54:46 +0000 UTC" firstStartedPulling="2025-09-12 22:55:17.73132283 +0000 UTC m=+49.402350160" lastFinishedPulling="2025-09-12 22:55:30.030057344 +0000 UTC m=+61.701084674" observedRunningTime="2025-09-12 22:55:30.739531108 +0000 UTC m=+62.410558448" watchObservedRunningTime="2025-09-12 22:55:35.812109289 +0000 UTC m=+67.483136619"
Sep 12 22:55:35.824749 containerd[1570]: time="2025-09-12T22:55:35.824699247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585\" id:\"a43f0b84f6282c8f21cdd60f5e5ae312c45adc9ce8aa745f61b1c3bae7969d78\" pid:5279 exit_status:1 exited_at:{seconds:1757717735 nanos:824146562}"
Sep 12 22:55:36.875421 containerd[1570]: time="2025-09-12T22:55:36.875249783Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585\" id:\"24331e1f4e3de20920052588c4f73a045f7a519b525e164309a9092167ee3e42\" pid:5305 exit_status:1 exited_at:{seconds:1757717736 nanos:874825334}"
Sep 12 22:55:37.320832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount65735077.mount: Deactivated successfully.
Sep 12 22:55:38.040196 containerd[1570]: time="2025-09-12T22:55:38.040106956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:55:38.041324 containerd[1570]: time="2025-09-12T22:55:38.041295070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 12 22:55:38.043485 containerd[1570]: time="2025-09-12T22:55:38.043398918Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:55:38.045963 containerd[1570]: time="2025-09-12T22:55:38.045904733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:55:38.046938 containerd[1570]: time="2025-09-12T22:55:38.046880843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.883394719s"
Sep 12 22:55:38.047009 containerd[1570]: time="2025-09-12T22:55:38.046942150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 12 22:55:38.048706 containerd[1570]: time="2025-09-12T22:55:38.048661246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 22:55:38.050138 containerd[1570]: time="2025-09-12T22:55:38.050105818Z" level=info msg="CreateContainer within sandbox \"a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 22:55:38.067062 containerd[1570]: time="2025-09-12T22:55:38.066986985Z" level=info msg="Container c7acf7005d297c3497b9fd578fb2025e9594cfc5dea9fdd04bb0cc5bafd919c8: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:55:38.082886 containerd[1570]: time="2025-09-12T22:55:38.082758546Z" level=info msg="CreateContainer within sandbox \"a54e8e5f55104cc27f10c7d02832a0eec129fb673455e529497b636cf9ab24b0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c7acf7005d297c3497b9fd578fb2025e9594cfc5dea9fdd04bb0cc5bafd919c8\""
Sep 12 22:55:38.083627 containerd[1570]: time="2025-09-12T22:55:38.083554162Z" level=info msg="StartContainer for \"c7acf7005d297c3497b9fd578fb2025e9594cfc5dea9fdd04bb0cc5bafd919c8\""
Sep 12 22:55:38.085253 containerd[1570]: time="2025-09-12T22:55:38.085197173Z" level=info msg="connecting to shim c7acf7005d297c3497b9fd578fb2025e9594cfc5dea9fdd04bb0cc5bafd919c8" address="unix:///run/containerd/s/6b0f379fe054d55929acb1dad6b25e37b85462e7300a1a2cc74b0d50d25e20a3" protocol=ttrpc version=3
Sep 12 22:55:38.124573 systemd[1]: Started cri-containerd-c7acf7005d297c3497b9fd578fb2025e9594cfc5dea9fdd04bb0cc5bafd919c8.scope - libcontainer container c7acf7005d297c3497b9fd578fb2025e9594cfc5dea9fdd04bb0cc5bafd919c8.
Sep 12 22:55:38.187229 containerd[1570]: time="2025-09-12T22:55:38.187157605Z" level=info msg="StartContainer for \"c7acf7005d297c3497b9fd578fb2025e9594cfc5dea9fdd04bb0cc5bafd919c8\" returns successfully"
Sep 12 22:55:38.764910 kubelet[2772]: I0912 22:55:38.764292 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-49m2v" podStartSLOduration=33.630721354 podStartE2EDuration="50.764252796s" podCreationTimestamp="2025-09-12 22:54:48 +0000 UTC" firstStartedPulling="2025-09-12 22:55:18.029742757 +0000 UTC m=+49.700770087" lastFinishedPulling="2025-09-12 22:55:35.163274189 +0000 UTC m=+66.834301529" observedRunningTime="2025-09-12 22:55:35.814771778 +0000 UTC m=+67.485799118" watchObservedRunningTime="2025-09-12 22:55:38.764252796 +0000 UTC m=+70.435280136"
Sep 12 22:55:39.829739 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:38738.service - OpenSSH per-connection server daemon (10.0.0.1:38738).
Sep 12 22:55:39.933739 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 38738 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:55:39.936302 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:55:39.943789 systemd-logind[1550]: New session 16 of user core.
Sep 12 22:55:39.953441 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 22:55:40.161966 sshd[5371]: Connection closed by 10.0.0.1 port 38738
Sep 12 22:55:40.162315 sshd-session[5368]: pam_unix(sshd:session): session closed for user core
Sep 12 22:55:40.168345 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:38738.service: Deactivated successfully.
Sep 12 22:55:40.171232 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 22:55:40.172246 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit.
Sep 12 22:55:40.174577 systemd-logind[1550]: Removed session 16.
Sep 12 22:55:40.436759 kubelet[2772]: E0912 22:55:40.436578 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:40.839163 containerd[1570]: time="2025-09-12T22:55:40.839071103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:55:40.842472 containerd[1570]: time="2025-09-12T22:55:40.842403668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 22:55:40.844413 containerd[1570]: time="2025-09-12T22:55:40.844367627Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:55:40.850479 containerd[1570]: time="2025-09-12T22:55:40.850430482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 22:55:40.851556 containerd[1570]: time="2025-09-12T22:55:40.851512332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.802805389s"
Sep 12 22:55:40.851556 containerd[1570]: time="2025-09-12T22:55:40.851544643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 22:55:40.853837 containerd[1570]: time="2025-09-12T22:55:40.853813263Z" level=info msg="CreateContainer within sandbox \"4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 22:55:40.866684 containerd[1570]: time="2025-09-12T22:55:40.866586735Z" level=info msg="Container f8172df42573df86caf41a19bd8134649dcd85d2edded719c5e6684f46a74e83: CDI devices from CRI Config.CDIDevices: []"
Sep 12 22:55:40.909090 containerd[1570]: time="2025-09-12T22:55:40.881321181Z" level=info msg="CreateContainer within sandbox \"4e37c90d0b44df8d0b9f0f8248f5a4c87e9b24b10f6af99aebb1ef46a9d9ece7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f8172df42573df86caf41a19bd8134649dcd85d2edded719c5e6684f46a74e83\""
Sep 12 22:55:40.909896 containerd[1570]: time="2025-09-12T22:55:40.909825635Z" level=info msg="StartContainer for \"f8172df42573df86caf41a19bd8134649dcd85d2edded719c5e6684f46a74e83\""
Sep 12 22:55:40.912159 containerd[1570]: time="2025-09-12T22:55:40.912108572Z" level=info msg="connecting to shim f8172df42573df86caf41a19bd8134649dcd85d2edded719c5e6684f46a74e83" address="unix:///run/containerd/s/d6c17ec5593cce795788121f79ce3fcccb8318e0e83b3d7bdc48c22a30704b8e" protocol=ttrpc version=3
Sep 12 22:55:40.953617 systemd[1]: Started cri-containerd-f8172df42573df86caf41a19bd8134649dcd85d2edded719c5e6684f46a74e83.scope - libcontainer container f8172df42573df86caf41a19bd8134649dcd85d2edded719c5e6684f46a74e83.
Sep 12 22:55:41.012541 containerd[1570]: time="2025-09-12T22:55:41.012475541Z" level=info msg="StartContainer for \"f8172df42573df86caf41a19bd8134649dcd85d2edded719c5e6684f46a74e83\" returns successfully"
Sep 12 22:55:41.520867 kubelet[2772]: I0912 22:55:41.520824 2772 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 22:55:41.520867 kubelet[2772]: I0912 22:55:41.520864 2772 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 22:55:42.059299 kubelet[2772]: I0912 22:55:42.058988 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nxg4j" podStartSLOduration=29.353142522 podStartE2EDuration="53.058910065s" podCreationTimestamp="2025-09-12 22:54:49 +0000 UTC" firstStartedPulling="2025-09-12 22:55:17.146699296 +0000 UTC m=+48.817726626" lastFinishedPulling="2025-09-12 22:55:40.852466839 +0000 UTC m=+72.523494169" observedRunningTime="2025-09-12 22:55:42.058571781 +0000 UTC m=+73.729599112" watchObservedRunningTime="2025-09-12 22:55:42.058910065 +0000 UTC m=+73.729937395"
Sep 12 22:55:42.060177 kubelet[2772]: I0912 22:55:42.059827 2772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-bd89899db-787v5" podStartSLOduration=5.9937672079999995 podStartE2EDuration="30.059806331s" podCreationTimestamp="2025-09-12 22:55:12 +0000 UTC" firstStartedPulling="2025-09-12 22:55:13.982010958 +0000 UTC m=+45.653038288" lastFinishedPulling="2025-09-12 22:55:38.048050081 +0000 UTC m=+69.719077411" observedRunningTime="2025-09-12 22:55:38.766039851 +0000 UTC m=+70.437067181" watchObservedRunningTime="2025-09-12 22:55:42.059806331 +0000 UTC m=+73.730833661"
Sep 12 22:55:45.179291 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:39934.service - OpenSSH per-connection server daemon (10.0.0.1:39934).
Sep 12 22:55:45.236954 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 39934 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:55:45.238532 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:55:45.243093 systemd-logind[1550]: New session 17 of user core.
Sep 12 22:55:45.250553 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 22:55:45.394135 sshd[5433]: Connection closed by 10.0.0.1 port 39934
Sep 12 22:55:45.394567 sshd-session[5430]: pam_unix(sshd:session): session closed for user core
Sep 12 22:55:45.399555 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:39934.service: Deactivated successfully.
Sep 12 22:55:45.402381 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 22:55:45.403467 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit.
Sep 12 22:55:45.404866 systemd-logind[1550]: Removed session 17.
Sep 12 22:55:48.127843 containerd[1570]: time="2025-09-12T22:55:48.127778935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6\" id:\"1948dfc24a82901fd93627f1a650c6e608b519cf5ae0d6467593e1d94481bfb9\" pid:5460 exit_status:1 exited_at:{seconds:1757717748 nanos:127323511}"
Sep 12 22:55:49.642304 kubelet[2772]: I0912 22:55:49.642125 2772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 22:55:50.411702 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:54826.service - OpenSSH per-connection server daemon (10.0.0.1:54826).
Sep 12 22:55:50.505337 sshd[5476]: Accepted publickey for core from 10.0.0.1 port 54826 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:55:50.507444 sshd-session[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:55:50.513291 systemd-logind[1550]: New session 18 of user core.
Sep 12 22:55:50.518489 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 22:55:50.783974 sshd[5479]: Connection closed by 10.0.0.1 port 54826
Sep 12 22:55:50.785453 sshd-session[5476]: pam_unix(sshd:session): session closed for user core
Sep 12 22:55:50.796861 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit.
Sep 12 22:55:50.797314 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:54826.service: Deactivated successfully.
Sep 12 22:55:50.800228 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 22:55:50.802477 systemd-logind[1550]: Removed session 18.
Sep 12 22:55:53.436873 kubelet[2772]: E0912 22:55:53.436822 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:55:55.798455 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:54828.service - OpenSSH per-connection server daemon (10.0.0.1:54828).
Sep 12 22:55:55.851114 sshd[5501]: Accepted publickey for core from 10.0.0.1 port 54828 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:55:55.856065 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:55:55.867044 systemd-logind[1550]: New session 19 of user core.
Sep 12 22:55:55.873537 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 22:55:56.037524 sshd[5504]: Connection closed by 10.0.0.1 port 54828
Sep 12 22:55:56.037915 sshd-session[5501]: pam_unix(sshd:session): session closed for user core
Sep 12 22:55:56.045422 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:54828.service: Deactivated successfully.
Sep 12 22:55:56.048410 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 22:55:56.049711 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit.
Sep 12 22:55:56.051505 systemd-logind[1550]: Removed session 19.
Sep 12 22:55:58.436398 kubelet[2772]: E0912 22:55:58.436340 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:00.801533 kubelet[2772]: I0912 22:56:00.801478 2772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 22:56:01.052772 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:49262.service - OpenSSH per-connection server daemon (10.0.0.1:49262).
Sep 12 22:56:01.108850 sshd[5521]: Accepted publickey for core from 10.0.0.1 port 49262 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:56:01.110818 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:56:01.115498 systemd-logind[1550]: New session 20 of user core.
Sep 12 22:56:01.126422 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 22:56:01.258492 sshd[5524]: Connection closed by 10.0.0.1 port 49262
Sep 12 22:56:01.259035 sshd-session[5521]: pam_unix(sshd:session): session closed for user core
Sep 12 22:56:01.269239 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:49262.service: Deactivated successfully.
Sep 12 22:56:01.271429 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 22:56:01.272337 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit.
Sep 12 22:56:01.275724 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:49276.service - OpenSSH per-connection server daemon (10.0.0.1:49276).
Sep 12 22:56:01.276880 systemd-logind[1550]: Removed session 20.
Sep 12 22:56:01.336201 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 49276 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:56:01.338144 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:56:01.343156 systemd-logind[1550]: New session 21 of user core.
Sep 12 22:56:01.357589 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 22:56:01.743948 sshd[5540]: Connection closed by 10.0.0.1 port 49276
Sep 12 22:56:01.744691 sshd-session[5537]: pam_unix(sshd:session): session closed for user core
Sep 12 22:56:01.759539 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:49276.service: Deactivated successfully.
Sep 12 22:56:01.762293 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 22:56:01.763222 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit.
Sep 12 22:56:01.767560 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:49288.service - OpenSSH per-connection server daemon (10.0.0.1:49288).
Sep 12 22:56:01.768383 systemd-logind[1550]: Removed session 21.
Sep 12 22:56:01.845019 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 49288 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:56:01.846884 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:56:01.852599 systemd-logind[1550]: New session 22 of user core.
Sep 12 22:56:01.863476 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 22:56:02.123665 containerd[1570]: time="2025-09-12T22:56:02.123617014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab\" id:\"dadd8cede11561ea43dfb69ba0fd0c05b2849974fb2cc2a9dca100592b4f4aaf\" pid:5580 exited_at:{seconds:1757717762 nanos:123155309}"
Sep 12 22:56:02.180589 containerd[1570]: time="2025-09-12T22:56:02.180479154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585\" id:\"8ccb8fe01aa7e0ce532497723ece35004077f224abb3684145804c2f73d67ee3\" pid:5594 exited_at:{seconds:1757717762 nanos:180054349}"
Sep 12 22:56:03.771398 sshd[5554]: Connection closed by 10.0.0.1 port 49288
Sep 12 22:56:03.771873 sshd-session[5551]: pam_unix(sshd:session): session closed for user core
Sep 12 22:56:03.790811 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:49288.service: Deactivated successfully.
Sep 12 22:56:03.795816 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 22:56:03.796129 systemd[1]: session-22.scope: Consumed 710ms CPU time, 73.1M memory peak.
Sep 12 22:56:03.804699 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit.
Sep 12 22:56:03.807223 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:49294.service - OpenSSH per-connection server daemon (10.0.0.1:49294).
Sep 12 22:56:03.808659 systemd-logind[1550]: Removed session 22.
Sep 12 22:56:03.870984 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 49294 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:56:03.872800 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:56:03.878638 systemd-logind[1550]: New session 23 of user core.
Sep 12 22:56:03.886620 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 22:56:04.275633 sshd[5624]: Connection closed by 10.0.0.1 port 49294
Sep 12 22:56:04.275980 sshd-session[5621]: pam_unix(sshd:session): session closed for user core
Sep 12 22:56:04.295963 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:49294.service: Deactivated successfully.
Sep 12 22:56:04.298860 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 22:56:04.300893 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit.
Sep 12 22:56:04.304595 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:49296.service - OpenSSH per-connection server daemon (10.0.0.1:49296).
Sep 12 22:56:04.305504 systemd-logind[1550]: Removed session 23.
Sep 12 22:56:04.368875 sshd[5636]: Accepted publickey for core from 10.0.0.1 port 49296 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:56:04.370862 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:56:04.377596 systemd-logind[1550]: New session 24 of user core.
Sep 12 22:56:04.386676 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 22:56:04.517233 sshd[5639]: Connection closed by 10.0.0.1 port 49296
Sep 12 22:56:04.517716 sshd-session[5636]: pam_unix(sshd:session): session closed for user core
Sep 12 22:56:04.522710 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:49296.service: Deactivated successfully.
Sep 12 22:56:04.524933 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 22:56:04.526106 systemd-logind[1550]: Session 24 logged out. Waiting for processes to exit.
Sep 12 22:56:04.528787 systemd-logind[1550]: Removed session 24.
Sep 12 22:56:05.437050 kubelet[2772]: E0912 22:56:05.436978 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:09.545924 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:49328.service - OpenSSH per-connection server daemon (10.0.0.1:49328).
Sep 12 22:56:09.621036 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 49328 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:56:09.623587 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:56:09.630973 systemd-logind[1550]: New session 25 of user core.
Sep 12 22:56:09.642768 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 22:56:09.761592 sshd[5657]: Connection closed by 10.0.0.1 port 49328
Sep 12 22:56:09.762040 sshd-session[5654]: pam_unix(sshd:session): session closed for user core
Sep 12 22:56:09.767467 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:49328.service: Deactivated successfully.
Sep 12 22:56:09.769850 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 22:56:09.770765 systemd-logind[1550]: Session 25 logged out. Waiting for processes to exit.
Sep 12 22:56:09.772103 systemd-logind[1550]: Removed session 25.
Sep 12 22:56:11.895812 containerd[1570]: time="2025-09-12T22:56:11.895707769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19686e84b1932e8fcaa4c7cf8caa47a44a73ddf29492f674084ea2316f77c585\" id:\"ec87186ab4584ed35a117670101d8e3e533e095b3de5a36268a59e950c51d891\" pid:5685 exited_at:{seconds:1757717771 nanos:895213263}"
Sep 12 22:56:14.779651 systemd[1]: Started sshd@25-10.0.0.34:22-10.0.0.1:50020.service - OpenSSH per-connection server daemon (10.0.0.1:50020).
Sep 12 22:56:14.836676 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 50020 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:56:14.840249 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:56:14.846686 systemd-logind[1550]: New session 26 of user core.
Sep 12 22:56:14.861531 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 22:56:14.992652 sshd[5701]: Connection closed by 10.0.0.1 port 50020
Sep 12 22:56:14.993619 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
Sep 12 22:56:14.998908 systemd[1]: sshd@25-10.0.0.34:22-10.0.0.1:50020.service: Deactivated successfully.
Sep 12 22:56:15.001636 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 22:56:15.002602 systemd-logind[1550]: Session 26 logged out. Waiting for processes to exit.
Sep 12 22:56:15.004063 systemd-logind[1550]: Removed session 26.
Sep 12 22:56:17.443470 kubelet[2772]: E0912 22:56:17.441721 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:18.289571 containerd[1570]: time="2025-09-12T22:56:18.289450124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a01f25fed855e2f18891641059a2c88728904b0ab97d3174d42edf1b4a40bb6\" id:\"a23aecc03515b3092b8ccfa34ecafabdc7b94dcf012683043b8df131d385ccec\" pid:5726 exited_at:{seconds:1757717778 nanos:288848828}"
Sep 12 22:56:20.009015 systemd[1]: Started sshd@26-10.0.0.34:22-10.0.0.1:55140.service - OpenSSH per-connection server daemon (10.0.0.1:55140).
Sep 12 22:56:20.110223 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 55140 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:56:20.111808 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:56:20.118075 systemd-logind[1550]: New session 27 of user core.
Sep 12 22:56:20.127662 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 22:56:20.365292 sshd[5743]: Connection closed by 10.0.0.1 port 55140
Sep 12 22:56:20.364436 sshd-session[5740]: pam_unix(sshd:session): session closed for user core
Sep 12 22:56:20.370614 systemd-logind[1550]: Session 27 logged out. Waiting for processes to exit.
Sep 12 22:56:20.372688 systemd[1]: sshd@26-10.0.0.34:22-10.0.0.1:55140.service: Deactivated successfully.
Sep 12 22:56:20.378420 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 22:56:20.385613 systemd-logind[1550]: Removed session 27.
Sep 12 22:56:21.436773 kubelet[2772]: E0912 22:56:21.436730 2772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 22:56:23.785472 containerd[1570]: time="2025-09-12T22:56:23.785411223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a4a65ecec35935590b336767df774fe93a0447e977cf17b6c9e10955c3102ab\" id:\"560d2f5db1dbf1b66d5909b91667e838f66d0de95ac2cee2313460ff1dfb6e59\" pid:5768 exited_at:{seconds:1757717783 nanos:784859090}"
Sep 12 22:56:25.382545 systemd[1]: Started sshd@27-10.0.0.34:22-10.0.0.1:55160.service - OpenSSH per-connection server daemon (10.0.0.1:55160).
Sep 12 22:56:25.475189 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 55160 ssh2: RSA SHA256:N+mk8ajQ5sQHtW3rGQ2ksMnDYLczCm2R/SNuDSDE5CE
Sep 12 22:56:25.477478 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 22:56:25.487096 systemd-logind[1550]: New session 28 of user core.
Sep 12 22:56:25.495530 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 22:56:25.641388 sshd[5783]: Connection closed by 10.0.0.1 port 55160
Sep 12 22:56:25.640372 sshd-session[5779]: pam_unix(sshd:session): session closed for user core
Sep 12 22:56:25.646023 systemd[1]: sshd@27-10.0.0.34:22-10.0.0.1:55160.service: Deactivated successfully.
Sep 12 22:56:25.649101 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 22:56:25.651290 systemd-logind[1550]: Session 28 logged out. Waiting for processes to exit.
Sep 12 22:56:25.653762 systemd-logind[1550]: Removed session 28.