Sep 12 05:47:51.845976 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 04:02:32 -00 2025
Sep 12 05:47:51.845999 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3
Sep 12 05:47:51.846010 kernel: BIOS-provided physical RAM map:
Sep 12 05:47:51.846017 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 05:47:51.846023 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 12 05:47:51.846030 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 12 05:47:51.846038 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 12 05:47:51.846044 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 12 05:47:51.846056 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 12 05:47:51.846067 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 12 05:47:51.846076 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 12 05:47:51.846093 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 12 05:47:51.846101 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 12 05:47:51.846109 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 12 05:47:51.846117 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 12 05:47:51.846127 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 12 05:47:51.846137 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 12 05:47:51.846145 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 12 05:47:51.846152 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 12 05:47:51.846159 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 12 05:47:51.846166 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 12 05:47:51.846173 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 12 05:47:51.846179 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 05:47:51.846186 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 05:47:51.846193 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 12 05:47:51.846202 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 05:47:51.846209 kernel: NX (Execute Disable) protection: active
Sep 12 05:47:51.846216 kernel: APIC: Static calls initialized
Sep 12 05:47:51.846223 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 12 05:47:51.846230 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 12 05:47:51.846237 kernel: extended physical RAM map:
Sep 12 05:47:51.846244 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 05:47:51.846251 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 12 05:47:51.846258 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 12 05:47:51.846281 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 12 05:47:51.846302 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 12 05:47:51.846315 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 12 05:47:51.846322 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 12 05:47:51.846329 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 12 05:47:51.846336 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 12 05:47:51.846346 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 12 05:47:51.846354 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 12 05:47:51.846363 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 12 05:47:51.846370 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 12 05:47:51.846377 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 12 05:47:51.846387 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 12 05:47:51.846395 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 12 05:47:51.846402 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 12 05:47:51.846409 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 12 05:47:51.846417 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 12 05:47:51.846424 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 12 05:47:51.846431 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 12 05:47:51.846441 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 12 05:47:51.846448 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 12 05:47:51.846456 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 05:47:51.846502 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 05:47:51.846513 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 12 05:47:51.846522 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 05:47:51.846532 kernel: efi: EFI v2.7 by EDK II
Sep 12 05:47:51.846540 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 12 05:47:51.846547 kernel: random: crng init done
Sep 12 05:47:51.846556 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 12 05:47:51.846564 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 12 05:47:51.846577 kernel: secureboot: Secure boot disabled
Sep 12 05:47:51.846584 kernel: SMBIOS 2.8 present.
Sep 12 05:47:51.846591 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 12 05:47:51.846599 kernel: DMI: Memory slots populated: 1/1
Sep 12 05:47:51.846606 kernel: Hypervisor detected: KVM
Sep 12 05:47:51.846613 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 05:47:51.846620 kernel: kvm-clock: using sched offset of 4295503748 cycles
Sep 12 05:47:51.846628 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 05:47:51.846636 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 05:47:51.846644 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 05:47:51.846651 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 05:47:51.846661 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 12 05:47:51.846668 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 05:47:51.846676 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 05:47:51.846683 kernel: Using GB pages for direct mapping
Sep 12 05:47:51.846691 kernel: ACPI: Early table checksum verification disabled
Sep 12 05:47:51.846698 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 12 05:47:51.846706 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 05:47:51.846714 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:47:51.846721 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:47:51.846731 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 12 05:47:51.846738 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:47:51.846746 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:47:51.846753 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:47:51.846760 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 05:47:51.846768 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 05:47:51.846775 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 12 05:47:51.846783 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 12 05:47:51.846792 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 12 05:47:51.846800 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 12 05:47:51.846807 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 12 05:47:51.846815 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 12 05:47:51.846822 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 12 05:47:51.846829 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 12 05:47:51.846836 kernel: No NUMA configuration found
Sep 12 05:47:51.846844 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 12 05:47:51.846851 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 12 05:47:51.846859 kernel: Zone ranges:
Sep 12 05:47:51.846868 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 05:47:51.846876 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 12 05:47:51.846883 kernel: Normal empty
Sep 12 05:47:51.846890 kernel: Device empty
Sep 12 05:47:51.846897 kernel: Movable zone start for each node
Sep 12 05:47:51.846905 kernel: Early memory node ranges
Sep 12 05:47:51.846912 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 05:47:51.846920 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 12 05:47:51.846929 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 12 05:47:51.846939 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 12 05:47:51.846947 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 12 05:47:51.846954 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 12 05:47:51.846962 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 12 05:47:51.846969 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 12 05:47:51.846976 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 12 05:47:51.846984 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 05:47:51.846993 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 05:47:51.847009 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 12 05:47:51.847017 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 05:47:51.847025 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 12 05:47:51.847033 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 12 05:47:51.847043 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 05:47:51.847053 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 12 05:47:51.847063 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 12 05:47:51.847074 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 05:47:51.847082 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 05:47:51.847102 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 05:47:51.847110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 05:47:51.847117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 05:47:51.847125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 05:47:51.847133 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 05:47:51.847141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 05:47:51.847148 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 05:47:51.847156 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 05:47:51.847164 kernel: TSC deadline timer available
Sep 12 05:47:51.847174 kernel: CPU topo: Max. logical packages: 1
Sep 12 05:47:51.847181 kernel: CPU topo: Max. logical dies: 1
Sep 12 05:47:51.847189 kernel: CPU topo: Max. dies per package: 1
Sep 12 05:47:51.847196 kernel: CPU topo: Max. threads per core: 1
Sep 12 05:47:51.847204 kernel: CPU topo: Num. cores per package: 4
Sep 12 05:47:51.847212 kernel: CPU topo: Num. threads per package: 4
Sep 12 05:47:51.847219 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 05:47:51.847227 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 05:47:51.847235 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 05:47:51.847244 kernel: kvm-guest: setup PV sched yield
Sep 12 05:47:51.847252 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 12 05:47:51.847260 kernel: Booting paravirtualized kernel on KVM
Sep 12 05:47:51.847268 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 05:47:51.847276 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 05:47:51.847283 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 05:47:51.847291 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 05:47:51.847299 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 05:47:51.847306 kernel: kvm-guest: PV spinlocks enabled
Sep 12 05:47:51.847317 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 05:47:51.847326 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3
Sep 12 05:47:51.847337 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 05:47:51.847345 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 05:47:51.847352 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 05:47:51.847360 kernel: Fallback order for Node 0: 0
Sep 12 05:47:51.847368 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 12 05:47:51.847375 kernel: Policy zone: DMA32
Sep 12 05:47:51.847385 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 05:47:51.847393 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 05:47:51.847407 kernel: ftrace: allocating 40123 entries in 157 pages
Sep 12 05:47:51.847415 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 05:47:51.847422 kernel: Dynamic Preempt: voluntary
Sep 12 05:47:51.847430 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 05:47:51.847439 kernel: rcu: RCU event tracing is enabled.
Sep 12 05:47:51.847447 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 05:47:51.847454 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 05:47:51.847497 kernel: Rude variant of Tasks RCU enabled.
Sep 12 05:47:51.847509 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 05:47:51.847517 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 05:47:51.847533 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 05:47:51.847541 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 05:47:51.847560 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 05:47:51.847569 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 05:47:51.847577 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 05:47:51.847585 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 05:47:51.847593 kernel: Console: colour dummy device 80x25
Sep 12 05:47:51.847604 kernel: printk: legacy console [ttyS0] enabled
Sep 12 05:47:51.847619 kernel: ACPI: Core revision 20240827
Sep 12 05:47:51.847637 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 05:47:51.847653 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 05:47:51.847671 kernel: x2apic enabled
Sep 12 05:47:51.847688 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 05:47:51.847696 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 05:47:51.847704 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 05:47:51.847712 kernel: kvm-guest: setup PV IPIs
Sep 12 05:47:51.847728 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 05:47:51.847747 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 05:47:51.847763 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 05:47:51.847772 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 05:47:51.847779 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 05:47:51.847787 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 05:47:51.847795 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 05:47:51.847803 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 05:47:51.847810 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 05:47:51.847821 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 05:47:51.847829 kernel: active return thunk: retbleed_return_thunk
Sep 12 05:47:51.847836 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 05:47:51.847847 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 05:47:51.847854 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 05:47:51.847862 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 05:47:51.847871 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 05:47:51.847879 kernel: active return thunk: srso_return_thunk
Sep 12 05:47:51.847889 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 05:47:51.847897 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 05:47:51.847904 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 05:47:51.847912 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 05:47:51.847920 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 05:47:51.847933 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 05:47:51.847941 kernel: Freeing SMP alternatives memory: 32K
Sep 12 05:47:51.847949 kernel: pid_max: default: 32768 minimum: 301
Sep 12 05:47:51.847956 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 05:47:51.847967 kernel: landlock: Up and running.
Sep 12 05:47:51.847975 kernel: SELinux: Initializing.
Sep 12 05:47:51.847983 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 05:47:51.847991 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 05:47:51.847998 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 05:47:51.848006 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 05:47:51.848014 kernel: ... version: 0
Sep 12 05:47:51.848021 kernel: ... bit width: 48
Sep 12 05:47:51.848029 kernel: ... generic registers: 6
Sep 12 05:47:51.848039 kernel: ... value mask: 0000ffffffffffff
Sep 12 05:47:51.848048 kernel: ... max period: 00007fffffffffff
Sep 12 05:47:51.848058 kernel: ... fixed-purpose events: 0
Sep 12 05:47:51.848069 kernel: ... event mask: 000000000000003f
Sep 12 05:47:51.848078 kernel: signal: max sigframe size: 1776
Sep 12 05:47:51.848093 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 05:47:51.848101 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 05:47:51.848111 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 05:47:51.848119 kernel: smp: Bringing up secondary CPUs ...
Sep 12 05:47:51.848136 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 05:47:51.848145 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 05:47:51.848152 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 05:47:51.848160 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 05:47:51.848168 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2432K rwdata, 9988K rodata, 54092K init, 2872K bss, 137196K reserved, 0K cma-reserved)
Sep 12 05:47:51.848176 kernel: devtmpfs: initialized
Sep 12 05:47:51.848184 kernel: x86/mm: Memory block size: 128MB
Sep 12 05:47:51.848191 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 12 05:47:51.848199 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 12 05:47:51.848216 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 12 05:47:51.848235 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 12 05:47:51.848245 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 12 05:47:51.848253 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 12 05:47:51.848268 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 05:47:51.848277 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 05:47:51.848287 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 05:47:51.848295 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 05:47:51.848313 kernel: audit: initializing netlink subsys (disabled)
Sep 12 05:47:51.848343 kernel: audit: type=2000 audit(1757656069.093:1): state=initialized audit_enabled=0 res=1
Sep 12 05:47:51.848364 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 05:47:51.848381 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 05:47:51.848394 kernel: cpuidle: using governor menu
Sep 12 05:47:51.848409 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 05:47:51.848417 kernel: dca service started, version 1.12.1
Sep 12 05:47:51.848425 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 12 05:47:51.848433 kernel: PCI: Using configuration type 1 for base access
Sep 12 05:47:51.848441 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 05:47:51.848451 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 05:47:51.848483 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 05:47:51.848494 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 05:47:51.848502 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 05:47:51.848510 kernel: ACPI: Added _OSI(Module Device)
Sep 12 05:47:51.848517 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 05:47:51.848525 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 05:47:51.848533 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 05:47:51.848540 kernel: ACPI: Interpreter enabled
Sep 12 05:47:51.848551 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 05:47:51.848559 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 05:47:51.848567 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 05:47:51.848575 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 05:47:51.848582 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 05:47:51.848590 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 05:47:51.848834 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 05:47:51.848980 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 05:47:51.849125 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 05:47:51.849144 kernel: PCI host bridge to bus 0000:00
Sep 12 05:47:51.849321 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 05:47:51.849448 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 05:47:51.849627 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 05:47:51.849770 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 12 05:47:51.849903 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 12 05:47:51.850050 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 12 05:47:51.850231 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 05:47:51.850423 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 05:47:51.850593 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 05:47:51.850728 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 12 05:47:51.850859 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 12 05:47:51.850996 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 05:47:51.851160 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 05:47:51.851352 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 05:47:51.851554 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 12 05:47:51.851738 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 12 05:47:51.851908 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 12 05:47:51.852097 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 05:47:51.852250 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 12 05:47:51.852400 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 12 05:47:51.852584 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 12 05:47:51.852776 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 05:47:51.852929 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 12 05:47:51.853101 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 12 05:47:51.853251 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 12 05:47:51.853413 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 12 05:47:51.853604 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 05:47:51.853763 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 05:47:51.853963 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 05:47:51.854118 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 12 05:47:51.854265 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 12 05:47:51.854438 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 05:47:51.854611 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 12 05:47:51.854627 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 05:47:51.854646 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 05:47:51.854657 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 05:47:51.854669 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 05:47:51.854679 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 05:47:51.854688 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 05:47:51.854703 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 05:47:51.854713 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 05:47:51.854723 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 05:47:51.854733 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 05:47:51.854743 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 05:47:51.854752 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 05:47:51.854763 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 05:47:51.854772 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 05:47:51.854783 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 05:47:51.854796 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 05:47:51.854807 kernel: iommu: Default domain type: Translated
Sep 12 05:47:51.854816 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 05:47:51.854826 kernel: efivars: Registered efivars operations
Sep 12 05:47:51.854836 kernel: PCI: Using ACPI for IRQ routing
Sep 12 05:47:51.854846 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 05:47:51.854856 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 12 05:47:51.854866 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 12 05:47:51.854889 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 12 05:47:51.854902 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 12 05:47:51.854920 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 12 05:47:51.854933 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 12 05:47:51.854953 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 12 05:47:51.854964 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 12 05:47:51.855184 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 05:47:51.855334 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 05:47:51.856334 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 05:47:51.856361 kernel: vgaarb: loaded
Sep 12 05:47:51.856372 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 05:47:51.856388 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 05:47:51.856409 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 05:47:51.856430 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 05:47:51.856446 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 05:47:51.856457 kernel: pnp: PnP ACPI init
Sep 12 05:47:51.856737 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 12 05:47:51.856763 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 05:47:51.856774 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 05:47:51.856794 kernel: NET: Registered PF_INET protocol family
Sep 12 05:47:51.856810 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 05:47:51.857717 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 05:47:51.857728 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 05:47:51.857739 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 05:47:51.857751 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 05:47:51.857761 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 05:47:51.857777 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 05:47:51.857788 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 05:47:51.857799 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 05:47:51.857810 kernel: NET: Registered PF_XDP protocol family
Sep 12 05:47:51.857981 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 12 05:47:51.858149 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 12 05:47:51.858296 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 05:47:51.858435 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 05:47:51.858602 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 05:47:51.858731 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 12 05:47:51.858875 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 12 05:47:51.859004 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 12 05:47:51.859018 kernel: PCI: CLS 0 bytes, default 64
Sep 12 05:47:51.859029 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 05:47:51.859040 kernel: Initialise system trusted keyrings
Sep 12 05:47:51.859055 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 05:47:51.859065 kernel: Key type asymmetric registered
Sep 12 05:47:51.859075 kernel: Asymmetric key parser 'x509' registered
Sep 12 05:47:51.859093 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 05:47:51.859104 kernel: io scheduler mq-deadline registered
Sep 12 05:47:51.859115 kernel: io scheduler kyber registered
Sep 12 05:47:51.859125 kernel: io scheduler bfq registered
Sep 12 05:47:51.859138 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 05:47:51.859149 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 05:47:51.859160 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 05:47:51.859170 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 05:47:51.859180 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 05:47:51.859191 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 05:47:51.859201 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 05:47:51.859212 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 05:47:51.859222 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 05:47:51.859398 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 05:47:51.859415 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 05:47:51.859592 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 05:47:51.859726 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T05:47:51 UTC (1757656071)
Sep 12 05:47:51.859914 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 12 05:47:51.859932 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 05:47:51.859943 kernel: efifb: probing for efifb
Sep 12 05:47:51.859953 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 12 05:47:51.859969 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 12 05:47:51.859979 kernel: efifb: scrolling: redraw
Sep 12 05:47:51.859994 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 05:47:51.860006 kernel: Console: switching to colour frame buffer device 160x50
Sep 12 05:47:51.860016 kernel: fb0: EFI VGA frame buffer device
Sep 12 05:47:51.860027 kernel: pstore: Using crash dump compression: deflate
Sep 12 05:47:51.860038 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 05:47:51.860049 kernel: NET: Registered PF_INET6 protocol family
Sep 12 05:47:51.860059 kernel: Segment Routing with IPv6
Sep 12 05:47:51.860073 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 05:47:51.860095 kernel: NET: Registered PF_PACKET protocol family
Sep 12 05:47:51.860106 kernel: Key type dns_resolver registered
Sep 12 05:47:51.860117 kernel: IPI shorthand broadcast: enabled
Sep 12 05:47:51.860128 kernel: sched_clock: Marking stable (3413001813, 165121549)->(3609588327, -31464965)
Sep 12 05:47:51.860138 kernel: registered taskstats version 1
Sep 12 05:47:51.860149 kernel: Loading compiled-in X.509 certificates
Sep 12 05:47:51.860160 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: c974434132f0296e0aaf9b1358c8dc50eba5c8b9'
Sep 12 05:47:51.860170 kernel: Demotion targets for Node 0: null
Sep 12 05:47:51.860183 kernel: Key type .fscrypt registered
Sep 12
05:47:51.860194 kernel: Key type fscrypt-provisioning registered Sep 12 05:47:51.860205 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 05:47:51.860215 kernel: ima: Allocated hash algorithm: sha1 Sep 12 05:47:51.860226 kernel: ima: No architecture policies found Sep 12 05:47:51.860236 kernel: clk: Disabling unused clocks Sep 12 05:47:51.860246 kernel: Warning: unable to open an initial console. Sep 12 05:47:51.860257 kernel: Freeing unused kernel image (initmem) memory: 54092K Sep 12 05:47:51.860268 kernel: Write protecting the kernel read-only data: 24576k Sep 12 05:47:51.860281 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 12 05:47:51.860292 kernel: Run /init as init process Sep 12 05:47:51.860303 kernel: with arguments: Sep 12 05:47:51.860315 kernel: /init Sep 12 05:47:51.860326 kernel: with environment: Sep 12 05:47:51.860336 kernel: HOME=/ Sep 12 05:47:51.860347 kernel: TERM=linux Sep 12 05:47:51.860357 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 05:47:51.860374 systemd[1]: Successfully made /usr/ read-only. Sep 12 05:47:51.860391 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 05:47:51.860402 systemd[1]: Detected virtualization kvm. Sep 12 05:47:51.860413 systemd[1]: Detected architecture x86-64. Sep 12 05:47:51.860423 systemd[1]: Running in initrd. Sep 12 05:47:51.860434 systemd[1]: No hostname configured, using default hostname. Sep 12 05:47:51.860448 systemd[1]: Hostname set to . Sep 12 05:47:51.860460 systemd[1]: Initializing machine ID from VM UUID. Sep 12 05:47:51.860495 systemd[1]: Queued start job for default target initrd.target. 
Sep 12 05:47:51.860508 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 05:47:51.860520 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 05:47:51.860532 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 05:47:51.860544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 05:47:51.860556 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 05:47:51.860569 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 05:47:51.860586 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 05:47:51.860598 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 05:47:51.860611 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 05:47:51.860621 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 05:47:51.860633 systemd[1]: Reached target paths.target - Path Units. Sep 12 05:47:51.860645 systemd[1]: Reached target slices.target - Slice Units. Sep 12 05:47:51.860656 systemd[1]: Reached target swap.target - Swaps. Sep 12 05:47:51.860668 systemd[1]: Reached target timers.target - Timer Units. Sep 12 05:47:51.860683 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 05:47:51.860695 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 05:47:51.860707 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 05:47:51.860719 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 12 05:47:51.860730 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 05:47:51.860742 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 05:47:51.860754 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 05:47:51.860765 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 05:47:51.860777 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 05:47:51.860792 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 05:47:51.860804 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 05:47:51.860816 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 05:47:51.860827 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 05:47:51.860842 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 05:47:51.860854 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 05:47:51.860866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:47:51.860878 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 05:47:51.860893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 05:47:51.860905 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 05:47:51.860975 systemd-journald[219]: Collecting audit messages is disabled. Sep 12 05:47:51.861013 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 05:47:51.861025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 05:47:51.861040 systemd-journald[219]: Journal started Sep 12 05:47:51.861069 systemd-journald[219]: Runtime Journal (/run/log/journal/b405659375c34a2194af6c0a92e069a0) is 6M, max 48.4M, 42.4M free. Sep 12 05:47:51.857759 systemd-modules-load[221]: Inserted module 'overlay' Sep 12 05:47:51.863130 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 05:47:51.866512 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 05:47:51.869023 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 05:47:51.873854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 05:47:51.875062 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 05:47:51.895486 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 05:47:51.897937 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 12 05:47:51.899283 kernel: Bridge firewalling registered Sep 12 05:47:51.900063 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 05:47:51.901634 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 05:47:51.902639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 05:47:51.904461 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 05:47:51.911399 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 05:47:51.917856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 05:47:51.918542 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 05:47:51.919825 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 05:47:51.922152 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 05:47:51.960219 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d36684c42387dba16669740eb40ca6a094be0dfb03f64a303630b6ac6cfe48d3 Sep 12 05:47:51.982900 systemd-resolved[262]: Positive Trust Anchors: Sep 12 05:47:51.982918 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 05:47:51.982967 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 05:47:51.987023 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 12 05:47:51.988698 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 05:47:51.996902 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 05:47:52.107535 kernel: SCSI subsystem initialized Sep 12 05:47:52.116510 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 05:47:52.127524 kernel: iscsi: registered transport (tcp) Sep 12 05:47:52.153901 kernel: iscsi: registered transport (qla4xxx) Sep 12 05:47:52.153992 kernel: QLogic iSCSI HBA Driver Sep 12 05:47:52.179222 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 05:47:52.196871 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 05:47:52.198426 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 05:47:52.266669 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 05:47:52.269604 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 05:47:52.337532 kernel: raid6: avx2x4 gen() 23922 MB/s Sep 12 05:47:52.354509 kernel: raid6: avx2x2 gen() 26673 MB/s Sep 12 05:47:52.371540 kernel: raid6: avx2x1 gen() 25795 MB/s Sep 12 05:47:52.371574 kernel: raid6: using algorithm avx2x2 gen() 26673 MB/s Sep 12 05:47:52.389538 kernel: raid6: .... xor() 19872 MB/s, rmw enabled Sep 12 05:47:52.389566 kernel: raid6: using avx2x2 recovery algorithm Sep 12 05:47:52.410503 kernel: xor: automatically using best checksumming function avx Sep 12 05:47:52.650544 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 05:47:52.662087 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 05:47:52.665339 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 05:47:52.698273 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 12 05:47:52.704291 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 05:47:52.718045 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 05:47:52.758319 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Sep 12 05:47:52.805238 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 05:47:52.808873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 05:47:52.889993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 05:47:52.892893 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 05:47:52.940513 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 05:47:52.946622 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 05:47:52.953509 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 05:47:52.956260 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 05:47:52.956278 kernel: GPT:9289727 != 19775487 Sep 12 05:47:52.956289 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 05:47:52.956299 kernel: GPT:9289727 != 19775487 Sep 12 05:47:52.956310 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 05:47:52.957541 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:47:52.964506 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 05:47:52.976516 kernel: AES CTR mode by8 optimization enabled Sep 12 05:47:52.976572 kernel: libata version 3.00 loaded. Sep 12 05:47:52.977810 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 05:47:52.978638 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:47:52.985367 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:47:52.993039 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:47:52.995266 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 05:47:53.003502 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 05:47:53.003817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 05:47:53.007572 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 05:47:53.007608 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 12 05:47:53.007911 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 12 05:47:53.005346 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:47:53.013430 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 05:47:53.014518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 05:47:53.018531 kernel: scsi host0: ahci Sep 12 05:47:53.018742 kernel: scsi host1: ahci Sep 12 05:47:53.018891 kernel: scsi host2: ahci Sep 12 05:47:53.020677 kernel: scsi host3: ahci Sep 12 05:47:53.021222 kernel: scsi host4: ahci Sep 12 05:47:53.023275 kernel: scsi host5: ahci Sep 12 05:47:53.023453 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 12 05:47:53.023486 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 12 05:47:53.025066 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 12 05:47:53.025087 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 12 05:47:53.026080 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 12 05:47:53.028458 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 12 05:47:53.042625 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 05:47:53.066677 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 05:47:53.076030 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 05:47:53.084763 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 12 05:47:53.092032 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 05:47:53.092497 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 05:47:53.093867 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 05:47:53.130276 disk-uuid[636]: Primary Header is updated. Sep 12 05:47:53.130276 disk-uuid[636]: Secondary Entries is updated. Sep 12 05:47:53.130276 disk-uuid[636]: Secondary Header is updated. Sep 12 05:47:53.135130 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:47:53.141487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:47:53.339584 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 05:47:53.339674 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 05:47:53.339685 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 05:47:53.339695 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 05:47:53.340491 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 05:47:53.341530 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 05:47:53.341543 kernel: ata3.00: applying bridge limits Sep 12 05:47:53.342496 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 05:47:53.343486 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 05:47:53.344592 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 05:47:53.344609 kernel: ata3.00: configured for UDMA/100 Sep 12 05:47:53.345501 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 05:47:53.403521 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 05:47:53.403867 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 05:47:53.417623 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 05:47:53.779832 systemd[1]: Finished dracut-initqueue.service - 
dracut initqueue hook. Sep 12 05:47:53.781708 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 05:47:53.783186 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 05:47:53.784342 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 05:47:53.786330 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 05:47:53.816702 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 05:47:54.141831 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 05:47:54.141898 disk-uuid[637]: The operation has completed successfully. Sep 12 05:47:54.177924 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 05:47:54.178064 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 05:47:54.208135 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 05:47:54.227560 sh[665]: Success Sep 12 05:47:54.259138 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 05:47:54.259224 kernel: device-mapper: uevent: version 1.0.3 Sep 12 05:47:54.260324 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 05:47:54.271508 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 05:47:54.303873 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 05:47:54.306882 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 05:47:54.326738 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 05:47:54.351492 kernel: BTRFS: device fsid 29ae74b1-0ab1-4a84-96e7-98d98e1ec77f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (677) Sep 12 05:47:54.354031 kernel: BTRFS info (device dm-0): first mount of filesystem 29ae74b1-0ab1-4a84-96e7-98d98e1ec77f Sep 12 05:47:54.354053 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:47:54.381951 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 05:47:54.382035 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 05:47:54.394336 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 05:47:54.411365 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 05:47:54.412432 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 05:47:54.414415 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 05:47:54.415881 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 05:47:54.450512 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Sep 12 05:47:54.453353 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:47:54.453385 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:47:54.456692 kernel: BTRFS info (device vda6): turning on async discard Sep 12 05:47:54.456758 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 05:47:54.464509 kernel: BTRFS info (device vda6): last unmount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:47:54.465308 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 05:47:54.467600 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 05:47:54.554259 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 05:47:54.557284 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 05:47:54.610351 systemd-networkd[846]: lo: Link UP Sep 12 05:47:54.610362 systemd-networkd[846]: lo: Gained carrier Sep 12 05:47:54.612596 systemd-networkd[846]: Enumeration completed Sep 12 05:47:54.613078 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 05:47:54.613084 systemd-networkd[846]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 05:47:54.614514 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 05:47:54.614579 systemd-networkd[846]: eth0: Link UP Sep 12 05:47:54.614816 systemd-networkd[846]: eth0: Gained carrier Sep 12 05:47:54.614828 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 05:47:54.620180 systemd[1]: Reached target network.target - Network. 
Sep 12 05:47:54.637551 systemd-networkd[846]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 05:47:54.961608 ignition[757]: Ignition 2.22.0 Sep 12 05:47:54.961628 ignition[757]: Stage: fetch-offline Sep 12 05:47:54.961705 ignition[757]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:47:54.962279 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:47:54.962422 ignition[757]: parsed url from cmdline: "" Sep 12 05:47:54.962426 ignition[757]: no config URL provided Sep 12 05:47:54.962433 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 05:47:54.962445 ignition[757]: no config at "/usr/lib/ignition/user.ign" Sep 12 05:47:54.962502 ignition[757]: op(1): [started] loading QEMU firmware config module Sep 12 05:47:54.962509 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 05:47:54.979105 ignition[757]: op(1): [finished] loading QEMU firmware config module Sep 12 05:47:55.017651 ignition[757]: parsing config with SHA512: e7074611259f7b3a0e6ad405883948cad7cf0b0aac031258c6ca7faa3e9f2b8d86fdddbdbcc550370db09abcbd843b7fb9960a12fe541c59d7b3b219766fb6b8 Sep 12 05:47:55.021446 unknown[757]: fetched base config from "system" Sep 12 05:47:55.021480 unknown[757]: fetched user config from "qemu" Sep 12 05:47:55.021916 ignition[757]: fetch-offline: fetch-offline passed Sep 12 05:47:55.021970 ignition[757]: Ignition finished successfully Sep 12 05:47:55.025542 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 05:47:55.028378 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 05:47:55.031273 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 05:47:55.076145 ignition[860]: Ignition 2.22.0 Sep 12 05:47:55.076160 ignition[860]: Stage: kargs Sep 12 05:47:55.076363 ignition[860]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:47:55.076378 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:47:55.077306 ignition[860]: kargs: kargs passed Sep 12 05:47:55.077386 ignition[860]: Ignition finished successfully Sep 12 05:47:55.085663 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 05:47:55.087400 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 05:47:55.146507 ignition[868]: Ignition 2.22.0 Sep 12 05:47:55.146524 ignition[868]: Stage: disks Sep 12 05:47:55.147762 ignition[868]: no configs at "/usr/lib/ignition/base.d" Sep 12 05:47:55.149321 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:47:55.151780 ignition[868]: disks: disks passed Sep 12 05:47:55.151855 ignition[868]: Ignition finished successfully Sep 12 05:47:55.155411 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 05:47:55.156104 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 05:47:55.157932 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 05:47:55.158309 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 05:47:55.158865 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 05:47:55.159251 systemd[1]: Reached target basic.target - Basic System. Sep 12 05:47:55.160867 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 05:47:55.199354 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 05:47:55.207557 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 05:47:55.212553 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 12 05:47:55.371506 kernel: EXT4-fs (vda9): mounted filesystem 2b8062f9-897a-46cb-bde4-2b62ba4cc712 r/w with ordered data mode. Quota mode: none. Sep 12 05:47:55.372307 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 05:47:55.374332 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 05:47:55.377894 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 05:47:55.380351 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 05:47:55.382217 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 05:47:55.382266 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 05:47:55.383912 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 05:47:55.397019 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 05:47:55.399950 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 05:47:55.404020 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 12 05:47:55.404044 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:47:55.404055 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 05:47:55.406868 kernel: BTRFS info (device vda6): turning on async discard Sep 12 05:47:55.406891 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 05:47:55.408207 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 05:47:55.441688 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 05:47:55.446195 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Sep 12 05:47:55.451138 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 05:47:55.456120 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 05:47:55.560697 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 05:47:55.562523 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 05:47:55.565633 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 05:47:55.583113 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 05:47:55.584932 kernel: BTRFS info (device vda6): last unmount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae Sep 12 05:47:55.600688 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 05:47:55.739821 ignition[999]: INFO : Ignition 2.22.0 Sep 12 05:47:55.739821 ignition[999]: INFO : Stage: mount Sep 12 05:47:55.741889 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 05:47:55.741889 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 05:47:55.744240 ignition[999]: INFO : mount: mount passed Sep 12 05:47:55.744240 ignition[999]: INFO : Ignition finished successfully Sep 12 05:47:55.748748 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 05:47:55.752013 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 05:47:56.374274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 12 05:47:56.404049 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012)
Sep 12 05:47:56.404121 kernel: BTRFS info (device vda6): first mount of filesystem 88e8cff7-d302-45f0-bf99-3731957f99ae
Sep 12 05:47:56.404136 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 05:47:56.408491 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 05:47:56.408527 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 05:47:56.409952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 05:47:56.468553 ignition[1029]: INFO : Ignition 2.22.0
Sep 12 05:47:56.468553 ignition[1029]: INFO : Stage: files
Sep 12 05:47:56.470402 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 05:47:56.470402 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 05:47:56.473587 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 05:47:56.475647 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 05:47:56.475647 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 05:47:56.480622 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 05:47:56.482298 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 05:47:56.482298 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 05:47:56.481446 unknown[1029]: wrote ssh authorized keys file for user: core
Sep 12 05:47:56.486569 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 05:47:56.486569 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 12 05:47:56.515619 systemd-networkd[846]: eth0: Gained IPv6LL
Sep 12 05:47:56.524316 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 05:47:56.653453 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 12 05:47:56.653453 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 05:47:56.657638 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 05:47:56.657638 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 05:47:56.657638 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 05:47:56.657638 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 05:47:56.657638 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 05:47:56.657638 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 05:47:56.657638 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 05:47:56.669580 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 05:47:56.669580 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 05:47:56.669580 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 05:47:56.669580 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 05:47:56.669580 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 05:47:56.669580 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 12 05:47:57.188873 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 05:47:58.762449 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 05:47:58.762449 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 05:47:58.767076 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 05:47:58.939735 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 05:47:58.939735 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 05:47:58.939735 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 05:47:58.939735 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 05:47:58.948100 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 05:47:58.948100 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 05:47:58.948100 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 05:47:58.964019 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 05:47:58.970459 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 05:47:58.972106 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 05:47:58.972106 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 05:47:58.972106 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 05:47:58.972106 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 05:47:58.972106 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 05:47:58.972106 ignition[1029]: INFO : files: files passed
Sep 12 05:47:58.972106 ignition[1029]: INFO : Ignition finished successfully
Sep 12 05:47:58.982446 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 05:47:58.985671 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 05:47:58.988339 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 05:47:59.015608 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 05:47:59.015760 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 05:47:59.020077 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 05:47:59.023821 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 05:47:59.025449 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 05:47:59.027068 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 05:47:59.029933 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 05:47:59.030663 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 05:47:59.033649 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 05:47:59.081445 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 05:47:59.081610 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 05:47:59.082303 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 05:47:59.086831 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 05:47:59.087367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 05:47:59.088613 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 05:47:59.128454 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 05:47:59.151171 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 05:47:59.175922 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 05:47:59.176361 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 05:47:59.176753 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 05:47:59.177102 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 05:47:59.177258 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 05:47:59.178267 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 05:47:59.178721 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 05:47:59.179042 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 05:47:59.179388 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 05:47:59.179712 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 05:47:59.180043 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 05:47:59.180357 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 05:47:59.180831 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 05:47:59.181183 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 05:47:59.181513 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 05:47:59.181817 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 05:47:59.182119 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 05:47:59.182238 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 05:47:59.182948 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 05:47:59.183268 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 05:47:59.183713 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 05:47:59.183894 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 05:47:59.184233 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 05:47:59.239831 ignition[1084]: INFO : Ignition 2.22.0
Sep 12 05:47:59.239831 ignition[1084]: INFO : Stage: umount
Sep 12 05:47:59.239831 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 05:47:59.239831 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 05:47:59.239831 ignition[1084]: INFO : umount: umount passed
Sep 12 05:47:59.239831 ignition[1084]: INFO : Ignition finished successfully
Sep 12 05:47:59.184343 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 05:47:59.185070 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 05:47:59.185197 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 05:47:59.185859 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 05:47:59.186278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 05:47:59.190643 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 05:47:59.191052 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 05:47:59.191567 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 05:47:59.191998 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 05:47:59.192121 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 05:47:59.192455 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 05:47:59.192566 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 05:47:59.193023 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 05:47:59.193147 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 05:47:59.193537 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 05:47:59.193642 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 05:47:59.195053 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 05:47:59.197643 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 05:47:59.198109 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 05:47:59.198239 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 05:47:59.198572 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 05:47:59.198681 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 05:47:59.207092 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 05:47:59.207267 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 05:47:59.235850 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 05:47:59.240969 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 05:47:59.241099 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 05:47:59.243025 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 05:47:59.243143 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 05:47:59.245573 systemd[1]: Stopped target network.target - Network.
Sep 12 05:47:59.246799 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 05:47:59.246861 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 05:47:59.247172 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 05:47:59.247235 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 05:47:59.247513 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 05:47:59.247566 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 05:47:59.248255 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 05:47:59.248303 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 05:47:59.248902 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 05:47:59.248951 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 05:47:59.249588 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 05:47:59.249954 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 05:47:59.262030 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 05:47:59.262268 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 05:47:59.268800 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 05:47:59.269114 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 05:47:59.269253 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 05:47:59.273403 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 05:47:59.274582 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 05:47:59.275027 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 05:47:59.275156 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 05:47:59.279680 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 05:47:59.280730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 05:47:59.280805 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 05:47:59.281194 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 05:47:59.281274 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 05:47:59.287556 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 05:47:59.287641 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 05:47:59.287887 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 05:47:59.287935 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 05:47:59.291816 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 05:47:59.293434 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 05:47:59.293522 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 05:47:59.317240 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 05:47:59.317530 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 05:47:59.318588 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 05:47:59.318651 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 05:47:59.320869 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 05:47:59.320933 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 05:47:59.322729 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 05:47:59.322782 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 05:47:59.323420 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 05:47:59.323497 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 05:47:59.324241 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 05:47:59.324307 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 05:47:59.334398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 05:47:59.337515 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 05:47:59.337611 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 05:47:59.341121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 05:47:59.341221 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 05:47:59.344289 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 05:47:59.344353 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 05:47:59.347297 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 05:47:59.347346 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 05:47:59.347912 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 05:47:59.347966 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 05:47:59.354406 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 05:47:59.354533 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 12 05:47:59.354590 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 05:47:59.354646 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 05:47:59.355014 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 05:47:59.355616 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 05:47:59.368400 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 05:47:59.368628 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 05:47:59.369286 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 05:47:59.374599 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 05:47:59.414556 systemd[1]: Switching root.
Sep 12 05:47:59.465792 systemd-journald[219]: Journal stopped
Sep 12 05:48:00.749537 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Sep 12 05:48:00.749616 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 05:48:00.749647 kernel: SELinux: policy capability open_perms=1
Sep 12 05:48:00.749671 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 05:48:00.749687 kernel: SELinux: policy capability always_check_network=0
Sep 12 05:48:00.749702 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 05:48:00.749724 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 05:48:00.749740 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 05:48:00.749755 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 05:48:00.749770 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 05:48:00.749792 kernel: audit: type=1403 audit(1757656079.820:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 05:48:00.749815 systemd[1]: Successfully loaded SELinux policy in 68.095ms.
Sep 12 05:48:00.749845 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.559ms.
Sep 12 05:48:00.749862 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 05:48:00.749877 systemd[1]: Detected virtualization kvm.
Sep 12 05:48:00.749894 systemd[1]: Detected architecture x86-64.
Sep 12 05:48:00.749910 systemd[1]: Detected first boot.
Sep 12 05:48:00.749926 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 05:48:00.749943 zram_generator::config[1129]: No configuration found.
Sep 12 05:48:00.749960 kernel: Guest personality initialized and is inactive
Sep 12 05:48:00.749984 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 05:48:00.750000 kernel: Initialized host personality
Sep 12 05:48:00.750015 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 05:48:00.750031 systemd[1]: Populated /etc with preset unit settings.
Sep 12 05:48:00.750049 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 05:48:00.750066 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 05:48:00.750082 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 05:48:00.750105 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 05:48:00.750129 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 05:48:00.750147 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 05:48:00.750164 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 05:48:00.750186 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 05:48:00.750203 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 05:48:00.750220 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 05:48:00.750237 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 05:48:00.750253 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 05:48:00.750269 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 05:48:00.750294 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 05:48:00.750310 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 05:48:00.750326 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 05:48:00.750343 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 05:48:00.750360 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 05:48:00.750453 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 05:48:00.750495 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 05:48:00.750513 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 05:48:00.750542 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 05:48:00.750561 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 05:48:00.750578 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 05:48:00.750595 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 05:48:00.750611 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 05:48:00.750627 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 05:48:00.750643 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 05:48:00.750658 systemd[1]: Reached target swap.target - Swaps.
Sep 12 05:48:00.750675 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 05:48:00.750702 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 05:48:00.750780 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 05:48:00.750809 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 05:48:00.750826 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 05:48:00.750853 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 05:48:00.750871 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 05:48:00.750887 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 05:48:00.750903 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 05:48:00.750919 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 05:48:00.750945 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:48:00.750962 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 05:48:00.750978 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 05:48:00.750994 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 05:48:00.751011 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 05:48:00.751028 systemd[1]: Reached target machines.target - Containers.
Sep 12 05:48:00.751045 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 05:48:00.751061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 05:48:00.751085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 05:48:00.751102 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 05:48:00.751117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 05:48:00.751133 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 05:48:00.751149 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 05:48:00.751165 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 05:48:00.751181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 05:48:00.751198 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 05:48:00.751218 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 05:48:00.751241 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 05:48:00.751258 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 05:48:00.751275 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 05:48:00.751292 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 05:48:00.751308 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 05:48:00.751324 kernel: loop: module loaded
Sep 12 05:48:00.751341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 05:48:00.751357 kernel: fuse: init (API version 7.41)
Sep 12 05:48:00.751380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 05:48:00.751398 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 05:48:00.751414 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 05:48:00.751437 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 05:48:00.751459 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 05:48:00.751495 systemd[1]: Stopped verity-setup.service.
Sep 12 05:48:00.751513 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:48:00.751529 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 05:48:00.751546 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 05:48:00.751562 kernel: ACPI: bus type drm_connector registered
Sep 12 05:48:00.751578 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 05:48:00.751603 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 05:48:00.751619 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 05:48:00.751636 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 05:48:00.751653 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 05:48:00.751699 systemd-journald[1193]: Collecting audit messages is disabled.
Sep 12 05:48:00.751730 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 05:48:00.751747 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 05:48:00.751771 systemd-journald[1193]: Journal started
Sep 12 05:48:00.751800 systemd-journald[1193]: Runtime Journal (/run/log/journal/b405659375c34a2194af6c0a92e069a0) is 6M, max 48.4M, 42.4M free.
Sep 12 05:48:00.402425 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 05:48:00.427104 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 05:48:00.427620 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 05:48:00.754508 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 05:48:00.756153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 05:48:00.756371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 05:48:00.763416 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 05:48:00.763651 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 05:48:00.765020 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 05:48:00.765235 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 05:48:00.766794 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 05:48:00.767016 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 05:48:00.768337 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 05:48:00.768705 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 05:48:00.770179 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 05:48:00.771626 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 05:48:00.773144 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 05:48:00.774966 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 05:48:00.788634 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 05:48:00.791080 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 05:48:00.796342 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 05:48:00.797618 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 05:48:00.797650 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 05:48:00.799744 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 05:48:00.806618 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 05:48:00.808079 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 05:48:00.811438 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 05:48:00.815168 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 05:48:00.817399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 05:48:00.820130 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 05:48:00.821539 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 05:48:00.825574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 05:48:00.829560 systemd-journald[1193]: Time spent on flushing to /var/log/journal/b405659375c34a2194af6c0a92e069a0 is 27.338ms for 1071 entries.
Sep 12 05:48:00.829560 systemd-journald[1193]: System Journal (/var/log/journal/b405659375c34a2194af6c0a92e069a0) is 8M, max 195.6M, 187.6M free.
Sep 12 05:48:01.024030 systemd-journald[1193]: Received client request to flush runtime journal.
Sep 12 05:48:01.024074 kernel: loop0: detected capacity change from 0 to 128016
Sep 12 05:48:01.024102 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 05:48:01.024116 kernel: loop1: detected capacity change from 0 to 110984
Sep 12 05:48:00.831365 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 05:48:00.848820 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 05:48:00.897020 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 05:48:00.899020 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 05:48:00.900774 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 05:48:00.902369 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 05:48:00.921372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 05:48:00.973566 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 12 05:48:00.973580 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 12 05:48:00.979672 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 05:48:00.984634 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 05:48:01.025874 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 05:48:01.027616 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 05:48:01.031808 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 05:48:01.034274 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 05:48:01.100499 kernel: loop2: detected capacity change from 0 to 229808
Sep 12 05:48:01.165105 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 05:48:01.167687 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 05:48:01.214498 kernel: loop3: detected capacity change from 0 to 128016
Sep 12 05:48:01.222702 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Sep 12 05:48:01.222721 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Sep 12 05:48:01.278099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 05:48:01.286505 kernel: loop4: detected capacity change from 0 to 110984
Sep 12 05:48:01.379498 kernel: loop5: detected capacity change from 0 to 229808
Sep 12 05:48:01.414635 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 05:48:01.415254 (sd-merge)[1270]: Merged extensions into '/usr'.
Sep 12 05:48:01.422709 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 05:48:01.422731 systemd[1]: Reloading...
Sep 12 05:48:01.550510 zram_generator::config[1305]: No configuration found.
Sep 12 05:48:01.753106 systemd[1]: Reloading finished in 329 ms.
Sep 12 05:48:01.757112 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 05:48:01.779092 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 05:48:01.799459 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 05:48:01.801218 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 05:48:01.802988 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 05:48:01.812884 systemd[1]: Starting ensure-sysext.service...
Sep 12 05:48:01.815766 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 05:48:01.835132 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)...
Sep 12 05:48:01.835153 systemd[1]: Reloading...
Sep 12 05:48:01.848230 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 05:48:01.848283 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 05:48:01.849310 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 05:48:01.849738 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 05:48:01.851127 systemd-tmpfiles[1338]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 05:48:01.851770 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 12 05:48:01.851967 systemd-tmpfiles[1338]: ACLs are not supported, ignoring.
Sep 12 05:48:01.857248 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 05:48:01.857349 systemd-tmpfiles[1338]: Skipping /boot
Sep 12 05:48:01.876978 systemd-tmpfiles[1338]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 05:48:01.877136 systemd-tmpfiles[1338]: Skipping /boot
Sep 12 05:48:01.900511 zram_generator::config[1366]: No configuration found.
Sep 12 05:48:02.088316 systemd[1]: Reloading finished in 252 ms.
Sep 12 05:48:02.112161 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 05:48:02.135087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 05:48:02.144600 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 05:48:02.147699 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 05:48:02.160079 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 05:48:02.166298 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 05:48:02.171824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 05:48:02.175725 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 05:48:02.181359 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:48:02.182405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 05:48:02.187776 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 05:48:02.191734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 05:48:02.196066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 05:48:02.197636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 05:48:02.197861 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 05:48:02.198037 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:48:02.199865 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 05:48:02.200562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 05:48:02.206689 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 05:48:02.209895 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 05:48:02.210298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 05:48:02.213075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 05:48:02.213440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 05:48:02.220149 augenrules[1434]: No rules
Sep 12 05:48:02.225318 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 05:48:02.225729 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 05:48:02.227586 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 05:48:02.232315 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:48:02.232634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 05:48:02.234174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 05:48:02.235549 systemd-udevd[1415]: Using default interface naming scheme 'v255'.
Sep 12 05:48:02.237050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 05:48:02.241732 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 05:48:02.243112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 05:48:02.243258 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 05:48:02.250226 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 05:48:02.253574 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 05:48:02.255227 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:48:02.257965 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 05:48:02.260201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 05:48:02.260507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 05:48:02.264076 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 05:48:02.264318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 05:48:02.266013 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 05:48:02.268017 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 05:48:02.268258 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 05:48:02.270027 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 05:48:02.287235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:48:02.291856 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 05:48:02.293339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 05:48:02.296767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 05:48:02.302751 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 05:48:02.310744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 05:48:02.313806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 05:48:02.315679 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 05:48:02.315804 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 05:48:02.318048 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 05:48:02.327422 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 05:48:02.327585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 05:48:02.329997 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 05:48:02.330389 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 05:48:02.334603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 05:48:02.334874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 05:48:02.349003 augenrules[1472]: /sbin/augenrules: No change
Sep 12 05:48:02.351313 systemd[1]: Finished ensure-sysext.service.
Sep 12 05:48:02.353110 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 05:48:02.356097 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 05:48:02.356327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 05:48:02.364339 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 05:48:02.365640 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 05:48:02.372376 augenrules[1510]: No rules
Sep 12 05:48:02.372752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 05:48:02.372864 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 05:48:02.377613 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 05:48:02.380613 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 05:48:02.380899 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 05:48:02.421965 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 05:48:02.498649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 05:48:02.502834 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 05:48:02.524493 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 05:48:02.531658 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 05:48:02.537273 systemd-networkd[1489]: lo: Link UP
Sep 12 05:48:02.537603 systemd-networkd[1489]: lo: Gained carrier
Sep 12 05:48:02.539564 systemd-networkd[1489]: Enumeration completed
Sep 12 05:48:02.541886 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 05:48:02.541994 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 05:48:02.542616 systemd-networkd[1489]: eth0: Link UP
Sep 12 05:48:02.543362 systemd-networkd[1489]: eth0: Gained carrier
Sep 12 05:48:02.543432 systemd-networkd[1489]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 05:48:02.544438 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 05:48:02.552249 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 05:48:02.556052 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 05:48:02.560509 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 05:48:02.562552 systemd-networkd[1489]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 05:48:02.566488 kernel: ACPI: button: Power Button [PWRF]
Sep 12 05:48:02.574727 systemd-resolved[1408]: Positive Trust Anchors:
Sep 12 05:48:02.574741 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 05:48:02.574771 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 05:48:02.580388 systemd-resolved[1408]: Defaulting to hostname 'linux'.
Sep 12 05:48:02.582285 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 05:48:02.583711 systemd[1]: Reached target network.target - Network.
Sep 12 05:48:02.584837 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 05:48:02.598547 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 05:48:02.604744 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 12 05:48:02.605123 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 05:48:02.607566 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 05:48:02.612772 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 05:48:03.575634 systemd-resolved[1408]: Clock change detected. Flushing caches.
Sep 12 05:48:03.575739 systemd-timesyncd[1520]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 05:48:03.575794 systemd-timesyncd[1520]: Initial clock synchronization to Fri 2025-09-12 05:48:03.575561 UTC.
Sep 12 05:48:03.576900 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 05:48:03.578202 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 05:48:03.579652 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 05:48:03.581593 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 12 05:48:03.582897 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 05:48:03.584593 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 05:48:03.584623 systemd[1]: Reached target paths.target - Path Units.
Sep 12 05:48:03.586588 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 05:48:03.587937 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 05:48:03.589230 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 05:48:03.591580 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 05:48:03.595106 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 05:48:03.599453 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 05:48:03.605367 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 05:48:03.608857 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 05:48:03.610183 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 05:48:03.627766 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 05:48:03.629703 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 05:48:03.633265 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 05:48:03.635417 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 05:48:03.637391 systemd[1]: Reached target basic.target - Basic System.
Sep 12 05:48:03.638665 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 05:48:03.638704 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 05:48:03.641414 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 05:48:03.644784 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 05:48:03.653031 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 05:48:03.656070 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 05:48:03.660714 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 05:48:03.661950 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 05:48:03.670741 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 12 05:48:03.673756 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 05:48:03.676150 jq[1559]: false
Sep 12 05:48:03.676654 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 05:48:03.703144 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache
Sep 12 05:48:03.702756 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 05:48:03.701396 oslogin_cache_refresh[1562]: Refreshing passwd entry cache
Sep 12 05:48:03.707297 extend-filesystems[1560]: Found /dev/vda6
Sep 12 05:48:03.710950 extend-filesystems[1560]: Found /dev/vda9
Sep 12 05:48:03.712121 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 05:48:03.713081 extend-filesystems[1560]: Checking size of /dev/vda9
Sep 12 05:48:03.717008 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting
Sep 12 05:48:03.717008 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 05:48:03.717008 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache
Sep 12 05:48:03.716483 oslogin_cache_refresh[1562]: Failure getting users, quitting
Sep 12 05:48:03.716505 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 12 05:48:03.716617 oslogin_cache_refresh[1562]: Refreshing group entry cache
Sep 12 05:48:03.721458 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 05:48:03.727682 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 05:48:03.728416 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 05:48:03.729217 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 05:48:03.738886 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting
Sep 12 05:48:03.738886 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 05:48:03.738852 oslogin_cache_refresh[1562]: Failure getting groups, quitting
Sep 12 05:48:03.738870 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 12 05:48:03.758175 extend-filesystems[1560]: Resized partition /dev/vda9
Sep 12 05:48:03.801791 extend-filesystems[1584]: resize2fs 1.47.3 (8-Jul-2025)
Sep 12 05:48:03.816897 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 05:48:03.829590 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 05:48:03.832765 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 05:48:03.834714 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 05:48:03.835010 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 05:48:03.835348 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 12 05:48:03.835621 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 12 05:48:03.836104 jq[1587]: true
Sep 12 05:48:03.868250 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 05:48:03.868657 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 05:48:03.871539 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 05:48:03.871888 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 05:48:03.891887 jq[1591]: true
Sep 12 05:48:03.903908 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 05:48:03.921867 update_engine[1581]: I20250912 05:48:03.921777 1581 main.cc:92] Flatcar Update Engine starting
Sep 12 05:48:03.935675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 05:48:04.064983 kernel: kvm_amd: TSC scaling supported
Sep 12 05:48:04.065063 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 05:48:04.120887 kernel: kvm_amd: Nested Paging enabled
Sep 12 05:48:04.120931 kernel: kvm_amd: LBR virtualization supported
Sep 12 05:48:04.121198 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 05:48:04.121220 kernel: kvm_amd: Virtual GIF supported
Sep 12 05:48:04.121263 update_engine[1581]: I20250912 05:48:04.084834 1581 update_check_scheduler.cc:74] Next update check in 9m1s
Sep 12 05:48:04.080842 dbus-daemon[1557]: [system] SELinux support is enabled
Sep 12 05:48:04.079039 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 12 05:48:04.079081 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 05:48:04.081035 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 05:48:04.083681 systemd-logind[1577]: New seat seat0.
Sep 12 05:48:04.123655 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 05:48:04.125509 dbus-daemon[1557]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 05:48:04.126795 tar[1590]: linux-amd64/LICENSE
Sep 12 05:48:04.127151 tar[1590]: linux-amd64/helm
Sep 12 05:48:04.128945 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 05:48:04.178151 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 05:48:04.178730 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 05:48:04.179546 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 05:48:04.180051 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 05:48:04.185583 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 05:48:04.436364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 05:48:04.471566 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 05:48:04.473538 kernel: EDAC MC: Ver: 3.0.0
Sep 12 05:48:04.496330 locksmithd[1624]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 05:48:04.818112 containerd[1592]: time="2025-09-12T05:48:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 05:48:04.708828 systemd-networkd[1489]: eth0: Gained IPv6LL
Sep 12 05:48:04.818858 extend-filesystems[1584]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 05:48:04.818858 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 05:48:04.818858 extend-filesystems[1584]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 05:48:04.823475 containerd[1592]: time="2025-09-12T05:48:04.818363725Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 05:48:04.712570 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 05:48:04.823862 extend-filesystems[1560]: Resized filesystem in /dev/vda9
Sep 12 05:48:04.714480 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 05:48:04.717331 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 12 05:48:04.733209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 05:48:04.739714 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 05:48:04.766654 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 05:48:04.766959 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 05:48:04.772409 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 05:48:04.823346 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 05:48:04.824399 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 05:48:04.832555 bash[1618]: Updated "/home/core/.ssh/authorized_keys" Sep 12 05:48:04.838161 sshd_keygen[1621]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 05:48:04.838639 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 05:48:04.847858 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 05:48:04.858234 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 12 05:48:04.871344 containerd[1592]: time="2025-09-12T05:48:04.871263811Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="23.915µs" Sep 12 05:48:04.871344 containerd[1592]: time="2025-09-12T05:48:04.871333672Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 05:48:04.871470 containerd[1592]: time="2025-09-12T05:48:04.871363579Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 05:48:04.871649 containerd[1592]: time="2025-09-12T05:48:04.871621502Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 05:48:04.871682 containerd[1592]: time="2025-09-12T05:48:04.871647120Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 05:48:04.871703 containerd[1592]: time="2025-09-12T05:48:04.871686925Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 05:48:04.871800 containerd[1592]: time="2025-09-12T05:48:04.871764721Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 05:48:04.871800 containerd[1592]: time="2025-09-12T05:48:04.871786682Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 05:48:04.873058 containerd[1592]: time="2025-09-12T05:48:04.872155033Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 05:48:04.873058 containerd[1592]: time="2025-09-12T05:48:04.872175661Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 05:48:04.873058 containerd[1592]: time="2025-09-12T05:48:04.872190349Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 05:48:04.873058 containerd[1592]: time="2025-09-12T05:48:04.872201420Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 05:48:04.873058 containerd[1592]: time="2025-09-12T05:48:04.872298822Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 05:48:04.873058 containerd[1592]: time="2025-09-12T05:48:04.872815391Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 05:48:04.873058 containerd[1592]: time="2025-09-12T05:48:04.872867008Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 05:48:04.873058 containerd[1592]: time="2025-09-12T05:48:04.872877918Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 05:48:04.873058 containerd[1592]: time="2025-09-12T05:48:04.872907394Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 05:48:04.873246 containerd[1592]: time="2025-09-12T05:48:04.873209300Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 05:48:04.873305 containerd[1592]: time="2025-09-12T05:48:04.873280734Z" level=info msg="metadata content store policy set" policy=shared Sep 12 05:48:04.884612 containerd[1592]: time="2025-09-12T05:48:04.884571398Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Sep 12 05:48:04.884673 containerd[1592]: time="2025-09-12T05:48:04.884645988Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 05:48:04.884673 containerd[1592]: time="2025-09-12T05:48:04.884666616Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 05:48:04.884711 containerd[1592]: time="2025-09-12T05:48:04.884678569Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 05:48:04.884711 containerd[1592]: time="2025-09-12T05:48:04.884690141Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 05:48:04.884711 containerd[1592]: time="2025-09-12T05:48:04.884699779Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 05:48:04.884763 containerd[1592]: time="2025-09-12T05:48:04.884711962Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 05:48:04.884789 containerd[1592]: time="2025-09-12T05:48:04.884760623Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 05:48:04.884789 containerd[1592]: time="2025-09-12T05:48:04.884773797Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 05:48:04.884789 containerd[1592]: time="2025-09-12T05:48:04.884783526Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 05:48:04.884852 containerd[1592]: time="2025-09-12T05:48:04.884803854Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 05:48:04.884852 containerd[1592]: time="2025-09-12T05:48:04.884829081Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Sep 12 05:48:04.885034 containerd[1592]: time="2025-09-12T05:48:04.885007756Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 05:48:04.885071 containerd[1592]: time="2025-09-12T05:48:04.885059724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 05:48:04.885092 containerd[1592]: time="2025-09-12T05:48:04.885077818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 05:48:04.885112 containerd[1592]: time="2025-09-12T05:48:04.885090171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 05:48:04.885132 containerd[1592]: time="2025-09-12T05:48:04.885125447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 05:48:04.885151 containerd[1592]: time="2025-09-12T05:48:04.885137950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 05:48:04.885151 containerd[1592]: time="2025-09-12T05:48:04.885148380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 05:48:04.885197 containerd[1592]: time="2025-09-12T05:48:04.885158349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 05:48:04.885197 containerd[1592]: time="2025-09-12T05:48:04.885168247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 05:48:04.885197 containerd[1592]: time="2025-09-12T05:48:04.885178937Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 05:48:04.885197 containerd[1592]: time="2025-09-12T05:48:04.885189788Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 05:48:04.885287 containerd[1592]: 
time="2025-09-12T05:48:04.885271200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 05:48:04.885314 containerd[1592]: time="2025-09-12T05:48:04.885287731Z" level=info msg="Start snapshots syncer" Sep 12 05:48:04.885334 containerd[1592]: time="2025-09-12T05:48:04.885312217Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 05:48:04.886126 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 05:48:04.887363 containerd[1592]: time="2025-09-12T05:48:04.886423912Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\
"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 05:48:04.887363 containerd[1592]: time="2025-09-12T05:48:04.886530742Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 05:48:04.887671 containerd[1592]: time="2025-09-12T05:48:04.887650432Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 05:48:04.887889 containerd[1592]: time="2025-09-12T05:48:04.887849025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 05:48:04.887889 containerd[1592]: time="2025-09-12T05:48:04.887883199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 05:48:04.887945 containerd[1592]: time="2025-09-12T05:48:04.887893949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 05:48:04.887945 containerd[1592]: time="2025-09-12T05:48:04.887907745Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 05:48:04.887986 containerd[1592]: time="2025-09-12T05:48:04.887950415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 05:48:04.887986 containerd[1592]: time="2025-09-12T05:48:04.887965894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 05:48:04.887986 containerd[1592]: time="2025-09-12T05:48:04.887976754Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 
12 05:48:04.888065 containerd[1592]: time="2025-09-12T05:48:04.887998325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 05:48:04.888065 containerd[1592]: time="2025-09-12T05:48:04.888008484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 05:48:04.888065 containerd[1592]: time="2025-09-12T05:48:04.888030355Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 05:48:04.888125 containerd[1592]: time="2025-09-12T05:48:04.888079166Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 05:48:04.888125 containerd[1592]: time="2025-09-12T05:48:04.888096188Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 05:48:04.888125 containerd[1592]: time="2025-09-12T05:48:04.888104844Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 05:48:04.888194 containerd[1592]: time="2025-09-12T05:48:04.888113871Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 05:48:04.888194 containerd[1592]: time="2025-09-12T05:48:04.888187890Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 05:48:04.888240 containerd[1592]: time="2025-09-12T05:48:04.888198360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 05:48:04.888240 containerd[1592]: time="2025-09-12T05:48:04.888208338Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 05:48:04.888240 containerd[1592]: time="2025-09-12T05:48:04.888225591Z" 
level=info msg="runtime interface created" Sep 12 05:48:04.888240 containerd[1592]: time="2025-09-12T05:48:04.888230660Z" level=info msg="created NRI interface" Sep 12 05:48:04.888240 containerd[1592]: time="2025-09-12T05:48:04.888238565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 05:48:04.888326 containerd[1592]: time="2025-09-12T05:48:04.888251059Z" level=info msg="Connect containerd service" Sep 12 05:48:04.888326 containerd[1592]: time="2025-09-12T05:48:04.888275965Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 05:48:04.889911 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 05:48:04.892897 containerd[1592]: time="2025-09-12T05:48:04.892856966Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 05:48:04.909335 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 05:48:04.909645 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 05:48:04.913786 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 05:48:04.941054 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 05:48:04.944470 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 05:48:04.947483 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 05:48:04.948760 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 12 05:48:05.041645 tar[1590]: linux-amd64/README.md Sep 12 05:48:05.044969 containerd[1592]: time="2025-09-12T05:48:05.044911353Z" level=info msg="Start subscribing containerd event" Sep 12 05:48:05.045226 containerd[1592]: time="2025-09-12T05:48:05.045143217Z" level=info msg="Start recovering state" Sep 12 05:48:05.045554 containerd[1592]: time="2025-09-12T05:48:05.045176339Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 05:48:05.045554 containerd[1592]: time="2025-09-12T05:48:05.045410128Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 05:48:05.045554 containerd[1592]: time="2025-09-12T05:48:05.045424094Z" level=info msg="Start event monitor" Sep 12 05:48:05.045554 containerd[1592]: time="2025-09-12T05:48:05.045445274Z" level=info msg="Start cni network conf syncer for default" Sep 12 05:48:05.045554 containerd[1592]: time="2025-09-12T05:48:05.045456475Z" level=info msg="Start streaming server" Sep 12 05:48:05.045554 containerd[1592]: time="2025-09-12T05:48:05.045473166Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 05:48:05.045554 containerd[1592]: time="2025-09-12T05:48:05.045483746Z" level=info msg="runtime interface starting up..." Sep 12 05:48:05.045554 containerd[1592]: time="2025-09-12T05:48:05.045489938Z" level=info msg="starting plugins..." Sep 12 05:48:05.045554 containerd[1592]: time="2025-09-12T05:48:05.045507320Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 05:48:05.047367 containerd[1592]: time="2025-09-12T05:48:05.045842308Z" level=info msg="containerd successfully booted in 0.297902s" Sep 12 05:48:05.045934 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 05:48:05.072268 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 05:48:05.872949 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 12 05:48:05.875602 systemd[1]: Started sshd@0-10.0.0.17:22-10.0.0.1:57526.service - OpenSSH per-connection server daemon (10.0.0.1:57526). Sep 12 05:48:05.976248 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 57526 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:05.978179 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:05.985186 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 05:48:05.987558 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 05:48:05.995328 systemd-logind[1577]: New session 1 of user core. Sep 12 05:48:06.014288 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 05:48:06.020143 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 05:48:06.063976 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 05:48:06.066942 systemd-logind[1577]: New session c1 of user core. Sep 12 05:48:06.166613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:06.168721 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 05:48:06.186150 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:48:06.255345 systemd[1701]: Queued start job for default target default.target. Sep 12 05:48:06.268836 systemd[1701]: Created slice app.slice - User Application Slice. Sep 12 05:48:06.268862 systemd[1701]: Reached target paths.target - Paths. Sep 12 05:48:06.268902 systemd[1701]: Reached target timers.target - Timers. Sep 12 05:48:06.270483 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 05:48:06.282165 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Sep 12 05:48:06.282296 systemd[1701]: Reached target sockets.target - Sockets. Sep 12 05:48:06.282338 systemd[1701]: Reached target basic.target - Basic System. Sep 12 05:48:06.282378 systemd[1701]: Reached target default.target - Main User Target. Sep 12 05:48:06.282410 systemd[1701]: Startup finished in 206ms. Sep 12 05:48:06.282824 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 05:48:06.285352 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 05:48:06.286571 systemd[1]: Startup finished in 3.474s (kernel) + 8.168s (initrd) + 5.571s (userspace) = 17.215s. Sep 12 05:48:06.357698 systemd[1]: Started sshd@1-10.0.0.17:22-10.0.0.1:57536.service - OpenSSH per-connection server daemon (10.0.0.1:57536). Sep 12 05:48:06.449513 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 57536 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:06.451457 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:06.456129 systemd-logind[1577]: New session 2 of user core. Sep 12 05:48:06.463677 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 05:48:06.542702 sshd[1730]: Connection closed by 10.0.0.1 port 57536 Sep 12 05:48:06.543423 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:06.556422 systemd[1]: sshd@1-10.0.0.17:22-10.0.0.1:57536.service: Deactivated successfully. Sep 12 05:48:06.558661 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 05:48:06.559579 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. Sep 12 05:48:06.563158 systemd[1]: Started sshd@2-10.0.0.17:22-10.0.0.1:57548.service - OpenSSH per-connection server daemon (10.0.0.1:57548). Sep 12 05:48:06.563921 systemd-logind[1577]: Removed session 2. 
Sep 12 05:48:06.624395 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 57548 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:06.628048 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:06.633182 systemd-logind[1577]: New session 3 of user core. Sep 12 05:48:06.637674 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 05:48:06.702181 sshd[1740]: Connection closed by 10.0.0.1 port 57548 Sep 12 05:48:06.702984 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:06.716980 systemd[1]: sshd@2-10.0.0.17:22-10.0.0.1:57548.service: Deactivated successfully. Sep 12 05:48:06.718769 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 05:48:06.719471 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Sep 12 05:48:06.722228 systemd[1]: Started sshd@3-10.0.0.17:22-10.0.0.1:57558.service - OpenSSH per-connection server daemon (10.0.0.1:57558). Sep 12 05:48:06.722991 systemd-logind[1577]: Removed session 3. Sep 12 05:48:06.783055 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 57558 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:06.784800 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:06.789350 systemd-logind[1577]: New session 4 of user core. Sep 12 05:48:06.796776 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 12 05:48:06.807380 kubelet[1712]: E0912 05:48:06.807331 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 05:48:06.811726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 05:48:06.811937 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 05:48:06.812333 systemd[1]: kubelet.service: Consumed 1.639s CPU time, 268.5M memory peak. Sep 12 05:48:06.862553 sshd[1749]: Connection closed by 10.0.0.1 port 57558 Sep 12 05:48:06.862927 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:06.871149 systemd[1]: sshd@3-10.0.0.17:22-10.0.0.1:57558.service: Deactivated successfully. Sep 12 05:48:06.873070 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 05:48:06.873840 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Sep 12 05:48:06.876983 systemd[1]: Started sshd@4-10.0.0.17:22-10.0.0.1:57562.service - OpenSSH per-connection server daemon (10.0.0.1:57562). Sep 12 05:48:06.877589 systemd-logind[1577]: Removed session 4. Sep 12 05:48:06.933175 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 57562 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:06.934425 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:06.938724 systemd-logind[1577]: New session 5 of user core. Sep 12 05:48:06.948717 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 05:48:07.007416 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 05:48:07.007763 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:48:07.025245 sudo[1761]: pam_unix(sudo:session): session closed for user root Sep 12 05:48:07.026810 sshd[1760]: Connection closed by 10.0.0.1 port 57562 Sep 12 05:48:07.027251 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:07.040442 systemd[1]: sshd@4-10.0.0.17:22-10.0.0.1:57562.service: Deactivated successfully. Sep 12 05:48:07.042468 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 05:48:07.043325 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Sep 12 05:48:07.046987 systemd[1]: Started sshd@5-10.0.0.17:22-10.0.0.1:57570.service - OpenSSH per-connection server daemon (10.0.0.1:57570). Sep 12 05:48:07.047509 systemd-logind[1577]: Removed session 5. Sep 12 05:48:07.104094 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 57570 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:07.105490 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:07.109901 systemd-logind[1577]: New session 6 of user core. Sep 12 05:48:07.122652 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 05:48:07.175488 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 05:48:07.175862 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:48:07.233509 sudo[1773]: pam_unix(sudo:session): session closed for user root Sep 12 05:48:07.239860 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 05:48:07.240164 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:48:07.250352 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 05:48:07.340986 augenrules[1795]: No rules Sep 12 05:48:07.342853 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 05:48:07.343153 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 05:48:07.344617 sudo[1772]: pam_unix(sudo:session): session closed for user root Sep 12 05:48:07.346250 sshd[1771]: Connection closed by 10.0.0.1 port 57570 Sep 12 05:48:07.346686 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Sep 12 05:48:07.358152 systemd[1]: sshd@5-10.0.0.17:22-10.0.0.1:57570.service: Deactivated successfully. Sep 12 05:48:07.360232 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 05:48:07.361096 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Sep 12 05:48:07.363993 systemd[1]: Started sshd@6-10.0.0.17:22-10.0.0.1:57580.service - OpenSSH per-connection server daemon (10.0.0.1:57580). Sep 12 05:48:07.364851 systemd-logind[1577]: Removed session 6. Sep 12 05:48:07.421136 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 57580 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:48:07.422717 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:48:07.427290 systemd-logind[1577]: New session 7 of user core. 
Sep 12 05:48:07.437662 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 05:48:07.491999 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 05:48:07.492310 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 05:48:08.136034 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 05:48:08.153944 (dockerd)[1829]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 05:48:08.528180 dockerd[1829]: time="2025-09-12T05:48:08.528088864Z" level=info msg="Starting up" Sep 12 05:48:08.529174 dockerd[1829]: time="2025-09-12T05:48:08.529131249Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 05:48:08.544918 dockerd[1829]: time="2025-09-12T05:48:08.544857170Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 05:48:09.019076 dockerd[1829]: time="2025-09-12T05:48:09.019001139Z" level=info msg="Loading containers: start." Sep 12 05:48:09.031607 kernel: Initializing XFRM netlink socket Sep 12 05:48:09.329141 systemd-networkd[1489]: docker0: Link UP Sep 12 05:48:09.336202 dockerd[1829]: time="2025-09-12T05:48:09.336151850Z" level=info msg="Loading containers: done." Sep 12 05:48:09.352796 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2932786024-merged.mount: Deactivated successfully. 
Sep 12 05:48:09.354801 dockerd[1829]: time="2025-09-12T05:48:09.354742665Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 05:48:09.354875 dockerd[1829]: time="2025-09-12T05:48:09.354849876Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 05:48:09.354966 dockerd[1829]: time="2025-09-12T05:48:09.354945725Z" level=info msg="Initializing buildkit" Sep 12 05:48:09.387078 dockerd[1829]: time="2025-09-12T05:48:09.387028172Z" level=info msg="Completed buildkit initialization" Sep 12 05:48:09.394166 dockerd[1829]: time="2025-09-12T05:48:09.394090656Z" level=info msg="Daemon has completed initialization" Sep 12 05:48:09.394340 dockerd[1829]: time="2025-09-12T05:48:09.394196545Z" level=info msg="API listen on /run/docker.sock" Sep 12 05:48:09.394495 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 05:48:10.477713 containerd[1592]: time="2025-09-12T05:48:10.477604511Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 05:48:11.416544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290458156.mount: Deactivated successfully. 
Sep 12 05:48:12.873965 containerd[1592]: time="2025-09-12T05:48:12.873910070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:12.874846 containerd[1592]: time="2025-09-12T05:48:12.874817812Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 12 05:48:12.876238 containerd[1592]: time="2025-09-12T05:48:12.876186158Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:12.880994 containerd[1592]: time="2025-09-12T05:48:12.880964780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:12.882087 containerd[1592]: time="2025-09-12T05:48:12.882013917Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.404336008s" Sep 12 05:48:12.882134 containerd[1592]: time="2025-09-12T05:48:12.882093527Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 12 05:48:12.883085 containerd[1592]: time="2025-09-12T05:48:12.883041735Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 05:48:14.540721 containerd[1592]: time="2025-09-12T05:48:14.540650934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:14.541446 containerd[1592]: time="2025-09-12T05:48:14.541418924Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 12 05:48:14.542661 containerd[1592]: time="2025-09-12T05:48:14.542606471Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:14.545166 containerd[1592]: time="2025-09-12T05:48:14.545084689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:14.546150 containerd[1592]: time="2025-09-12T05:48:14.546094553Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.663021689s" Sep 12 05:48:14.546150 containerd[1592]: time="2025-09-12T05:48:14.546134848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 12 05:48:14.546835 containerd[1592]: time="2025-09-12T05:48:14.546774948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 05:48:16.147787 containerd[1592]: time="2025-09-12T05:48:16.147708911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:16.148485 containerd[1592]: time="2025-09-12T05:48:16.148448688Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 12 05:48:16.149654 containerd[1592]: time="2025-09-12T05:48:16.149623662Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:16.152114 containerd[1592]: time="2025-09-12T05:48:16.152052206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:16.153066 containerd[1592]: time="2025-09-12T05:48:16.153018559Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.606210047s" Sep 12 05:48:16.153066 containerd[1592]: time="2025-09-12T05:48:16.153065196Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 12 05:48:16.153782 containerd[1592]: time="2025-09-12T05:48:16.153561497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 05:48:16.964398 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 05:48:16.966467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:17.560497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 05:48:17.571861 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 05:48:17.651312 kubelet[2125]: E0912 05:48:17.651218 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 05:48:17.658650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 05:48:17.658950 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 05:48:17.659460 systemd[1]: kubelet.service: Consumed 337ms CPU time, 108.7M memory peak. Sep 12 05:48:18.434258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2008255009.mount: Deactivated successfully. Sep 12 05:48:19.046233 containerd[1592]: time="2025-09-12T05:48:19.046138952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:19.046953 containerd[1592]: time="2025-09-12T05:48:19.046893257Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 12 05:48:19.048278 containerd[1592]: time="2025-09-12T05:48:19.048216679Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:19.050119 containerd[1592]: time="2025-09-12T05:48:19.050061508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:19.050696 containerd[1592]: time="2025-09-12T05:48:19.050651795Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.897057347s" Sep 12 05:48:19.050696 containerd[1592]: time="2025-09-12T05:48:19.050693423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 12 05:48:19.051557 containerd[1592]: time="2025-09-12T05:48:19.051456334Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 05:48:19.661690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1323255570.mount: Deactivated successfully. Sep 12 05:48:21.074395 containerd[1592]: time="2025-09-12T05:48:21.074276514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:21.075195 containerd[1592]: time="2025-09-12T05:48:21.075134994Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 12 05:48:21.076356 containerd[1592]: time="2025-09-12T05:48:21.076278148Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:21.078974 containerd[1592]: time="2025-09-12T05:48:21.078926464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:21.080157 containerd[1592]: time="2025-09-12T05:48:21.080118279Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.02863263s" Sep 12 05:48:21.080157 containerd[1592]: time="2025-09-12T05:48:21.080156812Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 12 05:48:21.080696 containerd[1592]: time="2025-09-12T05:48:21.080669874Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 05:48:21.505212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140965303.mount: Deactivated successfully. Sep 12 05:48:21.509636 containerd[1592]: time="2025-09-12T05:48:21.509587658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 05:48:21.510335 containerd[1592]: time="2025-09-12T05:48:21.510257734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 05:48:21.511503 containerd[1592]: time="2025-09-12T05:48:21.511454178Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 05:48:21.513606 containerd[1592]: time="2025-09-12T05:48:21.513559196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 05:48:21.514223 containerd[1592]: time="2025-09-12T05:48:21.514178818Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 433.48022ms" Sep 12 05:48:21.514223 containerd[1592]: time="2025-09-12T05:48:21.514218913Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 05:48:21.514879 containerd[1592]: time="2025-09-12T05:48:21.514840859Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 05:48:22.121122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2618586149.mount: Deactivated successfully. Sep 12 05:48:23.698756 containerd[1592]: time="2025-09-12T05:48:23.698680628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:23.699433 containerd[1592]: time="2025-09-12T05:48:23.699377946Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 12 05:48:23.700683 containerd[1592]: time="2025-09-12T05:48:23.700652977Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:23.703414 containerd[1592]: time="2025-09-12T05:48:23.703354213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:23.704865 containerd[1592]: time="2025-09-12T05:48:23.704816846Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.189937885s" Sep 12 05:48:23.704931 containerd[1592]: time="2025-09-12T05:48:23.704865708Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 05:48:27.023633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:27.023799 systemd[1]: kubelet.service: Consumed 337ms CPU time, 108.7M memory peak. Sep 12 05:48:27.026188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:27.054603 systemd[1]: Reload requested from client PID 2283 ('systemctl') (unit session-7.scope)... Sep 12 05:48:27.054624 systemd[1]: Reloading... Sep 12 05:48:27.121551 zram_generator::config[2326]: No configuration found. Sep 12 05:48:27.430176 systemd[1]: Reloading finished in 375 ms. Sep 12 05:48:27.502759 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 05:48:27.502862 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 05:48:27.503178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:27.503222 systemd[1]: kubelet.service: Consumed 172ms CPU time, 98.3M memory peak. Sep 12 05:48:27.504819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:27.696577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:27.701781 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 05:48:27.748556 kubelet[2373]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:48:27.748556 kubelet[2373]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 05:48:27.748556 kubelet[2373]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:48:27.748556 kubelet[2373]: I0912 05:48:27.748288 2373 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 05:48:28.251104 kubelet[2373]: I0912 05:48:28.251064 2373 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 05:48:28.251104 kubelet[2373]: I0912 05:48:28.251091 2373 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 05:48:28.251333 kubelet[2373]: I0912 05:48:28.251318 2373 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 05:48:28.280916 kubelet[2373]: I0912 05:48:28.280867 2373 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 05:48:28.281170 kubelet[2373]: E0912 05:48:28.281118 2373 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 05:48:28.286952 kubelet[2373]: I0912 05:48:28.286912 2373 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 05:48:28.293109 kubelet[2373]: I0912 05:48:28.293055 2373 
server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 05:48:28.293368 kubelet[2373]: I0912 05:48:28.293328 2373 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 05:48:28.293540 kubelet[2373]: I0912 05:48:28.293359 2373 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 05:48:28.293540 kubelet[2373]: I0912 05:48:28.293531 2373 
topology_manager.go:138] "Creating topology manager with none policy" Sep 12 05:48:28.293540 kubelet[2373]: I0912 05:48:28.293541 2373 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 05:48:28.293733 kubelet[2373]: I0912 05:48:28.293716 2373 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:48:28.296023 kubelet[2373]: I0912 05:48:28.295989 2373 kubelet.go:480] "Attempting to sync node with API server" Sep 12 05:48:28.296023 kubelet[2373]: I0912 05:48:28.296012 2373 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 05:48:28.296093 kubelet[2373]: I0912 05:48:28.296043 2373 kubelet.go:386] "Adding apiserver pod source" Sep 12 05:48:28.296093 kubelet[2373]: I0912 05:48:28.296063 2373 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 05:48:28.302944 kubelet[2373]: I0912 05:48:28.302905 2373 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 05:48:28.303273 kubelet[2373]: E0912 05:48:28.303248 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 05:48:28.303500 kubelet[2373]: E0912 05:48:28.303482 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 05:48:28.303674 kubelet[2373]: I0912 05:48:28.303648 2373 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the 
ClusterTrustBundleProjection featuregate is disabled" Sep 12 05:48:28.304896 kubelet[2373]: W0912 05:48:28.304643 2373 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 05:48:28.308449 kubelet[2373]: I0912 05:48:28.308414 2373 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 05:48:28.308502 kubelet[2373]: I0912 05:48:28.308488 2373 server.go:1289] "Started kubelet" Sep 12 05:48:28.309300 kubelet[2373]: I0912 05:48:28.308990 2373 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 05:48:28.311963 kubelet[2373]: I0912 05:48:28.311365 2373 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 05:48:28.311963 kubelet[2373]: I0912 05:48:28.311445 2373 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 05:48:28.311963 kubelet[2373]: I0912 05:48:28.311580 2373 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 05:48:28.312866 kubelet[2373]: I0912 05:48:28.312838 2373 server.go:317] "Adding debug handlers to kubelet server" Sep 12 05:48:28.313755 kubelet[2373]: I0912 05:48:28.313733 2373 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 05:48:28.314120 kubelet[2373]: E0912 05:48:28.313927 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:28.314120 kubelet[2373]: I0912 05:48:28.314000 2373 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 05:48:28.314849 kubelet[2373]: E0912 05:48:28.314815 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection 
refused" interval="200ms" Sep 12 05:48:28.315121 kubelet[2373]: I0912 05:48:28.315095 2373 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 05:48:28.315462 kubelet[2373]: E0912 05:48:28.315435 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 05:48:28.315619 kubelet[2373]: E0912 05:48:28.315597 2373 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 05:48:28.316001 kubelet[2373]: I0912 05:48:28.315978 2373 reconciler.go:26] "Reconciler: start to sync state" Sep 12 05:48:28.316618 kubelet[2373]: I0912 05:48:28.316594 2373 factory.go:223] Registration of the systemd container factory successfully Sep 12 05:48:28.316717 kubelet[2373]: I0912 05:48:28.316695 2373 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 05:48:28.318004 kubelet[2373]: I0912 05:48:28.317982 2373 factory.go:223] Registration of the containerd container factory successfully Sep 12 05:48:28.320105 kubelet[2373]: E0912 05:48:28.314289 2373 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.17:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.17:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186472eb05052caf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 05:48:28.308442287 +0000 UTC m=+0.601794551,LastTimestamp:2025-09-12 05:48:28.308442287 +0000 UTC m=+0.601794551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 05:48:28.335032 kubelet[2373]: I0912 05:48:28.334975 2373 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 05:48:28.335032 kubelet[2373]: I0912 05:48:28.335017 2373 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 05:48:28.335032 kubelet[2373]: I0912 05:48:28.335036 2373 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:48:28.405759 kubelet[2373]: I0912 05:48:28.405700 2373 policy_none.go:49] "None policy: Start" Sep 12 05:48:28.405759 kubelet[2373]: I0912 05:48:28.405762 2373 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 05:48:28.405950 kubelet[2373]: I0912 05:48:28.405782 2373 state_mem.go:35] "Initializing new in-memory state store" Sep 12 05:48:28.409426 kubelet[2373]: I0912 05:48:28.409390 2373 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 05:48:28.410807 kubelet[2373]: I0912 05:48:28.410758 2373 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 05:48:28.410807 kubelet[2373]: I0912 05:48:28.410784 2373 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 05:48:28.410807 kubelet[2373]: I0912 05:48:28.410814 2373 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 05:48:28.411009 kubelet[2373]: I0912 05:48:28.410824 2373 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 05:48:28.411009 kubelet[2373]: E0912 05:48:28.410867 2373 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 05:48:28.412144 kubelet[2373]: E0912 05:48:28.412080 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 05:48:28.414045 kubelet[2373]: E0912 05:48:28.414021 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 05:48:28.417792 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 05:48:28.432388 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 05:48:28.436202 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 05:48:28.446724 kubelet[2373]: E0912 05:48:28.446684 2373 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 05:48:28.447304 kubelet[2373]: I0912 05:48:28.447272 2373 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 05:48:28.447354 kubelet[2373]: I0912 05:48:28.447310 2373 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 05:48:28.447728 kubelet[2373]: I0912 05:48:28.447702 2373 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 05:48:28.451419 kubelet[2373]: E0912 05:48:28.451380 2373 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 05:48:28.451511 kubelet[2373]: E0912 05:48:28.451444 2373 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 05:48:28.516467 kubelet[2373]: E0912 05:48:28.516324 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="400ms" Sep 12 05:48:28.517559 kubelet[2373]: I0912 05:48:28.516953 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:28.525416 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 12 05:48:28.547109 kubelet[2373]: E0912 05:48:28.547048 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:28.548783 kubelet[2373]: I0912 05:48:28.548756 2373 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:48:28.549228 kubelet[2373]: E0912 05:48:28.549203 2373 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost" Sep 12 05:48:28.550547 systemd[1]: Created slice kubepods-burstable-pod18bcd3cc5f3da8cb1b2fdbe664cc527e.slice - libcontainer container kubepods-burstable-pod18bcd3cc5f3da8cb1b2fdbe664cc527e.slice. Sep 12 05:48:28.562235 kubelet[2373]: E0912 05:48:28.562192 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:28.565444 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. 
Sep 12 05:48:28.567749 kubelet[2373]: E0912 05:48:28.567721 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:28.617921 kubelet[2373]: I0912 05:48:28.617856 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18bcd3cc5f3da8cb1b2fdbe664cc527e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"18bcd3cc5f3da8cb1b2fdbe664cc527e\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:28.618184 kubelet[2373]: I0912 05:48:28.617960 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18bcd3cc5f3da8cb1b2fdbe664cc527e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"18bcd3cc5f3da8cb1b2fdbe664cc527e\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:28.618184 kubelet[2373]: I0912 05:48:28.617986 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18bcd3cc5f3da8cb1b2fdbe664cc527e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"18bcd3cc5f3da8cb1b2fdbe664cc527e\") " pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:28.618184 kubelet[2373]: I0912 05:48:28.618010 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:28.618184 kubelet[2373]: I0912 05:48:28.618029 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:28.618184 kubelet[2373]: I0912 05:48:28.618047 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:28.618346 kubelet[2373]: I0912 05:48:28.618064 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:28.618346 kubelet[2373]: I0912 05:48:28.618100 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:28.751795 kubelet[2373]: I0912 05:48:28.751733 2373 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:48:28.752315 kubelet[2373]: E0912 05:48:28.752226 2373 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost" Sep 12 05:48:28.848342 kubelet[2373]: E0912 05:48:28.848196 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:28.849201 containerd[1592]: time="2025-09-12T05:48:28.849124229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 05:48:28.863409 kubelet[2373]: E0912 05:48:28.863373 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:28.864022 containerd[1592]: time="2025-09-12T05:48:28.863978355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:18bcd3cc5f3da8cb1b2fdbe664cc527e,Namespace:kube-system,Attempt:0,}" Sep 12 05:48:28.868256 kubelet[2373]: E0912 05:48:28.868229 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:28.868732 containerd[1592]: time="2025-09-12T05:48:28.868681526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 05:48:28.917598 kubelet[2373]: E0912 05:48:28.917553 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="800ms" Sep 12 05:48:29.124328 kubelet[2373]: E0912 05:48:29.124191 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 
05:48:29.154314 kubelet[2373]: I0912 05:48:29.154265 2373 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:48:29.154715 kubelet[2373]: E0912 05:48:29.154676 2373 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost" Sep 12 05:48:29.241785 containerd[1592]: time="2025-09-12T05:48:29.241704787Z" level=info msg="connecting to shim 6eb06398e9f8e611d862457cb664e0d802c5215a8b80424cc9da19e45111024d" address="unix:///run/containerd/s/9aa7a85998a94361e3d9a06dbbd21b70ad7ac3538c5e908e38d42331cf121c67" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:48:29.251126 containerd[1592]: time="2025-09-12T05:48:29.250821655Z" level=info msg="connecting to shim ae9a9b553935d9c535a07159ba56ed276bac9eb5e2f80f38d3712fcc435defad" address="unix:///run/containerd/s/c5ab003bf4be8865925ed470f0349297d797accd1f30843348f4d4b2284f7a28" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:48:29.261088 kubelet[2373]: E0912 05:48:29.261027 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 05:48:29.264255 containerd[1592]: time="2025-09-12T05:48:29.264207107Z" level=info msg="connecting to shim 74ffb31a60f890359406ea5965bec2cce18c5e4fd2235a9118b2615fc9d1eee2" address="unix:///run/containerd/s/2779626f865faabf8557259b7c40a70e012a34fdcd45c1162b8c148f27658338" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:48:29.292662 systemd[1]: Started cri-containerd-6eb06398e9f8e611d862457cb664e0d802c5215a8b80424cc9da19e45111024d.scope - libcontainer container 6eb06398e9f8e611d862457cb664e0d802c5215a8b80424cc9da19e45111024d. 
Sep 12 05:48:29.297120 systemd[1]: Started cri-containerd-ae9a9b553935d9c535a07159ba56ed276bac9eb5e2f80f38d3712fcc435defad.scope - libcontainer container ae9a9b553935d9c535a07159ba56ed276bac9eb5e2f80f38d3712fcc435defad. Sep 12 05:48:29.314675 systemd[1]: Started cri-containerd-74ffb31a60f890359406ea5965bec2cce18c5e4fd2235a9118b2615fc9d1eee2.scope - libcontainer container 74ffb31a60f890359406ea5965bec2cce18c5e4fd2235a9118b2615fc9d1eee2. Sep 12 05:48:29.315365 kubelet[2373]: E0912 05:48:29.315321 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 05:48:29.390402 containerd[1592]: time="2025-09-12T05:48:29.389160900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae9a9b553935d9c535a07159ba56ed276bac9eb5e2f80f38d3712fcc435defad\"" Sep 12 05:48:29.390781 kubelet[2373]: E0912 05:48:29.390741 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:29.449156 containerd[1592]: time="2025-09-12T05:48:29.449068639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"74ffb31a60f890359406ea5965bec2cce18c5e4fd2235a9118b2615fc9d1eee2\"" Sep 12 05:48:29.449834 kubelet[2373]: E0912 05:48:29.449775 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:29.457558 
containerd[1592]: time="2025-09-12T05:48:29.457461318Z" level=info msg="CreateContainer within sandbox \"ae9a9b553935d9c535a07159ba56ed276bac9eb5e2f80f38d3712fcc435defad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 05:48:29.536784 containerd[1592]: time="2025-09-12T05:48:29.536724725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:18bcd3cc5f3da8cb1b2fdbe664cc527e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eb06398e9f8e611d862457cb664e0d802c5215a8b80424cc9da19e45111024d\"" Sep 12 05:48:29.537682 kubelet[2373]: E0912 05:48:29.537650 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:29.600255 containerd[1592]: time="2025-09-12T05:48:29.600196177Z" level=info msg="CreateContainer within sandbox \"74ffb31a60f890359406ea5965bec2cce18c5e4fd2235a9118b2615fc9d1eee2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 05:48:29.658317 containerd[1592]: time="2025-09-12T05:48:29.657945738Z" level=info msg="CreateContainer within sandbox \"6eb06398e9f8e611d862457cb664e0d802c5215a8b80424cc9da19e45111024d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 05:48:29.711199 containerd[1592]: time="2025-09-12T05:48:29.711127112Z" level=info msg="Container 6851e39db2c2b626e1191e0a6a3059b9d7ec6e2e9f1704b8af41830674a8d7c8: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:48:29.716801 containerd[1592]: time="2025-09-12T05:48:29.716737052Z" level=info msg="Container e0c362553c7a61121d14c76a5b7a6445f9b949e69bd6bf43f856bb4e4f978e80: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:48:29.719112 kubelet[2373]: E0912 05:48:29.719048 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="1.6s" Sep 12 05:48:29.725351 containerd[1592]: time="2025-09-12T05:48:29.725291675Z" level=info msg="CreateContainer within sandbox \"ae9a9b553935d9c535a07159ba56ed276bac9eb5e2f80f38d3712fcc435defad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6851e39db2c2b626e1191e0a6a3059b9d7ec6e2e9f1704b8af41830674a8d7c8\"" Sep 12 05:48:29.726257 containerd[1592]: time="2025-09-12T05:48:29.726209396Z" level=info msg="StartContainer for \"6851e39db2c2b626e1191e0a6a3059b9d7ec6e2e9f1704b8af41830674a8d7c8\"" Sep 12 05:48:29.727948 containerd[1592]: time="2025-09-12T05:48:29.727908232Z" level=info msg="connecting to shim 6851e39db2c2b626e1191e0a6a3059b9d7ec6e2e9f1704b8af41830674a8d7c8" address="unix:///run/containerd/s/c5ab003bf4be8865925ed470f0349297d797accd1f30843348f4d4b2284f7a28" protocol=ttrpc version=3 Sep 12 05:48:29.729159 containerd[1592]: time="2025-09-12T05:48:29.729128381Z" level=info msg="Container 237597a7fd654c143cc6a719282410006f40e6705324f1569db4d6528cf9287d: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:48:29.741199 containerd[1592]: time="2025-09-12T05:48:29.741128315Z" level=info msg="CreateContainer within sandbox \"74ffb31a60f890359406ea5965bec2cce18c5e4fd2235a9118b2615fc9d1eee2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e0c362553c7a61121d14c76a5b7a6445f9b949e69bd6bf43f856bb4e4f978e80\"" Sep 12 05:48:29.741780 containerd[1592]: time="2025-09-12T05:48:29.741740673Z" level=info msg="StartContainer for \"e0c362553c7a61121d14c76a5b7a6445f9b949e69bd6bf43f856bb4e4f978e80\"" Sep 12 05:48:29.742568 containerd[1592]: time="2025-09-12T05:48:29.742537698Z" level=info msg="CreateContainer within sandbox \"6eb06398e9f8e611d862457cb664e0d802c5215a8b80424cc9da19e45111024d\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"237597a7fd654c143cc6a719282410006f40e6705324f1569db4d6528cf9287d\"" Sep 12 05:48:29.743330 containerd[1592]: time="2025-09-12T05:48:29.742982081Z" level=info msg="StartContainer for \"237597a7fd654c143cc6a719282410006f40e6705324f1569db4d6528cf9287d\"" Sep 12 05:48:29.743330 containerd[1592]: time="2025-09-12T05:48:29.743079925Z" level=info msg="connecting to shim e0c362553c7a61121d14c76a5b7a6445f9b949e69bd6bf43f856bb4e4f978e80" address="unix:///run/containerd/s/2779626f865faabf8557259b7c40a70e012a34fdcd45c1162b8c148f27658338" protocol=ttrpc version=3 Sep 12 05:48:29.744340 containerd[1592]: time="2025-09-12T05:48:29.744300113Z" level=info msg="connecting to shim 237597a7fd654c143cc6a719282410006f40e6705324f1569db4d6528cf9287d" address="unix:///run/containerd/s/9aa7a85998a94361e3d9a06dbbd21b70ad7ac3538c5e908e38d42331cf121c67" protocol=ttrpc version=3 Sep 12 05:48:29.755959 systemd[1]: Started cri-containerd-6851e39db2c2b626e1191e0a6a3059b9d7ec6e2e9f1704b8af41830674a8d7c8.scope - libcontainer container 6851e39db2c2b626e1191e0a6a3059b9d7ec6e2e9f1704b8af41830674a8d7c8. Sep 12 05:48:29.767705 systemd[1]: Started cri-containerd-e0c362553c7a61121d14c76a5b7a6445f9b949e69bd6bf43f856bb4e4f978e80.scope - libcontainer container e0c362553c7a61121d14c76a5b7a6445f9b949e69bd6bf43f856bb4e4f978e80. Sep 12 05:48:29.771023 systemd[1]: Started cri-containerd-237597a7fd654c143cc6a719282410006f40e6705324f1569db4d6528cf9287d.scope - libcontainer container 237597a7fd654c143cc6a719282410006f40e6705324f1569db4d6528cf9287d. 
Sep 12 05:48:29.789579 kubelet[2373]: E0912 05:48:29.789503 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 05:48:29.879563 containerd[1592]: time="2025-09-12T05:48:29.876575196Z" level=info msg="StartContainer for \"237597a7fd654c143cc6a719282410006f40e6705324f1569db4d6528cf9287d\" returns successfully" Sep 12 05:48:29.880119 containerd[1592]: time="2025-09-12T05:48:29.879690919Z" level=info msg="StartContainer for \"6851e39db2c2b626e1191e0a6a3059b9d7ec6e2e9f1704b8af41830674a8d7c8\" returns successfully" Sep 12 05:48:29.899021 containerd[1592]: time="2025-09-12T05:48:29.898971948Z" level=info msg="StartContainer for \"e0c362553c7a61121d14c76a5b7a6445f9b949e69bd6bf43f856bb4e4f978e80\" returns successfully" Sep 12 05:48:29.957905 kubelet[2373]: I0912 05:48:29.957765 2373 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 05:48:30.425011 kubelet[2373]: E0912 05:48:30.424965 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:30.425189 kubelet[2373]: E0912 05:48:30.425131 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:30.436847 kubelet[2373]: E0912 05:48:30.436530 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:30.436847 kubelet[2373]: E0912 05:48:30.436711 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:30.437474 kubelet[2373]: E0912 05:48:30.437455 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:30.437883 kubelet[2373]: E0912 05:48:30.437866 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:31.303181 kubelet[2373]: I0912 05:48:31.302428 2373 apiserver.go:52] "Watching apiserver" Sep 12 05:48:31.315307 kubelet[2373]: I0912 05:48:31.315287 2373 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 05:48:31.334260 kubelet[2373]: E0912 05:48:31.334202 2373 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 05:48:31.438707 kubelet[2373]: E0912 05:48:31.438663 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:31.438862 kubelet[2373]: E0912 05:48:31.438788 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:31.438862 kubelet[2373]: E0912 05:48:31.438798 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 05:48:31.438977 kubelet[2373]: E0912 05:48:31.438943 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:31.552061 kubelet[2373]: I0912 05:48:31.552003 2373 kubelet_node_status.go:78] "Successfully registered node" 
node="localhost" Sep 12 05:48:31.552061 kubelet[2373]: E0912 05:48:31.552051 2373 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 05:48:31.559485 kubelet[2373]: E0912 05:48:31.559325 2373 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186472eb05052caf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 05:48:28.308442287 +0000 UTC m=+0.601794551,LastTimestamp:2025-09-12 05:48:28.308442287 +0000 UTC m=+0.601794551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 05:48:31.615372 kubelet[2373]: I0912 05:48:31.615297 2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:31.796193 kubelet[2373]: E0912 05:48:31.796068 2373 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186472eb05722db6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 05:48:28.315585974 +0000 UTC m=+0.608938238,LastTimestamp:2025-09-12 05:48:28.315585974 +0000 UTC m=+0.608938238,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 
05:48:31.798554 kubelet[2373]: E0912 05:48:31.796767 2373 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:31.798554 kubelet[2373]: I0912 05:48:31.796804 2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:31.801241 kubelet[2373]: E0912 05:48:31.801211 2373 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:31.801241 kubelet[2373]: I0912 05:48:31.801238 2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:31.804044 kubelet[2373]: E0912 05:48:31.803999 2373 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 05:48:32.439056 kubelet[2373]: I0912 05:48:32.439021 2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 05:48:32.439694 kubelet[2373]: I0912 05:48:32.439079 2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 05:48:32.586750 kubelet[2373]: E0912 05:48:32.586700 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:32.586955 kubelet[2373]: E0912 05:48:32.586756 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:33.441947 kubelet[2373]: E0912 
05:48:33.441899 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:33.442405 kubelet[2373]: E0912 05:48:33.442056 2373 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:33.652048 systemd[1]: Reload requested from client PID 2661 ('systemctl') (unit session-7.scope)... Sep 12 05:48:33.652065 systemd[1]: Reloading... Sep 12 05:48:33.787575 zram_generator::config[2707]: No configuration found. Sep 12 05:48:34.080911 systemd[1]: Reloading finished in 428 ms. Sep 12 05:48:34.114284 kubelet[2373]: I0912 05:48:34.114162 2373 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 05:48:34.114316 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:34.138153 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 05:48:34.138510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:34.138615 systemd[1]: kubelet.service: Consumed 1.235s CPU time, 130.8M memory peak. Sep 12 05:48:34.140760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 05:48:34.390756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 05:48:34.404090 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 05:48:34.455376 kubelet[2749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 05:48:34.455376 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 05:48:34.455376 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 05:48:34.455828 kubelet[2749]: I0912 05:48:34.455418 2749 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 05:48:34.462820 kubelet[2749]: I0912 05:48:34.462768 2749 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 05:48:34.462820 kubelet[2749]: I0912 05:48:34.462804 2749 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 05:48:34.463047 kubelet[2749]: I0912 05:48:34.463023 2749 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 05:48:34.464199 kubelet[2749]: I0912 05:48:34.464175 2749 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 05:48:34.468464 kubelet[2749]: I0912 05:48:34.468427 2749 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 05:48:34.472538 kubelet[2749]: I0912 05:48:34.472454 2749 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 05:48:34.478820 kubelet[2749]: I0912 05:48:34.478778 2749 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 05:48:34.479080 kubelet[2749]: I0912 05:48:34.479035 2749 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 05:48:34.479237 kubelet[2749]: I0912 05:48:34.479066 2749 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 05:48:34.479312 kubelet[2749]: I0912 05:48:34.479243 2749 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 05:48:34.479312 
kubelet[2749]: I0912 05:48:34.479253 2749 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 05:48:34.479312 kubelet[2749]: I0912 05:48:34.479299 2749 state_mem.go:36] "Initialized new in-memory state store" Sep 12 05:48:34.479497 kubelet[2749]: I0912 05:48:34.479474 2749 kubelet.go:480] "Attempting to sync node with API server" Sep 12 05:48:34.479497 kubelet[2749]: I0912 05:48:34.479494 2749 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 05:48:34.479572 kubelet[2749]: I0912 05:48:34.479538 2749 kubelet.go:386] "Adding apiserver pod source" Sep 12 05:48:34.479572 kubelet[2749]: I0912 05:48:34.479558 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 05:48:34.482104 kubelet[2749]: I0912 05:48:34.480673 2749 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 05:48:34.482104 kubelet[2749]: I0912 05:48:34.481147 2749 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 05:48:34.488787 kubelet[2749]: I0912 05:48:34.488754 2749 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 05:48:34.488937 kubelet[2749]: I0912 05:48:34.488802 2749 server.go:1289] "Started kubelet" Sep 12 05:48:34.488937 kubelet[2749]: I0912 05:48:34.488887 2749 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 05:48:34.490548 kubelet[2749]: I0912 05:48:34.489772 2749 server.go:317] "Adding debug handlers to kubelet server" Sep 12 05:48:34.490548 kubelet[2749]: I0912 05:48:34.490403 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 05:48:34.490548 kubelet[2749]: I0912 05:48:34.490470 2749 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 05:48:34.490795 kubelet[2749]: I0912 05:48:34.490733 2749 server.go:255] "Starting 
to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 05:48:34.491078 kubelet[2749]: I0912 05:48:34.491024 2749 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 05:48:34.491652 kubelet[2749]: I0912 05:48:34.491622 2749 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 05:48:34.491718 kubelet[2749]: I0912 05:48:34.491695 2749 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 05:48:34.491811 kubelet[2749]: I0912 05:48:34.491786 2749 reconciler.go:26] "Reconciler: start to sync state" Sep 12 05:48:34.494259 kubelet[2749]: I0912 05:48:34.494217 2749 factory.go:223] Registration of the systemd container factory successfully Sep 12 05:48:34.494649 kubelet[2749]: I0912 05:48:34.494361 2749 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 05:48:34.495710 kubelet[2749]: E0912 05:48:34.495682 2749 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 05:48:34.497711 kubelet[2749]: I0912 05:48:34.497685 2749 factory.go:223] Registration of the containerd container factory successfully Sep 12 05:48:34.510988 kubelet[2749]: I0912 05:48:34.510913 2749 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 05:48:34.512746 kubelet[2749]: I0912 05:48:34.512663 2749 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6"
Sep 12 05:48:34.512746 kubelet[2749]: I0912 05:48:34.512696 2749 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 05:48:34.512746 kubelet[2749]: I0912 05:48:34.512742 2749 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 05:48:34.512885 kubelet[2749]: I0912 05:48:34.512753 2749 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 05:48:34.513037 kubelet[2749]: E0912 05:48:34.512995 2749 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 05:48:34.534203 kubelet[2749]: I0912 05:48:34.534157 2749 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 05:48:34.534203 kubelet[2749]: I0912 05:48:34.534177 2749 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 05:48:34.534203 kubelet[2749]: I0912 05:48:34.534198 2749 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 05:48:34.534450 kubelet[2749]: I0912 05:48:34.534340 2749 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 05:48:34.534450 kubelet[2749]: I0912 05:48:34.534365 2749 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 05:48:34.534450 kubelet[2749]: I0912 05:48:34.534384 2749 policy_none.go:49] "None policy: Start"
Sep 12 05:48:34.534450 kubelet[2749]: I0912 05:48:34.534393 2749 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 05:48:34.534450 kubelet[2749]: I0912 05:48:34.534404 2749 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 05:48:34.534687 kubelet[2749]: I0912 05:48:34.534531 2749 state_mem.go:75] "Updated machine memory state"
Sep 12 05:48:34.546219 kubelet[2749]: E0912 05:48:34.546105 2749 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 05:48:34.546380 kubelet[2749]: I0912 05:48:34.546352 2749 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 05:48:34.546436 kubelet[2749]: I0912 05:48:34.546385 2749 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 05:48:34.546772 kubelet[2749]: I0912 05:48:34.546659 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 05:48:34.548902 kubelet[2749]: E0912 05:48:34.548855 2749 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 05:48:34.614345 kubelet[2749]: I0912 05:48:34.614303 2749 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 05:48:34.614345 kubelet[2749]: I0912 05:48:34.614331 2749 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:48:34.614345 kubelet[2749]: I0912 05:48:34.614364 2749 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 05:48:34.656686 kubelet[2749]: I0912 05:48:34.656547 2749 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 05:48:34.678853 kubelet[2749]: E0912 05:48:34.678794 2749 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 12 05:48:34.679282 kubelet[2749]: E0912 05:48:34.679010 2749 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 12 05:48:34.680964 kubelet[2749]: I0912 05:48:34.680906 2749 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 12 05:48:34.681267 kubelet[2749]: I0912 05:48:34.681217 2749 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 05:48:34.692673 kubelet[2749]: I0912 05:48:34.692625 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:48:34.692673 kubelet[2749]: I0912 05:48:34.692663 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:48:34.692673 kubelet[2749]: I0912 05:48:34.692686 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:48:34.692906 kubelet[2749]: I0912 05:48:34.692707 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:48:34.692906 kubelet[2749]: I0912 05:48:34.692766 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18bcd3cc5f3da8cb1b2fdbe664cc527e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"18bcd3cc5f3da8cb1b2fdbe664cc527e\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 05:48:34.692906 kubelet[2749]: I0912 05:48:34.692790 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18bcd3cc5f3da8cb1b2fdbe664cc527e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"18bcd3cc5f3da8cb1b2fdbe664cc527e\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 05:48:34.692906 kubelet[2749]: I0912 05:48:34.692807 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 05:48:34.692906 kubelet[2749]: I0912 05:48:34.692821 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 05:48:34.693072 kubelet[2749]: I0912 05:48:34.692847 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18bcd3cc5f3da8cb1b2fdbe664cc527e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"18bcd3cc5f3da8cb1b2fdbe664cc527e\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 05:48:34.978604 kubelet[2749]: E0912 05:48:34.978512 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:34.979453 kubelet[2749]: E0912 05:48:34.979196 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:34.979453 kubelet[2749]: E0912 05:48:34.979370 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:35.480170 kubelet[2749]: I0912 05:48:35.480116 2749 apiserver.go:52] "Watching apiserver"
Sep 12 05:48:35.492065 kubelet[2749]: I0912 05:48:35.491998 2749 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 05:48:35.528554 kubelet[2749]: E0912 05:48:35.525802 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:35.528554 kubelet[2749]: I0912 05:48:35.526468 2749 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 05:48:35.528554 kubelet[2749]: E0912 05:48:35.526832 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:35.607009 kubelet[2749]: E0912 05:48:35.606909 2749 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 12 05:48:35.607248 kubelet[2749]: E0912 05:48:35.607165 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:35.659817 kubelet[2749]: I0912 05:48:35.659735 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.659707563 podStartE2EDuration="3.659707563s" podCreationTimestamp="2025-09-12 05:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:48:35.606984389 +0000 UTC m=+1.198074819" watchObservedRunningTime="2025-09-12 05:48:35.659707563 +0000 UTC m=+1.250797993"
Sep 12 05:48:35.769073 kubelet[2749]: I0912 05:48:35.768871 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7688499439999998 podStartE2EDuration="1.768849944s" podCreationTimestamp="2025-09-12 05:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:48:35.660058121 +0000 UTC m=+1.251148551" watchObservedRunningTime="2025-09-12 05:48:35.768849944 +0000 UTC m=+1.359940374"
Sep 12 05:48:35.777350 kubelet[2749]: I0912 05:48:35.777014 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.776991463 podStartE2EDuration="3.776991463s" podCreationTimestamp="2025-09-12 05:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:48:35.769029521 +0000 UTC m=+1.360119951" watchObservedRunningTime="2025-09-12 05:48:35.776991463 +0000 UTC m=+1.368081893"
Sep 12 05:48:36.527484 kubelet[2749]: E0912 05:48:36.527413 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:36.527484 kubelet[2749]: E0912 05:48:36.527417 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:37.529125 kubelet[2749]: E0912 05:48:37.529089 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:40.002387 kubelet[2749]: E0912 05:48:40.002336 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:40.038738 kubelet[2749]: I0912 05:48:40.038678 2749 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 05:48:40.039028 containerd[1592]: time="2025-09-12T05:48:40.038985315Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 05:48:40.039469 kubelet[2749]: I0912 05:48:40.039143 2749 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 05:48:40.533727 kubelet[2749]: E0912 05:48:40.533676 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:41.048987 systemd[1]: Created slice kubepods-besteffort-pod5038e715_8432_446e_a0a7_bfa9293ec195.slice - libcontainer container kubepods-besteffort-pod5038e715_8432_446e_a0a7_bfa9293ec195.slice.
Sep 12 05:48:41.136011 kubelet[2749]: I0912 05:48:41.135960 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5038e715-8432-446e-a0a7-bfa9293ec195-kube-proxy\") pod \"kube-proxy-vslgf\" (UID: \"5038e715-8432-446e-a0a7-bfa9293ec195\") " pod="kube-system/kube-proxy-vslgf"
Sep 12 05:48:41.136011 kubelet[2749]: I0912 05:48:41.136007 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5038e715-8432-446e-a0a7-bfa9293ec195-xtables-lock\") pod \"kube-proxy-vslgf\" (UID: \"5038e715-8432-446e-a0a7-bfa9293ec195\") " pod="kube-system/kube-proxy-vslgf"
Sep 12 05:48:41.136011 kubelet[2749]: I0912 05:48:41.136028 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5038e715-8432-446e-a0a7-bfa9293ec195-lib-modules\") pod \"kube-proxy-vslgf\" (UID: \"5038e715-8432-446e-a0a7-bfa9293ec195\") " pod="kube-system/kube-proxy-vslgf"
Sep 12 05:48:41.136679 kubelet[2749]: I0912 05:48:41.136048 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hxx6\" (UniqueName: \"kubernetes.io/projected/5038e715-8432-446e-a0a7-bfa9293ec195-kube-api-access-4hxx6\") pod \"kube-proxy-vslgf\" (UID: \"5038e715-8432-446e-a0a7-bfa9293ec195\") " pod="kube-system/kube-proxy-vslgf"
Sep 12 05:48:41.364158 kubelet[2749]: E0912 05:48:41.363969 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:41.365268 containerd[1592]: time="2025-09-12T05:48:41.365214961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vslgf,Uid:5038e715-8432-446e-a0a7-bfa9293ec195,Namespace:kube-system,Attempt:0,}"
Sep 12 05:48:41.638100 containerd[1592]: time="2025-09-12T05:48:41.637965287Z" level=info msg="connecting to shim c1df298fbd8620d33d444f7cd3e6328b054119ea1c76f67fd21983eb430a2f4c" address="unix:///run/containerd/s/5ac80b31c67ad09fa8836c3e7bf3415197ccd5102926c63432faf79b6357f174" namespace=k8s.io protocol=ttrpc version=3
Sep 12 05:48:41.638996 kubelet[2749]: I0912 05:48:41.638917 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc496\" (UniqueName: \"kubernetes.io/projected/9a3385b0-95fa-47f6-a7af-b79629c1b5d8-kube-api-access-tc496\") pod \"tigera-operator-755d956888-6jw68\" (UID: \"9a3385b0-95fa-47f6-a7af-b79629c1b5d8\") " pod="tigera-operator/tigera-operator-755d956888-6jw68"
Sep 12 05:48:41.638996 kubelet[2749]: I0912 05:48:41.638958 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9a3385b0-95fa-47f6-a7af-b79629c1b5d8-var-lib-calico\") pod \"tigera-operator-755d956888-6jw68\" (UID: \"9a3385b0-95fa-47f6-a7af-b79629c1b5d8\") " pod="tigera-operator/tigera-operator-755d956888-6jw68"
Sep 12 05:48:41.640676 systemd[1]: Created slice kubepods-besteffort-pod9a3385b0_95fa_47f6_a7af_b79629c1b5d8.slice - libcontainer container kubepods-besteffort-pod9a3385b0_95fa_47f6_a7af_b79629c1b5d8.slice.
Sep 12 05:48:41.673765 systemd[1]: Started cri-containerd-c1df298fbd8620d33d444f7cd3e6328b054119ea1c76f67fd21983eb430a2f4c.scope - libcontainer container c1df298fbd8620d33d444f7cd3e6328b054119ea1c76f67fd21983eb430a2f4c.
Sep 12 05:48:41.706433 containerd[1592]: time="2025-09-12T05:48:41.706387457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vslgf,Uid:5038e715-8432-446e-a0a7-bfa9293ec195,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1df298fbd8620d33d444f7cd3e6328b054119ea1c76f67fd21983eb430a2f4c\""
Sep 12 05:48:41.707198 kubelet[2749]: E0912 05:48:41.707155 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:41.713634 containerd[1592]: time="2025-09-12T05:48:41.713583151Z" level=info msg="CreateContainer within sandbox \"c1df298fbd8620d33d444f7cd3e6328b054119ea1c76f67fd21983eb430a2f4c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 05:48:41.726484 containerd[1592]: time="2025-09-12T05:48:41.726424749Z" level=info msg="Container 1a3b2d9124e407de5b9799d993817c7878242fd27c7d9b91e52db5299319135e: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:48:41.733165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757266998.mount: Deactivated successfully.
Sep 12 05:48:41.740745 containerd[1592]: time="2025-09-12T05:48:41.740694042Z" level=info msg="CreateContainer within sandbox \"c1df298fbd8620d33d444f7cd3e6328b054119ea1c76f67fd21983eb430a2f4c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a3b2d9124e407de5b9799d993817c7878242fd27c7d9b91e52db5299319135e\""
Sep 12 05:48:41.741348 containerd[1592]: time="2025-09-12T05:48:41.741312479Z" level=info msg="StartContainer for \"1a3b2d9124e407de5b9799d993817c7878242fd27c7d9b91e52db5299319135e\""
Sep 12 05:48:41.743150 containerd[1592]: time="2025-09-12T05:48:41.743119463Z" level=info msg="connecting to shim 1a3b2d9124e407de5b9799d993817c7878242fd27c7d9b91e52db5299319135e" address="unix:///run/containerd/s/5ac80b31c67ad09fa8836c3e7bf3415197ccd5102926c63432faf79b6357f174" protocol=ttrpc version=3
Sep 12 05:48:41.780863 systemd[1]: Started cri-containerd-1a3b2d9124e407de5b9799d993817c7878242fd27c7d9b91e52db5299319135e.scope - libcontainer container 1a3b2d9124e407de5b9799d993817c7878242fd27c7d9b91e52db5299319135e.
Sep 12 05:48:41.857203 containerd[1592]: time="2025-09-12T05:48:41.857139188Z" level=info msg="StartContainer for \"1a3b2d9124e407de5b9799d993817c7878242fd27c7d9b91e52db5299319135e\" returns successfully"
Sep 12 05:48:41.944899 containerd[1592]: time="2025-09-12T05:48:41.944696926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6jw68,Uid:9a3385b0-95fa-47f6-a7af-b79629c1b5d8,Namespace:tigera-operator,Attempt:0,}"
Sep 12 05:48:42.015481 containerd[1592]: time="2025-09-12T05:48:42.015416920Z" level=info msg="connecting to shim 0e0a5c1820b7c64001ad19369c65a463191245b8c18f6d89e354d8eae8e3a871" address="unix:///run/containerd/s/6d1ae13c01de8ebe61aa0e2af7d6a47e2f61ca3ca4170cc8b754b8b57f53aad3" namespace=k8s.io protocol=ttrpc version=3
Sep 12 05:48:42.048741 systemd[1]: Started cri-containerd-0e0a5c1820b7c64001ad19369c65a463191245b8c18f6d89e354d8eae8e3a871.scope - libcontainer container 0e0a5c1820b7c64001ad19369c65a463191245b8c18f6d89e354d8eae8e3a871.
Sep 12 05:48:42.203958 containerd[1592]: time="2025-09-12T05:48:42.203844185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6jw68,Uid:9a3385b0-95fa-47f6-a7af-b79629c1b5d8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0e0a5c1820b7c64001ad19369c65a463191245b8c18f6d89e354d8eae8e3a871\""
Sep 12 05:48:42.205660 containerd[1592]: time="2025-09-12T05:48:42.205632783Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 05:48:42.540131 kubelet[2749]: E0912 05:48:42.540091 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:43.094995 kubelet[2749]: E0912 05:48:43.094958 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:43.242281 kubelet[2749]: I0912 05:48:43.242152 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vslgf" podStartSLOduration=2.242130459 podStartE2EDuration="2.242130459s" podCreationTimestamp="2025-09-12 05:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:48:42.551048125 +0000 UTC m=+8.142138555" watchObservedRunningTime="2025-09-12 05:48:43.242130459 +0000 UTC m=+8.833220899"
Sep 12 05:48:43.541824 kubelet[2749]: E0912 05:48:43.541776 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:44.084665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241102773.mount: Deactivated successfully.
Sep 12 05:48:46.220120 kubelet[2749]: E0912 05:48:46.220075 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:46.490944 containerd[1592]: time="2025-09-12T05:48:46.490818386Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:48:46.491476 containerd[1592]: time="2025-09-12T05:48:46.491440159Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 05:48:46.492592 containerd[1592]: time="2025-09-12T05:48:46.492565337Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:48:46.494495 containerd[1592]: time="2025-09-12T05:48:46.494452232Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:48:46.494980 containerd[1592]: time="2025-09-12T05:48:46.494951931Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.289289971s"
Sep 12 05:48:46.495021 containerd[1592]: time="2025-09-12T05:48:46.494978846Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 05:48:46.499812 containerd[1592]: time="2025-09-12T05:48:46.499778317Z" level=info msg="CreateContainer within sandbox \"0e0a5c1820b7c64001ad19369c65a463191245b8c18f6d89e354d8eae8e3a871\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 05:48:46.508401 containerd[1592]: time="2025-09-12T05:48:46.508362133Z" level=info msg="Container c8c350b25b8e3bc7bdbeebadba2ceae14758445c932952ff949ce822f92f3c79: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:48:46.511947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564828320.mount: Deactivated successfully.
Sep 12 05:48:46.514905 containerd[1592]: time="2025-09-12T05:48:46.514863892Z" level=info msg="CreateContainer within sandbox \"0e0a5c1820b7c64001ad19369c65a463191245b8c18f6d89e354d8eae8e3a871\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c8c350b25b8e3bc7bdbeebadba2ceae14758445c932952ff949ce822f92f3c79\""
Sep 12 05:48:46.515207 containerd[1592]: time="2025-09-12T05:48:46.515177984Z" level=info msg="StartContainer for \"c8c350b25b8e3bc7bdbeebadba2ceae14758445c932952ff949ce822f92f3c79\""
Sep 12 05:48:46.515984 containerd[1592]: time="2025-09-12T05:48:46.515960294Z" level=info msg="connecting to shim c8c350b25b8e3bc7bdbeebadba2ceae14758445c932952ff949ce822f92f3c79" address="unix:///run/containerd/s/6d1ae13c01de8ebe61aa0e2af7d6a47e2f61ca3ca4170cc8b754b8b57f53aad3" protocol=ttrpc version=3
Sep 12 05:48:46.547860 kubelet[2749]: E0912 05:48:46.547818 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:46.573651 systemd[1]: Started cri-containerd-c8c350b25b8e3bc7bdbeebadba2ceae14758445c932952ff949ce822f92f3c79.scope - libcontainer container c8c350b25b8e3bc7bdbeebadba2ceae14758445c932952ff949ce822f92f3c79.
Sep 12 05:48:46.605234 containerd[1592]: time="2025-09-12T05:48:46.605189211Z" level=info msg="StartContainer for \"c8c350b25b8e3bc7bdbeebadba2ceae14758445c932952ff949ce822f92f3c79\" returns successfully"
Sep 12 05:48:47.560918 kubelet[2749]: I0912 05:48:47.560825 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-6jw68" podStartSLOduration=2.270277865 podStartE2EDuration="6.560806739s" podCreationTimestamp="2025-09-12 05:48:41 +0000 UTC" firstStartedPulling="2025-09-12 05:48:42.205105871 +0000 UTC m=+7.796196301" lastFinishedPulling="2025-09-12 05:48:46.495634745 +0000 UTC m=+12.086725175" observedRunningTime="2025-09-12 05:48:47.560637155 +0000 UTC m=+13.151727595" watchObservedRunningTime="2025-09-12 05:48:47.560806739 +0000 UTC m=+13.151897169"
Sep 12 05:48:49.073750 update_engine[1581]: I20250912 05:48:49.073649 1581 update_attempter.cc:509] Updating boot flags...
Sep 12 05:48:52.147172 sudo[1809]: pam_unix(sudo:session): session closed for user root
Sep 12 05:48:52.150062 sshd[1808]: Connection closed by 10.0.0.1 port 57580
Sep 12 05:48:52.151498 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Sep 12 05:48:52.159757 systemd[1]: sshd@6-10.0.0.17:22-10.0.0.1:57580.service: Deactivated successfully.
Sep 12 05:48:52.163933 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 05:48:52.165799 systemd[1]: session-7.scope: Consumed 5.915s CPU time, 229.9M memory peak.
Sep 12 05:48:52.167681 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit.
Sep 12 05:48:52.170863 systemd-logind[1577]: Removed session 7.
Sep 12 05:48:54.816574 systemd[1]: Created slice kubepods-besteffort-pode0cb7d0a_a7a8_477c_b1dc_9619e6b21be9.slice - libcontainer container kubepods-besteffort-pode0cb7d0a_a7a8_477c_b1dc_9619e6b21be9.slice.
Sep 12 05:48:54.828224 kubelet[2749]: I0912 05:48:54.828135 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smwx5\" (UniqueName: \"kubernetes.io/projected/e0cb7d0a-a7a8-477c-b1dc-9619e6b21be9-kube-api-access-smwx5\") pod \"calico-typha-6c7cbff7c4-s2hzx\" (UID: \"e0cb7d0a-a7a8-477c-b1dc-9619e6b21be9\") " pod="calico-system/calico-typha-6c7cbff7c4-s2hzx"
Sep 12 05:48:54.828224 kubelet[2749]: I0912 05:48:54.828191 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0cb7d0a-a7a8-477c-b1dc-9619e6b21be9-tigera-ca-bundle\") pod \"calico-typha-6c7cbff7c4-s2hzx\" (UID: \"e0cb7d0a-a7a8-477c-b1dc-9619e6b21be9\") " pod="calico-system/calico-typha-6c7cbff7c4-s2hzx"
Sep 12 05:48:54.828224 kubelet[2749]: I0912 05:48:54.828220 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e0cb7d0a-a7a8-477c-b1dc-9619e6b21be9-typha-certs\") pod \"calico-typha-6c7cbff7c4-s2hzx\" (UID: \"e0cb7d0a-a7a8-477c-b1dc-9619e6b21be9\") " pod="calico-system/calico-typha-6c7cbff7c4-s2hzx"
Sep 12 05:48:55.080816 systemd[1]: Created slice kubepods-besteffort-pod4f2ba26a_a2bd_4313_abc4_3c6db025a057.slice - libcontainer container kubepods-besteffort-pod4f2ba26a_a2bd_4313_abc4_3c6db025a057.slice.
Sep 12 05:48:55.128312 kubelet[2749]: E0912 05:48:55.128253 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:55.128912 containerd[1592]: time="2025-09-12T05:48:55.128839241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c7cbff7c4-s2hzx,Uid:e0cb7d0a-a7a8-477c-b1dc-9619e6b21be9,Namespace:calico-system,Attempt:0,}"
Sep 12 05:48:55.130579 kubelet[2749]: I0912 05:48:55.130507 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4f2ba26a-a2bd-4313-abc4-3c6db025a057-flexvol-driver-host\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.130674 kubelet[2749]: I0912 05:48:55.130589 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4f2ba26a-a2bd-4313-abc4-3c6db025a057-policysync\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.130674 kubelet[2749]: I0912 05:48:55.130613 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f2ba26a-a2bd-4313-abc4-3c6db025a057-xtables-lock\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.130674 kubelet[2749]: I0912 05:48:55.130629 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f2ba26a-a2bd-4313-abc4-3c6db025a057-tigera-ca-bundle\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.130674 kubelet[2749]: I0912 05:48:55.130645 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4f2ba26a-a2bd-4313-abc4-3c6db025a057-var-run-calico\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.130674 kubelet[2749]: I0912 05:48:55.130662 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4f2ba26a-a2bd-4313-abc4-3c6db025a057-cni-bin-dir\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.130875 kubelet[2749]: I0912 05:48:55.130675 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbvc\" (UniqueName: \"kubernetes.io/projected/4f2ba26a-a2bd-4313-abc4-3c6db025a057-kube-api-access-2dbvc\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.130875 kubelet[2749]: I0912 05:48:55.130742 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4f2ba26a-a2bd-4313-abc4-3c6db025a057-cni-log-dir\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.130875 kubelet[2749]: I0912 05:48:55.130783 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4f2ba26a-a2bd-4313-abc4-3c6db025a057-cni-net-dir\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.130875 kubelet[2749]: I0912 05:48:55.130803 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4f2ba26a-a2bd-4313-abc4-3c6db025a057-node-certs\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.130875 kubelet[2749]: I0912 05:48:55.130831 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f2ba26a-a2bd-4313-abc4-3c6db025a057-lib-modules\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.131007 kubelet[2749]: I0912 05:48:55.130848 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f2ba26a-a2bd-4313-abc4-3c6db025a057-var-lib-calico\") pod \"calico-node-959q9\" (UID: \"4f2ba26a-a2bd-4313-abc4-3c6db025a057\") " pod="calico-system/calico-node-959q9"
Sep 12 05:48:55.168429 containerd[1592]: time="2025-09-12T05:48:55.168368890Z" level=info msg="connecting to shim 949edd594f0f5b17a5c182595b8f10a53e2f5f45ac90ced408e7ea1c7a43bf75" address="unix:///run/containerd/s/a2a1c7019c7bed8b97d434757ee98dccd130ec8af039fd8fe97939ba14551022" namespace=k8s.io protocol=ttrpc version=3
Sep 12 05:48:55.196292 systemd[1]: Started cri-containerd-949edd594f0f5b17a5c182595b8f10a53e2f5f45ac90ced408e7ea1c7a43bf75.scope - libcontainer container 949edd594f0f5b17a5c182595b8f10a53e2f5f45ac90ced408e7ea1c7a43bf75.
Sep 12 05:48:55.241604 kubelet[2749]: E0912 05:48:55.241409 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:55.241604 kubelet[2749]: W0912 05:48:55.241428 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:55.241604 kubelet[2749]: E0912 05:48:55.241451 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:55.241779 kubelet[2749]: E0912 05:48:55.241734 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:55.241779 kubelet[2749]: W0912 05:48:55.241753 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:55.241779 kubelet[2749]: E0912 05:48:55.241774 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:55.252180 containerd[1592]: time="2025-09-12T05:48:55.252130895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c7cbff7c4-s2hzx,Uid:e0cb7d0a-a7a8-477c-b1dc-9619e6b21be9,Namespace:calico-system,Attempt:0,} returns sandbox id \"949edd594f0f5b17a5c182595b8f10a53e2f5f45ac90ced408e7ea1c7a43bf75\""
Sep 12 05:48:55.252937 kubelet[2749]: E0912 05:48:55.252903 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:48:55.253610 containerd[1592]: time="2025-09-12T05:48:55.253586881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 05:48:55.378120 kubelet[2749]: E0912 05:48:55.377478 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tc6l" podUID="87ee9e6e-7669-4a36-a669-9a05a8ff4705"
Sep 12 05:48:55.385402 containerd[1592]: time="2025-09-12T05:48:55.385112738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-959q9,Uid:4f2ba26a-a2bd-4313-abc4-3c6db025a057,Namespace:calico-system,Attempt:0,}"
Sep 12 05:48:55.413418 kubelet[2749]: E0912 05:48:55.413375 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:55.413791 kubelet[2749]: W0912 05:48:55.413701 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:55.413791 kubelet[2749]: E0912 05:48:55.413738 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:55.414103 kubelet[2749]: E0912 05:48:55.414090 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:55.414258 kubelet[2749]: W0912 05:48:55.414172 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:55.414258 kubelet[2749]: E0912 05:48:55.414190 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:55.415560 containerd[1592]: time="2025-09-12T05:48:55.415305978Z" level=info msg="connecting to shim 6cc7a9e26c13180174fc601b6b701378b901466bd684f7b201292a83ab5dc934" address="unix:///run/containerd/s/3b29ca39f308f572e169b9ac08d2aa003973b13f021f47897ab8a8b80aac79b7" namespace=k8s.io protocol=ttrpc version=3
Sep 12 05:48:55.418826 kubelet[2749]: E0912 05:48:55.415377 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:55.418826 kubelet[2749]: W0912 05:48:55.415389 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:55.418826 kubelet[2749]: E0912 05:48:55.418577 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:55.419138 kubelet[2749]: E0912 05:48:55.419001 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:55.419138 kubelet[2749]: W0912 05:48:55.419016 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:55.419138 kubelet[2749]: E0912 05:48:55.419029 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:55.419323 kubelet[2749]: E0912 05:48:55.419309 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:55.419483 kubelet[2749]: W0912 05:48:55.419378 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:55.419483 kubelet[2749]: E0912 05:48:55.419392 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 12 05:48:55.419652 kubelet[2749]: E0912 05:48:55.419631 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.419652 kubelet[2749]: W0912 05:48:55.419645 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.419836 kubelet[2749]: E0912 05:48:55.419657 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.419937 kubelet[2749]: E0912 05:48:55.419899 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.419937 kubelet[2749]: W0912 05:48:55.419916 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.419937 kubelet[2749]: E0912 05:48:55.419926 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.420140 kubelet[2749]: E0912 05:48:55.420128 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.420168 kubelet[2749]: W0912 05:48:55.420140 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.420168 kubelet[2749]: E0912 05:48:55.420152 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.420421 kubelet[2749]: E0912 05:48:55.420400 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.420421 kubelet[2749]: W0912 05:48:55.420417 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.420482 kubelet[2749]: E0912 05:48:55.420429 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.420694 kubelet[2749]: E0912 05:48:55.420661 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.420694 kubelet[2749]: W0912 05:48:55.420690 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.420767 kubelet[2749]: E0912 05:48:55.420703 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.420994 kubelet[2749]: E0912 05:48:55.420961 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.420994 kubelet[2749]: W0912 05:48:55.420977 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.420994 kubelet[2749]: E0912 05:48:55.420988 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.421253 kubelet[2749]: E0912 05:48:55.421234 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.421282 kubelet[2749]: W0912 05:48:55.421268 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.421314 kubelet[2749]: E0912 05:48:55.421281 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.421535 kubelet[2749]: E0912 05:48:55.421496 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.421535 kubelet[2749]: W0912 05:48:55.421510 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.421599 kubelet[2749]: E0912 05:48:55.421545 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.421792 kubelet[2749]: E0912 05:48:55.421771 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.421792 kubelet[2749]: W0912 05:48:55.421787 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.421857 kubelet[2749]: E0912 05:48:55.421798 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.422007 kubelet[2749]: E0912 05:48:55.421986 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.422007 kubelet[2749]: W0912 05:48:55.422000 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.422063 kubelet[2749]: E0912 05:48:55.422010 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.422238 kubelet[2749]: E0912 05:48:55.422214 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.422238 kubelet[2749]: W0912 05:48:55.422229 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.422238 kubelet[2749]: E0912 05:48:55.422240 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.422503 kubelet[2749]: E0912 05:48:55.422479 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.422503 kubelet[2749]: W0912 05:48:55.422495 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.422594 kubelet[2749]: E0912 05:48:55.422507 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.422849 kubelet[2749]: E0912 05:48:55.422829 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.422849 kubelet[2749]: W0912 05:48:55.422845 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.422930 kubelet[2749]: E0912 05:48:55.422858 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.423122 kubelet[2749]: E0912 05:48:55.423101 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.423122 kubelet[2749]: W0912 05:48:55.423118 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.423184 kubelet[2749]: E0912 05:48:55.423132 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.423373 kubelet[2749]: E0912 05:48:55.423353 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.423373 kubelet[2749]: W0912 05:48:55.423370 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.423437 kubelet[2749]: E0912 05:48:55.423382 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.433189 kubelet[2749]: E0912 05:48:55.433157 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.433189 kubelet[2749]: W0912 05:48:55.433176 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.433269 kubelet[2749]: E0912 05:48:55.433193 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.433480 kubelet[2749]: I0912 05:48:55.433457 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm954\" (UniqueName: \"kubernetes.io/projected/87ee9e6e-7669-4a36-a669-9a05a8ff4705-kube-api-access-mm954\") pod \"csi-node-driver-6tc6l\" (UID: \"87ee9e6e-7669-4a36-a669-9a05a8ff4705\") " pod="calico-system/csi-node-driver-6tc6l" Sep 12 05:48:55.433797 kubelet[2749]: E0912 05:48:55.433774 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.433797 kubelet[2749]: W0912 05:48:55.433792 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.433869 kubelet[2749]: E0912 05:48:55.433805 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.434061 kubelet[2749]: I0912 05:48:55.434038 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/87ee9e6e-7669-4a36-a669-9a05a8ff4705-registration-dir\") pod \"csi-node-driver-6tc6l\" (UID: \"87ee9e6e-7669-4a36-a669-9a05a8ff4705\") " pod="calico-system/csi-node-driver-6tc6l" Sep 12 05:48:55.434355 kubelet[2749]: E0912 05:48:55.434311 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.434355 kubelet[2749]: W0912 05:48:55.434328 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.434355 kubelet[2749]: E0912 05:48:55.434341 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.434355 kubelet[2749]: I0912 05:48:55.434364 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/87ee9e6e-7669-4a36-a669-9a05a8ff4705-varrun\") pod \"csi-node-driver-6tc6l\" (UID: \"87ee9e6e-7669-4a36-a669-9a05a8ff4705\") " pod="calico-system/csi-node-driver-6tc6l" Sep 12 05:48:55.434663 kubelet[2749]: E0912 05:48:55.434642 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.434663 kubelet[2749]: W0912 05:48:55.434658 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.434758 kubelet[2749]: E0912 05:48:55.434671 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.434758 kubelet[2749]: I0912 05:48:55.434703 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87ee9e6e-7669-4a36-a669-9a05a8ff4705-kubelet-dir\") pod \"csi-node-driver-6tc6l\" (UID: \"87ee9e6e-7669-4a36-a669-9a05a8ff4705\") " pod="calico-system/csi-node-driver-6tc6l" Sep 12 05:48:55.434975 kubelet[2749]: E0912 05:48:55.434944 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.434975 kubelet[2749]: W0912 05:48:55.434959 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.434975 kubelet[2749]: E0912 05:48:55.434970 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.435090 kubelet[2749]: I0912 05:48:55.435070 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/87ee9e6e-7669-4a36-a669-9a05a8ff4705-socket-dir\") pod \"csi-node-driver-6tc6l\" (UID: \"87ee9e6e-7669-4a36-a669-9a05a8ff4705\") " pod="calico-system/csi-node-driver-6tc6l" Sep 12 05:48:55.435366 kubelet[2749]: E0912 05:48:55.435337 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.435366 kubelet[2749]: W0912 05:48:55.435351 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.435366 kubelet[2749]: E0912 05:48:55.435363 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.435855 kubelet[2749]: E0912 05:48:55.435833 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.435855 kubelet[2749]: W0912 05:48:55.435847 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.435952 kubelet[2749]: E0912 05:48:55.435860 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.436157 kubelet[2749]: E0912 05:48:55.436137 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.436157 kubelet[2749]: W0912 05:48:55.436150 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.436253 kubelet[2749]: E0912 05:48:55.436162 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.436455 kubelet[2749]: E0912 05:48:55.436434 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.436455 kubelet[2749]: W0912 05:48:55.436448 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.436587 kubelet[2749]: E0912 05:48:55.436459 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.436813 kubelet[2749]: E0912 05:48:55.436782 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.436813 kubelet[2749]: W0912 05:48:55.436795 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.436813 kubelet[2749]: E0912 05:48:55.436807 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.437292 kubelet[2749]: E0912 05:48:55.437261 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.437292 kubelet[2749]: W0912 05:48:55.437274 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.437292 kubelet[2749]: E0912 05:48:55.437286 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.437654 kubelet[2749]: E0912 05:48:55.437635 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.437654 kubelet[2749]: W0912 05:48:55.437647 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.437966 kubelet[2749]: E0912 05:48:55.437937 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.438250 kubelet[2749]: E0912 05:48:55.438229 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.438250 kubelet[2749]: W0912 05:48:55.438242 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.438344 kubelet[2749]: E0912 05:48:55.438254 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.438711 kubelet[2749]: E0912 05:48:55.438690 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.438711 kubelet[2749]: W0912 05:48:55.438703 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.438804 kubelet[2749]: E0912 05:48:55.438715 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.439084 kubelet[2749]: E0912 05:48:55.439050 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.439084 kubelet[2749]: W0912 05:48:55.439064 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.439084 kubelet[2749]: E0912 05:48:55.439076 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.440650 systemd[1]: Started cri-containerd-6cc7a9e26c13180174fc601b6b701378b901466bd684f7b201292a83ab5dc934.scope - libcontainer container 6cc7a9e26c13180174fc601b6b701378b901466bd684f7b201292a83ab5dc934. 
Sep 12 05:48:55.511396 containerd[1592]: time="2025-09-12T05:48:55.511246791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-959q9,Uid:4f2ba26a-a2bd-4313-abc4-3c6db025a057,Namespace:calico-system,Attempt:0,} returns sandbox id \"6cc7a9e26c13180174fc601b6b701378b901466bd684f7b201292a83ab5dc934\"" Sep 12 05:48:55.536064 kubelet[2749]: E0912 05:48:55.536016 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.536064 kubelet[2749]: W0912 05:48:55.536042 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.536064 kubelet[2749]: E0912 05:48:55.536064 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.536348 kubelet[2749]: E0912 05:48:55.536322 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.536348 kubelet[2749]: W0912 05:48:55.536334 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.536348 kubelet[2749]: E0912 05:48:55.536343 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.536578 kubelet[2749]: E0912 05:48:55.536560 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.536578 kubelet[2749]: W0912 05:48:55.536571 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.536641 kubelet[2749]: E0912 05:48:55.536580 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.536954 kubelet[2749]: E0912 05:48:55.536920 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.536994 kubelet[2749]: W0912 05:48:55.536951 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.536994 kubelet[2749]: E0912 05:48:55.536976 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.537221 kubelet[2749]: E0912 05:48:55.537198 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.537221 kubelet[2749]: W0912 05:48:55.537210 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.537221 kubelet[2749]: E0912 05:48:55.537218 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.537413 kubelet[2749]: E0912 05:48:55.537398 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.537413 kubelet[2749]: W0912 05:48:55.537409 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.537461 kubelet[2749]: E0912 05:48:55.537418 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.537745 kubelet[2749]: E0912 05:48:55.537694 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.537745 kubelet[2749]: W0912 05:48:55.537723 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.537745 kubelet[2749]: E0912 05:48:55.537755 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.538009 kubelet[2749]: E0912 05:48:55.537988 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.538009 kubelet[2749]: W0912 05:48:55.537997 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.538058 kubelet[2749]: E0912 05:48:55.538005 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.538209 kubelet[2749]: E0912 05:48:55.538193 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.538209 kubelet[2749]: W0912 05:48:55.538206 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.538285 kubelet[2749]: E0912 05:48:55.538215 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.538403 kubelet[2749]: E0912 05:48:55.538377 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.538403 kubelet[2749]: W0912 05:48:55.538388 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.538403 kubelet[2749]: E0912 05:48:55.538396 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.538616 kubelet[2749]: E0912 05:48:55.538592 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.538616 kubelet[2749]: W0912 05:48:55.538604 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.538616 kubelet[2749]: E0912 05:48:55.538613 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.538864 kubelet[2749]: E0912 05:48:55.538847 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.538864 kubelet[2749]: W0912 05:48:55.538858 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.538919 kubelet[2749]: E0912 05:48:55.538866 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.546075 kubelet[2749]: E0912 05:48:55.546045 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.546075 kubelet[2749]: W0912 05:48:55.546069 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.546167 kubelet[2749]: E0912 05:48:55.546089 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.546355 kubelet[2749]: E0912 05:48:55.546316 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.546355 kubelet[2749]: W0912 05:48:55.546328 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.546355 kubelet[2749]: E0912 05:48:55.546337 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.546585 kubelet[2749]: E0912 05:48:55.546566 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.546585 kubelet[2749]: W0912 05:48:55.546578 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.546654 kubelet[2749]: E0912 05:48:55.546590 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.546812 kubelet[2749]: E0912 05:48:55.546790 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.546853 kubelet[2749]: W0912 05:48:55.546814 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.546853 kubelet[2749]: E0912 05:48:55.546824 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.547056 kubelet[2749]: E0912 05:48:55.547038 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.547056 kubelet[2749]: W0912 05:48:55.547050 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.547122 kubelet[2749]: E0912 05:48:55.547059 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.547279 kubelet[2749]: E0912 05:48:55.547251 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.547279 kubelet[2749]: W0912 05:48:55.547265 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.547279 kubelet[2749]: E0912 05:48:55.547276 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.547454 kubelet[2749]: E0912 05:48:55.547439 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.547454 kubelet[2749]: W0912 05:48:55.547450 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.547504 kubelet[2749]: E0912 05:48:55.547459 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.547885 kubelet[2749]: E0912 05:48:55.547774 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.547885 kubelet[2749]: W0912 05:48:55.547793 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.547885 kubelet[2749]: E0912 05:48:55.547807 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.548095 kubelet[2749]: E0912 05:48:55.548018 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.548095 kubelet[2749]: W0912 05:48:55.548028 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.548095 kubelet[2749]: E0912 05:48:55.548037 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.548290 kubelet[2749]: E0912 05:48:55.548269 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.548290 kubelet[2749]: W0912 05:48:55.548287 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.548347 kubelet[2749]: E0912 05:48:55.548300 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.548617 kubelet[2749]: E0912 05:48:55.548597 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.548617 kubelet[2749]: W0912 05:48:55.548613 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.548683 kubelet[2749]: E0912 05:48:55.548626 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.548943 kubelet[2749]: E0912 05:48:55.548924 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.548943 kubelet[2749]: W0912 05:48:55.548939 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.549001 kubelet[2749]: E0912 05:48:55.548953 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:55.549206 kubelet[2749]: E0912 05:48:55.549188 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.549206 kubelet[2749]: W0912 05:48:55.549202 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.549274 kubelet[2749]: E0912 05:48:55.549214 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:55.761014 kubelet[2749]: E0912 05:48:55.760963 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:55.761014 kubelet[2749]: W0912 05:48:55.760993 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:55.761014 kubelet[2749]: E0912 05:48:55.761019 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:56.928160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861355538.mount: Deactivated successfully. 
Sep 12 05:48:57.513723 kubelet[2749]: E0912 05:48:57.513642 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tc6l" podUID="87ee9e6e-7669-4a36-a669-9a05a8ff4705" Sep 12 05:48:58.725724 containerd[1592]: time="2025-09-12T05:48:58.725572192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 05:48:58.726618 containerd[1592]: time="2025-09-12T05:48:58.725958267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:58.733237 containerd[1592]: time="2025-09-12T05:48:58.733172003Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:58.735795 containerd[1592]: time="2025-09-12T05:48:58.735763226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:48:58.736339 containerd[1592]: time="2025-09-12T05:48:58.736298106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.482681443s" Sep 12 05:48:58.736406 containerd[1592]: time="2025-09-12T05:48:58.736341363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 05:48:58.737544 containerd[1592]: time="2025-09-12T05:48:58.737471340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 05:48:58.752063 containerd[1592]: time="2025-09-12T05:48:58.752010389Z" level=info msg="CreateContainer within sandbox \"949edd594f0f5b17a5c182595b8f10a53e2f5f45ac90ced408e7ea1c7a43bf75\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 05:48:58.761509 containerd[1592]: time="2025-09-12T05:48:58.761459695Z" level=info msg="Container a145d8a923f43cda3e967244c578fba0436eaafb44a02336fd3ed7056d58527f: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:48:58.774946 containerd[1592]: time="2025-09-12T05:48:58.774871652Z" level=info msg="CreateContainer within sandbox \"949edd594f0f5b17a5c182595b8f10a53e2f5f45ac90ced408e7ea1c7a43bf75\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a145d8a923f43cda3e967244c578fba0436eaafb44a02336fd3ed7056d58527f\"" Sep 12 05:48:58.775649 containerd[1592]: time="2025-09-12T05:48:58.775610565Z" level=info msg="StartContainer for \"a145d8a923f43cda3e967244c578fba0436eaafb44a02336fd3ed7056d58527f\"" Sep 12 05:48:58.776792 containerd[1592]: time="2025-09-12T05:48:58.776750921Z" level=info msg="connecting to shim a145d8a923f43cda3e967244c578fba0436eaafb44a02336fd3ed7056d58527f" address="unix:///run/containerd/s/a2a1c7019c7bed8b97d434757ee98dccd130ec8af039fd8fe97939ba14551022" protocol=ttrpc version=3 Sep 12 05:48:58.798699 systemd[1]: Started cri-containerd-a145d8a923f43cda3e967244c578fba0436eaafb44a02336fd3ed7056d58527f.scope - libcontainer container a145d8a923f43cda3e967244c578fba0436eaafb44a02336fd3ed7056d58527f. 
Sep 12 05:48:58.859237 containerd[1592]: time="2025-09-12T05:48:58.859173715Z" level=info msg="StartContainer for \"a145d8a923f43cda3e967244c578fba0436eaafb44a02336fd3ed7056d58527f\" returns successfully" Sep 12 05:48:59.513991 kubelet[2749]: E0912 05:48:59.513905 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tc6l" podUID="87ee9e6e-7669-4a36-a669-9a05a8ff4705" Sep 12 05:48:59.580942 kubelet[2749]: E0912 05:48:59.580896 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:48:59.648740 kubelet[2749]: E0912 05:48:59.648690 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.648740 kubelet[2749]: W0912 05:48:59.648718 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.648740 kubelet[2749]: E0912 05:48:59.648745 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:59.648953 kubelet[2749]: E0912 05:48:59.648913 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.648953 kubelet[2749]: W0912 05:48:59.648921 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.648953 kubelet[2749]: E0912 05:48:59.648929 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:59.649089 kubelet[2749]: E0912 05:48:59.649075 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.649089 kubelet[2749]: W0912 05:48:59.649085 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.649171 kubelet[2749]: E0912 05:48:59.649092 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:59.649444 kubelet[2749]: E0912 05:48:59.649419 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.649444 kubelet[2749]: W0912 05:48:59.649432 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.649444 kubelet[2749]: E0912 05:48:59.649443 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:59.649710 kubelet[2749]: E0912 05:48:59.649685 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.649710 kubelet[2749]: W0912 05:48:59.649699 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.649840 kubelet[2749]: E0912 05:48:59.649718 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:59.649913 kubelet[2749]: E0912 05:48:59.649887 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.649913 kubelet[2749]: W0912 05:48:59.649897 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.649913 kubelet[2749]: E0912 05:48:59.649906 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:59.650093 kubelet[2749]: E0912 05:48:59.650078 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.650093 kubelet[2749]: W0912 05:48:59.650092 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.650093 kubelet[2749]: E0912 05:48:59.650102 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:59.650340 kubelet[2749]: E0912 05:48:59.650321 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.650340 kubelet[2749]: W0912 05:48:59.650336 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.650418 kubelet[2749]: E0912 05:48:59.650349 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:59.650673 kubelet[2749]: E0912 05:48:59.650554 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.650673 kubelet[2749]: W0912 05:48:59.650568 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.650673 kubelet[2749]: E0912 05:48:59.650578 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:59.650855 kubelet[2749]: E0912 05:48:59.650822 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.650855 kubelet[2749]: W0912 05:48:59.650834 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.650976 kubelet[2749]: E0912 05:48:59.650863 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:59.651035 kubelet[2749]: E0912 05:48:59.651026 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.651035 kubelet[2749]: W0912 05:48:59.651033 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.651128 kubelet[2749]: E0912 05:48:59.651042 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:59.651264 kubelet[2749]: E0912 05:48:59.651247 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.651264 kubelet[2749]: W0912 05:48:59.651261 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.651324 kubelet[2749]: E0912 05:48:59.651270 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:59.651486 kubelet[2749]: E0912 05:48:59.651472 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.651486 kubelet[2749]: W0912 05:48:59.651483 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.651587 kubelet[2749]: E0912 05:48:59.651491 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:59.651733 kubelet[2749]: E0912 05:48:59.651719 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.651733 kubelet[2749]: W0912 05:48:59.651730 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.651801 kubelet[2749]: E0912 05:48:59.651740 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:59.651919 kubelet[2749]: E0912 05:48:59.651889 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.651919 kubelet[2749]: W0912 05:48:59.651899 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.651919 kubelet[2749]: E0912 05:48:59.651910 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:59.674732 kubelet[2749]: E0912 05:48:59.674703 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.674732 kubelet[2749]: W0912 05:48:59.674720 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.674732 kubelet[2749]: E0912 05:48:59.674735 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:59.674981 kubelet[2749]: E0912 05:48:59.674959 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.674981 kubelet[2749]: W0912 05:48:59.674972 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.675038 kubelet[2749]: E0912 05:48:59.674983 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:59.675200 kubelet[2749]: E0912 05:48:59.675184 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.675200 kubelet[2749]: W0912 05:48:59.675194 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.675266 kubelet[2749]: E0912 05:48:59.675202 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:59.675413 kubelet[2749]: E0912 05:48:59.675391 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.675413 kubelet[2749]: W0912 05:48:59.675402 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.675413 kubelet[2749]: E0912 05:48:59.675410 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 05:48:59.675640 kubelet[2749]: E0912 05:48:59.675619 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.675640 kubelet[2749]: W0912 05:48:59.675633 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.675695 kubelet[2749]: E0912 05:48:59.675645 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 05:48:59.675865 kubelet[2749]: E0912 05:48:59.675850 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 05:48:59.675865 kubelet[2749]: W0912 05:48:59.675862 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 05:48:59.675919 kubelet[2749]: E0912 05:48:59.675873 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 05:48:59.676118 kubelet[2749]: E0912 05:48:59.676095 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.676152 kubelet[2749]: W0912 05:48:59.676118 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.676152 kubelet[2749]: E0912 05:48:59.676129 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.676341 kubelet[2749]: E0912 05:48:59.676327 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.676341 kubelet[2749]: W0912 05:48:59.676339 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.676394 kubelet[2749]: E0912 05:48:59.676350 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.676568 kubelet[2749]: E0912 05:48:59.676555 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.676602 kubelet[2749]: W0912 05:48:59.676579 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.676602 kubelet[2749]: E0912 05:48:59.676589 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.676802 kubelet[2749]: E0912 05:48:59.676787 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.676828 kubelet[2749]: W0912 05:48:59.676801 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.676828 kubelet[2749]: E0912 05:48:59.676812 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.677042 kubelet[2749]: E0912 05:48:59.677028 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.677074 kubelet[2749]: W0912 05:48:59.677041 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.677074 kubelet[2749]: E0912 05:48:59.677052 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.677279 kubelet[2749]: E0912 05:48:59.677263 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.677279 kubelet[2749]: W0912 05:48:59.677277 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.677333 kubelet[2749]: E0912 05:48:59.677289 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.677500 kubelet[2749]: E0912 05:48:59.677488 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.677500 kubelet[2749]: W0912 05:48:59.677498 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.677561 kubelet[2749]: E0912 05:48:59.677507 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.677783 kubelet[2749]: E0912 05:48:59.677755 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.677783 kubelet[2749]: W0912 05:48:59.677772 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.677783 kubelet[2749]: E0912 05:48:59.677782 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.677947 kubelet[2749]: E0912 05:48:59.677933 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.677947 kubelet[2749]: W0912 05:48:59.677941 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.677999 kubelet[2749]: E0912 05:48:59.677949 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.678134 kubelet[2749]: E0912 05:48:59.678121 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.678134 kubelet[2749]: W0912 05:48:59.678130 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.678189 kubelet[2749]: E0912 05:48:59.678138 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.678437 kubelet[2749]: E0912 05:48:59.678422 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.678437 kubelet[2749]: W0912 05:48:59.678434 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.678495 kubelet[2749]: E0912 05:48:59.678445 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:48:59.678652 kubelet[2749]: E0912 05:48:59.678640 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 05:48:59.678652 kubelet[2749]: W0912 05:48:59.678651 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 05:48:59.678694 kubelet[2749]: E0912 05:48:59.678659 2749 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 05:49:00.388831 containerd[1592]: time="2025-09-12T05:49:00.388756457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:00.389821 containerd[1592]: time="2025-09-12T05:49:00.389761413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 12 05:49:00.391175 containerd[1592]: time="2025-09-12T05:49:00.391133007Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:00.393348 containerd[1592]: time="2025-09-12T05:49:00.393310412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:00.393977 containerd[1592]: time="2025-09-12T05:49:00.393910174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.656350417s"
Sep 12 05:49:00.393977 containerd[1592]: time="2025-09-12T05:49:00.393962598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 12 05:49:00.400411 containerd[1592]: time="2025-09-12T05:49:00.400370498Z" level=info msg="CreateContainer within sandbox \"6cc7a9e26c13180174fc601b6b701378b901466bd684f7b201292a83ab5dc934\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 05:49:00.410834 containerd[1592]: time="2025-09-12T05:49:00.410762133Z" level=info msg="Container 391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:49:00.420379 containerd[1592]: time="2025-09-12T05:49:00.420327491Z" level=info msg="CreateContainer within sandbox \"6cc7a9e26c13180174fc601b6b701378b901466bd684f7b201292a83ab5dc934\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0\""
Sep 12 05:49:00.421035 containerd[1592]: time="2025-09-12T05:49:00.420937893Z" level=info msg="StartContainer for \"391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0\""
Sep 12 05:49:00.422476 containerd[1592]: time="2025-09-12T05:49:00.422419402Z" level=info msg="connecting to shim 391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0" address="unix:///run/containerd/s/3b29ca39f308f572e169b9ac08d2aa003973b13f021f47897ab8a8b80aac79b7" protocol=ttrpc version=3
Sep 12 05:49:00.447951 systemd[1]: Started cri-containerd-391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0.scope - libcontainer container 391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0.
Sep 12 05:49:00.531785 systemd[1]: cri-containerd-391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0.scope: Deactivated successfully.
Sep 12 05:49:00.532375 systemd[1]: cri-containerd-391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0.scope: Consumed 42ms CPU time, 6.4M memory peak, 3.6M written to disk.
Sep 12 05:49:00.534896 containerd[1592]: time="2025-09-12T05:49:00.534836096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0\" id:\"391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0\" pid:3461 exited_at:{seconds:1757656140 nanos:534059948}"
Sep 12 05:49:00.697764 containerd[1592]: time="2025-09-12T05:49:00.697311288Z" level=info msg="received exit event container_id:\"391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0\" id:\"391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0\" pid:3461 exited_at:{seconds:1757656140 nanos:534059948}"
Sep 12 05:49:00.700620 containerd[1592]: time="2025-09-12T05:49:00.700556953Z" level=info msg="StartContainer for \"391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0\" returns successfully"
Sep 12 05:49:00.701307 kubelet[2749]: I0912 05:49:00.701174 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 05:49:00.702251 kubelet[2749]: E0912 05:49:00.701508 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:49:00.727216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-391b9abb8ac24400bb7e9a7e3a1e0c85a4dbb192385763fd35afdb10beda3eb0-rootfs.mount: Deactivated successfully.
Sep 12 05:49:01.513794 kubelet[2749]: E0912 05:49:01.513735 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tc6l" podUID="87ee9e6e-7669-4a36-a669-9a05a8ff4705"
Sep 12 05:49:01.705865 containerd[1592]: time="2025-09-12T05:49:01.705818966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 05:49:01.900527 kubelet[2749]: I0912 05:49:01.900368 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c7cbff7c4-s2hzx" podStartSLOduration=4.416446526 podStartE2EDuration="7.900348763s" podCreationTimestamp="2025-09-12 05:48:54 +0000 UTC" firstStartedPulling="2025-09-12 05:48:55.253338374 +0000 UTC m=+20.844428804" lastFinishedPulling="2025-09-12 05:48:58.73724061 +0000 UTC m=+24.328331041" observedRunningTime="2025-09-12 05:48:59.59420681 +0000 UTC m=+25.185297260" watchObservedRunningTime="2025-09-12 05:49:01.900348763 +0000 UTC m=+27.491439193"
Sep 12 05:49:03.239246 kubelet[2749]: I0912 05:49:03.238996 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 05:49:03.239749 kubelet[2749]: E0912 05:49:03.239592 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:49:03.514189 kubelet[2749]: E0912 05:49:03.514009 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tc6l" podUID="87ee9e6e-7669-4a36-a669-9a05a8ff4705"
Sep 12 05:49:03.708111 kubelet[2749]: E0912 05:49:03.708064 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:49:05.513392 kubelet[2749]: E0912 05:49:05.513271 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6tc6l" podUID="87ee9e6e-7669-4a36-a669-9a05a8ff4705"
Sep 12 05:49:05.545555 containerd[1592]: time="2025-09-12T05:49:05.545468369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:05.546143 containerd[1592]: time="2025-09-12T05:49:05.546115571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 12 05:49:05.547215 containerd[1592]: time="2025-09-12T05:49:05.547166166Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:05.550233 containerd[1592]: time="2025-09-12T05:49:05.550185267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:05.550953 containerd[1592]: time="2025-09-12T05:49:05.550898840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.845039913s"
Sep 12 05:49:05.551004 containerd[1592]: time="2025-09-12T05:49:05.550957077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 12 05:49:05.560686 containerd[1592]: time="2025-09-12T05:49:05.560641599Z" level=info msg="CreateContainer within sandbox \"6cc7a9e26c13180174fc601b6b701378b901466bd684f7b201292a83ab5dc934\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 05:49:05.573167 containerd[1592]: time="2025-09-12T05:49:05.573130915Z" level=info msg="Container fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:49:05.589171 containerd[1592]: time="2025-09-12T05:49:05.589116888Z" level=info msg="CreateContainer within sandbox \"6cc7a9e26c13180174fc601b6b701378b901466bd684f7b201292a83ab5dc934\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4\""
Sep 12 05:49:05.589975 containerd[1592]: time="2025-09-12T05:49:05.589937295Z" level=info msg="StartContainer for \"fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4\""
Sep 12 05:49:05.591419 containerd[1592]: time="2025-09-12T05:49:05.591390740Z" level=info msg="connecting to shim fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4" address="unix:///run/containerd/s/3b29ca39f308f572e169b9ac08d2aa003973b13f021f47897ab8a8b80aac79b7" protocol=ttrpc version=3
Sep 12 05:49:05.618685 systemd[1]: Started cri-containerd-fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4.scope - libcontainer container fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4.
Sep 12 05:49:05.667397 containerd[1592]: time="2025-09-12T05:49:05.667347908Z" level=info msg="StartContainer for \"fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4\" returns successfully"
Sep 12 05:49:07.080477 systemd[1]: cri-containerd-fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4.scope: Deactivated successfully.
Sep 12 05:49:07.081191 systemd[1]: cri-containerd-fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4.scope: Consumed 623ms CPU time, 178M memory peak, 3.7M read from disk, 171.3M written to disk.
Sep 12 05:49:07.082965 containerd[1592]: time="2025-09-12T05:49:07.082893026Z" level=info msg="received exit event container_id:\"fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4\" id:\"fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4\" pid:3523 exited_at:{seconds:1757656147 nanos:82621552}"
Sep 12 05:49:07.083624 containerd[1592]: time="2025-09-12T05:49:07.083037940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4\" id:\"fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4\" pid:3523 exited_at:{seconds:1757656147 nanos:82621552}"
Sep 12 05:49:07.107057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fded4f03c75742bc18d8a3c5b2654806c6a6e3d3d2be9e6c3935f054fad501c4-rootfs.mount: Deactivated successfully.
Sep 12 05:49:07.153700 kubelet[2749]: I0912 05:49:07.153655 2749 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 05:49:07.453988 kubelet[2749]: I0912 05:49:07.453853 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58gmb\" (UniqueName: \"kubernetes.io/projected/d32fafcb-d2ec-415d-8d25-a8f4e903e286-kube-api-access-58gmb\") pod \"coredns-674b8bbfcf-277rx\" (UID: \"d32fafcb-d2ec-415d-8d25-a8f4e903e286\") " pod="kube-system/coredns-674b8bbfcf-277rx"
Sep 12 05:49:07.453988 kubelet[2749]: I0912 05:49:07.453902 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a65f0a-bba2-43b0-970d-51e842e79f55-config-volume\") pod \"coredns-674b8bbfcf-jhdrh\" (UID: \"81a65f0a-bba2-43b0-970d-51e842e79f55\") " pod="kube-system/coredns-674b8bbfcf-jhdrh"
Sep 12 05:49:07.453988 kubelet[2749]: I0912 05:49:07.453927 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d32fafcb-d2ec-415d-8d25-a8f4e903e286-config-volume\") pod \"coredns-674b8bbfcf-277rx\" (UID: \"d32fafcb-d2ec-415d-8d25-a8f4e903e286\") " pod="kube-system/coredns-674b8bbfcf-277rx"
Sep 12 05:49:07.453988 kubelet[2749]: I0912 05:49:07.453958 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4j8\" (UniqueName: \"kubernetes.io/projected/81a65f0a-bba2-43b0-970d-51e842e79f55-kube-api-access-mf4j8\") pod \"coredns-674b8bbfcf-jhdrh\" (UID: \"81a65f0a-bba2-43b0-970d-51e842e79f55\") " pod="kube-system/coredns-674b8bbfcf-jhdrh"
Sep 12 05:49:07.461358 systemd[1]: Created slice kubepods-burstable-podd32fafcb_d2ec_415d_8d25_a8f4e903e286.slice - libcontainer container kubepods-burstable-podd32fafcb_d2ec_415d_8d25_a8f4e903e286.slice.
Sep 12 05:49:07.469833 systemd[1]: Created slice kubepods-burstable-pod81a65f0a_bba2_43b0_970d_51e842e79f55.slice - libcontainer container kubepods-burstable-pod81a65f0a_bba2_43b0_970d_51e842e79f55.slice.
Sep 12 05:49:07.475558 systemd[1]: Created slice kubepods-besteffort-podb8cba886_abb2_433c_8204_68b288d36ff7.slice - libcontainer container kubepods-besteffort-podb8cba886_abb2_433c_8204_68b288d36ff7.slice.
Sep 12 05:49:07.483264 systemd[1]: Created slice kubepods-besteffort-pod259ac072_0726_43da_9e00_d086d6ab9458.slice - libcontainer container kubepods-besteffort-pod259ac072_0726_43da_9e00_d086d6ab9458.slice.
Sep 12 05:49:07.491599 systemd[1]: Created slice kubepods-besteffort-pod83106f83_c82f_4921_81e2_efaf85519998.slice - libcontainer container kubepods-besteffort-pod83106f83_c82f_4921_81e2_efaf85519998.slice.
Sep 12 05:49:07.499159 systemd[1]: Created slice kubepods-besteffort-pod41b672d4_ddc5_4ee1_a2ad_72eb60b23a61.slice - libcontainer container kubepods-besteffort-pod41b672d4_ddc5_4ee1_a2ad_72eb60b23a61.slice.
Sep 12 05:49:07.504542 systemd[1]: Created slice kubepods-besteffort-podbb443f83_14a6_4513_8886_5ac39261925c.slice - libcontainer container kubepods-besteffort-podbb443f83_14a6_4513_8886_5ac39261925c.slice.
Sep 12 05:49:07.521905 systemd[1]: Created slice kubepods-besteffort-pod87ee9e6e_7669_4a36_a669_9a05a8ff4705.slice - libcontainer container kubepods-besteffort-pod87ee9e6e_7669_4a36_a669_9a05a8ff4705.slice.
Sep 12 05:49:07.525622 containerd[1592]: time="2025-09-12T05:49:07.525500056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tc6l,Uid:87ee9e6e-7669-4a36-a669-9a05a8ff4705,Namespace:calico-system,Attempt:0,}"
Sep 12 05:49:07.597349 containerd[1592]: time="2025-09-12T05:49:07.597270723Z" level=error msg="Failed to destroy network for sandbox \"aba4456ccc105feb1b0228ede55105e939642690470cc76ba39befbcb109e92a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:49:07.599637 systemd[1]: run-netns-cni\x2df1491f7c\x2db917\x2dfba4\x2da1c0\x2de73bdfde9e9d.mount: Deactivated successfully.
Sep 12 05:49:07.600354 containerd[1592]: time="2025-09-12T05:49:07.600143531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tc6l,Uid:87ee9e6e-7669-4a36-a669-9a05a8ff4705,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba4456ccc105feb1b0228ede55105e939642690470cc76ba39befbcb109e92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:49:07.600577 kubelet[2749]: E0912 05:49:07.600422 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba4456ccc105feb1b0228ede55105e939642690470cc76ba39befbcb109e92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:49:07.600577 kubelet[2749]: E0912 05:49:07.600495 2749 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba4456ccc105feb1b0228ede55105e939642690470cc76ba39befbcb109e92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6tc6l"
Sep 12 05:49:07.600577 kubelet[2749]: E0912 05:49:07.600552 2749 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba4456ccc105feb1b0228ede55105e939642690470cc76ba39befbcb109e92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6tc6l"
Sep 12 05:49:07.600700 kubelet[2749]: E0912 05:49:07.600611 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6tc6l_calico-system(87ee9e6e-7669-4a36-a669-9a05a8ff4705)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6tc6l_calico-system(87ee9e6e-7669-4a36-a669-9a05a8ff4705)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aba4456ccc105feb1b0228ede55105e939642690470cc76ba39befbcb109e92a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6tc6l" podUID="87ee9e6e-7669-4a36-a669-9a05a8ff4705"
Sep 12 05:49:07.655701 kubelet[2749]: I0912 05:49:07.655631 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/259ac072-0726-43da-9e00-d086d6ab9458-whisker-backend-key-pair\") pod \"whisker-597dd5dc8b-zs456\" (UID: \"259ac072-0726-43da-9e00-d086d6ab9458\") " pod="calico-system/whisker-597dd5dc8b-zs456"
Sep 12 05:49:07.655701 kubelet[2749]: I0912 05:49:07.655674 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bb443f83-14a6-4513-8886-5ac39261925c-goldmane-key-pair\") pod \"goldmane-54d579b49d-846w2\" (UID: \"bb443f83-14a6-4513-8886-5ac39261925c\") " pod="calico-system/goldmane-54d579b49d-846w2"
Sep 12 05:49:07.655701 kubelet[2749]: I0912 05:49:07.655702 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b8cba886-abb2-433c-8204-68b288d36ff7-calico-apiserver-certs\") pod \"calico-apiserver-74859f4b68-5q8ng\" (UID: \"b8cba886-abb2-433c-8204-68b288d36ff7\") " pod="calico-apiserver/calico-apiserver-74859f4b68-5q8ng"
Sep 12 05:49:07.655701 kubelet[2749]: I0912 05:49:07.655719 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb443f83-14a6-4513-8886-5ac39261925c-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-846w2\" (UID: \"bb443f83-14a6-4513-8886-5ac39261925c\") " pod="calico-system/goldmane-54d579b49d-846w2"
Sep 12 05:49:07.655972 kubelet[2749]: I0912 05:49:07.655735 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f884v\" (UniqueName: \"kubernetes.io/projected/41b672d4-ddc5-4ee1-a2ad-72eb60b23a61-kube-api-access-f884v\") pod \"calico-apiserver-74859f4b68-5566g\" (UID: \"41b672d4-ddc5-4ee1-a2ad-72eb60b23a61\") " pod="calico-apiserver/calico-apiserver-74859f4b68-5566g"
Sep 12 05:49:07.655972 kubelet[2749]: I0912 05:49:07.655753 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259ac072-0726-43da-9e00-d086d6ab9458-whisker-ca-bundle\") pod \"whisker-597dd5dc8b-zs456\" (UID: \"259ac072-0726-43da-9e00-d086d6ab9458\") " pod="calico-system/whisker-597dd5dc8b-zs456"
Sep 12 05:49:07.655972 kubelet[2749]: I0912 05:49:07.655767 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27ljd\" (UniqueName: \"kubernetes.io/projected/259ac072-0726-43da-9e00-d086d6ab9458-kube-api-access-27ljd\") pod \"whisker-597dd5dc8b-zs456\" (UID: \"259ac072-0726-43da-9e00-d086d6ab9458\") " pod="calico-system/whisker-597dd5dc8b-zs456"
Sep 12 05:49:07.655972 kubelet[2749]: I0912 05:49:07.655784 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83106f83-c82f-4921-81e2-efaf85519998-tigera-ca-bundle\") pod \"calico-kube-controllers-677f4f4f8f-9g6nc\" (UID: \"83106f83-c82f-4921-81e2-efaf85519998\") " pod="calico-system/calico-kube-controllers-677f4f4f8f-9g6nc"
Sep 12 05:49:07.655972 kubelet[2749]: I0912 05:49:07.655799 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lfd6\" (UniqueName: \"kubernetes.io/projected/83106f83-c82f-4921-81e2-efaf85519998-kube-api-access-6lfd6\") pod \"calico-kube-controllers-677f4f4f8f-9g6nc\" (UID: \"83106f83-c82f-4921-81e2-efaf85519998\") " pod="calico-system/calico-kube-controllers-677f4f4f8f-9g6nc"
Sep 12 05:49:07.656095 kubelet[2749]: I0912 05:49:07.655818 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb443f83-14a6-4513-8886-5ac39261925c-config\") pod \"goldmane-54d579b49d-846w2\" (UID: \"bb443f83-14a6-4513-8886-5ac39261925c\") " pod="calico-system/goldmane-54d579b49d-846w2"
Sep 12 05:49:07.656095 kubelet[2749]: I0912 05:49:07.655834 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqg4w\" (UniqueName: \"kubernetes.io/projected/bb443f83-14a6-4513-8886-5ac39261925c-kube-api-access-qqg4w\") pod \"goldmane-54d579b49d-846w2\" (UID: \"bb443f83-14a6-4513-8886-5ac39261925c\") " pod="calico-system/goldmane-54d579b49d-846w2"
Sep 12 05:49:07.656095 kubelet[2749]: I0912 05:49:07.655848 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/41b672d4-ddc5-4ee1-a2ad-72eb60b23a61-calico-apiserver-certs\") pod \"calico-apiserver-74859f4b68-5566g\" (UID: \"41b672d4-ddc5-4ee1-a2ad-72eb60b23a61\") " pod="calico-apiserver/calico-apiserver-74859f4b68-5566g"
Sep 12 05:49:07.656095 kubelet[2749]: I0912 05:49:07.655864 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lskbr\" (UniqueName: \"kubernetes.io/projected/b8cba886-abb2-433c-8204-68b288d36ff7-kube-api-access-lskbr\") pod \"calico-apiserver-74859f4b68-5q8ng\" (UID: \"b8cba886-abb2-433c-8204-68b288d36ff7\") " pod="calico-apiserver/calico-apiserver-74859f4b68-5q8ng"
Sep 12 05:49:07.724970 containerd[1592]: time="2025-09-12T05:49:07.724924830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 05:49:07.765920 kubelet[2749]: E0912 05:49:07.765674 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:49:07.766857 containerd[1592]: time="2025-09-12T05:49:07.766814862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-277rx,Uid:d32fafcb-d2ec-415d-8d25-a8f4e903e286,Namespace:kube-system,Attempt:0,}"
Sep 12 05:49:07.773472 kubelet[2749]: E0912 05:49:07.773432 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:49:07.775617 containerd[1592]: time="2025-09-12T05:49:07.775568204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jhdrh,Uid:81a65f0a-bba2-43b0-970d-51e842e79f55,Namespace:kube-system,Attempt:0,}"
Sep 12 05:49:07.781550 containerd[1592]: time="2025-09-12T05:49:07.781470767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74859f4b68-5q8ng,Uid:b8cba886-abb2-433c-8204-68b288d36ff7,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 05:49:07.788151 containerd[1592]: time="2025-09-12T05:49:07.788076991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-597dd5dc8b-zs456,Uid:259ac072-0726-43da-9e00-d086d6ab9458,Namespace:calico-system,Attempt:0,}"
Sep 12 05:49:07.795699 containerd[1592]: time="2025-09-12T05:49:07.795646769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f4f4f8f-9g6nc,Uid:83106f83-c82f-4921-81e2-efaf85519998,Namespace:calico-system,Attempt:0,}"
Sep 12 05:49:07.803025 containerd[1592]: time="2025-09-12T05:49:07.802966791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74859f4b68-5566g,Uid:41b672d4-ddc5-4ee1-a2ad-72eb60b23a61,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 05:49:07.809989 containerd[1592]: time="2025-09-12T05:49:07.809844108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-846w2,Uid:bb443f83-14a6-4513-8886-5ac39261925c,Namespace:calico-system,Attempt:0,}"
Sep 12 05:49:07.887771 containerd[1592]: time="2025-09-12T05:49:07.887710530Z" level=error msg="Failed to destroy network for sandbox \"12b93779d0bf97748af466bb104d96f47c9812a474ee8edfcc6588d83f7beb71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:49:07.891097 containerd[1592]: time="2025-09-12T05:49:07.891024431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-277rx,Uid:d32fafcb-d2ec-415d-8d25-a8f4e903e286,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b93779d0bf97748af466bb104d96f47c9812a474ee8edfcc6588d83f7beb71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:49:07.891482 kubelet[2749]: E0912 05:49:07.891423 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b93779d0bf97748af466bb104d96f47c9812a474ee8edfcc6588d83f7beb71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:49:07.891668 kubelet[2749]: E0912 05:49:07.891631 2749 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b93779d0bf97748af466bb104d96f47c9812a474ee8edfcc6588d83f7beb71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-277rx"
Sep 12 05:49:07.891715 kubelet[2749]: E0912 05:49:07.891675 2749 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12b93779d0bf97748af466bb104d96f47c9812a474ee8edfcc6588d83f7beb71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-277rx"
Sep 12 05:49:07.891793 kubelet[2749]: E0912 05:49:07.891752 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-277rx_kube-system(d32fafcb-d2ec-415d-8d25-a8f4e903e286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-277rx_kube-system(d32fafcb-d2ec-415d-8d25-a8f4e903e286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12b93779d0bf97748af466bb104d96f47c9812a474ee8edfcc6588d83f7beb71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-277rx" podUID="d32fafcb-d2ec-415d-8d25-a8f4e903e286"
Sep 12 05:49:07.904498 containerd[1592]: time="2025-09-12T05:49:07.904439477Z" level=error msg="Failed to destroy network for sandbox \"ed52957719a5b98c38e32c05c565f3e349880ed7996ca6ce5496b23029e85da9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:49:07.907344 containerd[1592]: time="2025-09-12T05:49:07.907294954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74859f4b68-5q8ng,Uid:b8cba886-abb2-433c-8204-68b288d36ff7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed52957719a5b98c38e32c05c565f3e349880ed7996ca6ce5496b23029e85da9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 05:49:07.908064 kubelet[2749]: E0912 05:49:07.907758 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed52957719a5b98c38e32c05c565f3e349880ed7996ca6ce5496b23029e85da9\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.908064 kubelet[2749]: E0912 05:49:07.907842 2749 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed52957719a5b98c38e32c05c565f3e349880ed7996ca6ce5496b23029e85da9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74859f4b68-5q8ng" Sep 12 05:49:07.908064 kubelet[2749]: E0912 05:49:07.907864 2749 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed52957719a5b98c38e32c05c565f3e349880ed7996ca6ce5496b23029e85da9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74859f4b68-5q8ng" Sep 12 05:49:07.908193 kubelet[2749]: E0912 05:49:07.907924 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74859f4b68-5q8ng_calico-apiserver(b8cba886-abb2-433c-8204-68b288d36ff7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74859f4b68-5q8ng_calico-apiserver(b8cba886-abb2-433c-8204-68b288d36ff7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed52957719a5b98c38e32c05c565f3e349880ed7996ca6ce5496b23029e85da9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74859f4b68-5q8ng" podUID="b8cba886-abb2-433c-8204-68b288d36ff7" Sep 12 
05:49:07.912485 containerd[1592]: time="2025-09-12T05:49:07.912390890Z" level=error msg="Failed to destroy network for sandbox \"b48415b7cbb71dc4580be96fc0ffcf10689bfb07842636d1d25b819b6a809e42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.914011 containerd[1592]: time="2025-09-12T05:49:07.913945188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jhdrh,Uid:81a65f0a-bba2-43b0-970d-51e842e79f55,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b48415b7cbb71dc4580be96fc0ffcf10689bfb07842636d1d25b819b6a809e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.914255 kubelet[2749]: E0912 05:49:07.914206 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b48415b7cbb71dc4580be96fc0ffcf10689bfb07842636d1d25b819b6a809e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.914349 kubelet[2749]: E0912 05:49:07.914295 2749 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b48415b7cbb71dc4580be96fc0ffcf10689bfb07842636d1d25b819b6a809e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jhdrh" Sep 12 05:49:07.914349 kubelet[2749]: E0912 05:49:07.914324 2749 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b48415b7cbb71dc4580be96fc0ffcf10689bfb07842636d1d25b819b6a809e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jhdrh" Sep 12 05:49:07.914527 kubelet[2749]: E0912 05:49:07.914381 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jhdrh_kube-system(81a65f0a-bba2-43b0-970d-51e842e79f55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jhdrh_kube-system(81a65f0a-bba2-43b0-970d-51e842e79f55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b48415b7cbb71dc4580be96fc0ffcf10689bfb07842636d1d25b819b6a809e42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jhdrh" podUID="81a65f0a-bba2-43b0-970d-51e842e79f55" Sep 12 05:49:07.924510 containerd[1592]: time="2025-09-12T05:49:07.924427958Z" level=error msg="Failed to destroy network for sandbox \"8f93af2d970cfc61fcdf619f649fd87bade0caf925a6e076d54209f4c601b151\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.928495 containerd[1592]: time="2025-09-12T05:49:07.928438496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74859f4b68-5566g,Uid:41b672d4-ddc5-4ee1-a2ad-72eb60b23a61,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f93af2d970cfc61fcdf619f649fd87bade0caf925a6e076d54209f4c601b151\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.929164 kubelet[2749]: E0912 05:49:07.928924 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f93af2d970cfc61fcdf619f649fd87bade0caf925a6e076d54209f4c601b151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.929164 kubelet[2749]: E0912 05:49:07.929015 2749 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f93af2d970cfc61fcdf619f649fd87bade0caf925a6e076d54209f4c601b151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74859f4b68-5566g" Sep 12 05:49:07.929164 kubelet[2749]: E0912 05:49:07.929036 2749 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f93af2d970cfc61fcdf619f649fd87bade0caf925a6e076d54209f4c601b151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74859f4b68-5566g" Sep 12 05:49:07.929294 kubelet[2749]: E0912 05:49:07.929110 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74859f4b68-5566g_calico-apiserver(41b672d4-ddc5-4ee1-a2ad-72eb60b23a61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-74859f4b68-5566g_calico-apiserver(41b672d4-ddc5-4ee1-a2ad-72eb60b23a61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f93af2d970cfc61fcdf619f649fd87bade0caf925a6e076d54209f4c601b151\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74859f4b68-5566g" podUID="41b672d4-ddc5-4ee1-a2ad-72eb60b23a61" Sep 12 05:49:07.931177 containerd[1592]: time="2025-09-12T05:49:07.931126179Z" level=error msg="Failed to destroy network for sandbox \"11132e8e368899cfeba1443d278e8991a0676c62c5c04448cd1edd1855f9b62f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.933804 containerd[1592]: time="2025-09-12T05:49:07.933748270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-846w2,Uid:bb443f83-14a6-4513-8886-5ac39261925c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11132e8e368899cfeba1443d278e8991a0676c62c5c04448cd1edd1855f9b62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.934145 kubelet[2749]: E0912 05:49:07.934070 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11132e8e368899cfeba1443d278e8991a0676c62c5c04448cd1edd1855f9b62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.934145 kubelet[2749]: E0912 05:49:07.934150 
2749 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11132e8e368899cfeba1443d278e8991a0676c62c5c04448cd1edd1855f9b62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-846w2" Sep 12 05:49:07.934369 kubelet[2749]: E0912 05:49:07.934173 2749 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11132e8e368899cfeba1443d278e8991a0676c62c5c04448cd1edd1855f9b62f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-846w2" Sep 12 05:49:07.934369 kubelet[2749]: E0912 05:49:07.934230 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-846w2_calico-system(bb443f83-14a6-4513-8886-5ac39261925c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-846w2_calico-system(bb443f83-14a6-4513-8886-5ac39261925c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11132e8e368899cfeba1443d278e8991a0676c62c5c04448cd1edd1855f9b62f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-846w2" podUID="bb443f83-14a6-4513-8886-5ac39261925c" Sep 12 05:49:07.940644 containerd[1592]: time="2025-09-12T05:49:07.940593810Z" level=error msg="Failed to destroy network for sandbox \"6080bd53cdc09edd95e63947b76df3b2e688fee0f448e0bf2aaab9852c58c8a3\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.942318 containerd[1592]: time="2025-09-12T05:49:07.942255174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f4f4f8f-9g6nc,Uid:83106f83-c82f-4921-81e2-efaf85519998,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6080bd53cdc09edd95e63947b76df3b2e688fee0f448e0bf2aaab9852c58c8a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.942463 containerd[1592]: time="2025-09-12T05:49:07.942393115Z" level=error msg="Failed to destroy network for sandbox \"6b786e19a29558259b1c59b703055c5a3c4b0ab52a8c7780a90da8014468685a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.942629 kubelet[2749]: E0912 05:49:07.942565 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6080bd53cdc09edd95e63947b76df3b2e688fee0f448e0bf2aaab9852c58c8a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.942672 kubelet[2749]: E0912 05:49:07.942655 2749 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6080bd53cdc09edd95e63947b76df3b2e688fee0f448e0bf2aaab9852c58c8a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-677f4f4f8f-9g6nc" Sep 12 05:49:07.942697 kubelet[2749]: E0912 05:49:07.942683 2749 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6080bd53cdc09edd95e63947b76df3b2e688fee0f448e0bf2aaab9852c58c8a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677f4f4f8f-9g6nc" Sep 12 05:49:07.942812 kubelet[2749]: E0912 05:49:07.942760 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677f4f4f8f-9g6nc_calico-system(83106f83-c82f-4921-81e2-efaf85519998)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677f4f4f8f-9g6nc_calico-system(83106f83-c82f-4921-81e2-efaf85519998)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6080bd53cdc09edd95e63947b76df3b2e688fee0f448e0bf2aaab9852c58c8a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677f4f4f8f-9g6nc" podUID="83106f83-c82f-4921-81e2-efaf85519998" Sep 12 05:49:07.945594 containerd[1592]: time="2025-09-12T05:49:07.945555450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-597dd5dc8b-zs456,Uid:259ac072-0726-43da-9e00-d086d6ab9458,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b786e19a29558259b1c59b703055c5a3c4b0ab52a8c7780a90da8014468685a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 12 05:49:07.945769 kubelet[2749]: E0912 05:49:07.945740 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b786e19a29558259b1c59b703055c5a3c4b0ab52a8c7780a90da8014468685a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 05:49:07.945808 kubelet[2749]: E0912 05:49:07.945783 2749 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b786e19a29558259b1c59b703055c5a3c4b0ab52a8c7780a90da8014468685a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-597dd5dc8b-zs456" Sep 12 05:49:07.945808 kubelet[2749]: E0912 05:49:07.945799 2749 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b786e19a29558259b1c59b703055c5a3c4b0ab52a8c7780a90da8014468685a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-597dd5dc8b-zs456" Sep 12 05:49:07.945880 kubelet[2749]: E0912 05:49:07.945843 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-597dd5dc8b-zs456_calico-system(259ac072-0726-43da-9e00-d086d6ab9458)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-597dd5dc8b-zs456_calico-system(259ac072-0726-43da-9e00-d086d6ab9458)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b786e19a29558259b1c59b703055c5a3c4b0ab52a8c7780a90da8014468685a\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-597dd5dc8b-zs456" podUID="259ac072-0726-43da-9e00-d086d6ab9458" Sep 12 05:49:16.048991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount514373035.mount: Deactivated successfully. Sep 12 05:49:16.702545 containerd[1592]: time="2025-09-12T05:49:16.702432221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:16.703401 containerd[1592]: time="2025-09-12T05:49:16.703323706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 05:49:16.704840 containerd[1592]: time="2025-09-12T05:49:16.704808936Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:16.706999 containerd[1592]: time="2025-09-12T05:49:16.706966034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:16.707743 containerd[1592]: time="2025-09-12T05:49:16.707683528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.982703195s" Sep 12 05:49:16.707743 containerd[1592]: time="2025-09-12T05:49:16.707738348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference 
\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 05:49:16.736774 containerd[1592]: time="2025-09-12T05:49:16.736711197Z" level=info msg="CreateContainer within sandbox \"6cc7a9e26c13180174fc601b6b701378b901466bd684f7b201292a83ab5dc934\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 05:49:16.752081 containerd[1592]: time="2025-09-12T05:49:16.752020373Z" level=info msg="Container 6e4fd58835300cda4dfa3037608373b6ea33562bc9c0f008b37092ca0452d9db: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:16.777472 containerd[1592]: time="2025-09-12T05:49:16.777403691Z" level=info msg="CreateContainer within sandbox \"6cc7a9e26c13180174fc601b6b701378b901466bd684f7b201292a83ab5dc934\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6e4fd58835300cda4dfa3037608373b6ea33562bc9c0f008b37092ca0452d9db\"" Sep 12 05:49:16.778283 containerd[1592]: time="2025-09-12T05:49:16.778210890Z" level=info msg="StartContainer for \"6e4fd58835300cda4dfa3037608373b6ea33562bc9c0f008b37092ca0452d9db\"" Sep 12 05:49:16.779814 containerd[1592]: time="2025-09-12T05:49:16.779775325Z" level=info msg="connecting to shim 6e4fd58835300cda4dfa3037608373b6ea33562bc9c0f008b37092ca0452d9db" address="unix:///run/containerd/s/3b29ca39f308f572e169b9ac08d2aa003973b13f021f47897ab8a8b80aac79b7" protocol=ttrpc version=3 Sep 12 05:49:16.811710 systemd[1]: Started cri-containerd-6e4fd58835300cda4dfa3037608373b6ea33562bc9c0f008b37092ca0452d9db.scope - libcontainer container 6e4fd58835300cda4dfa3037608373b6ea33562bc9c0f008b37092ca0452d9db. Sep 12 05:49:16.867867 containerd[1592]: time="2025-09-12T05:49:16.867819966Z" level=info msg="StartContainer for \"6e4fd58835300cda4dfa3037608373b6ea33562bc9c0f008b37092ca0452d9db\" returns successfully" Sep 12 05:49:16.959946 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 05:49:16.960922 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Sep 12 05:49:17.413298 kubelet[2749]: I0912 05:49:17.413233 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/259ac072-0726-43da-9e00-d086d6ab9458-whisker-backend-key-pair\") pod \"259ac072-0726-43da-9e00-d086d6ab9458\" (UID: \"259ac072-0726-43da-9e00-d086d6ab9458\") " Sep 12 05:49:17.413298 kubelet[2749]: I0912 05:49:17.413283 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27ljd\" (UniqueName: \"kubernetes.io/projected/259ac072-0726-43da-9e00-d086d6ab9458-kube-api-access-27ljd\") pod \"259ac072-0726-43da-9e00-d086d6ab9458\" (UID: \"259ac072-0726-43da-9e00-d086d6ab9458\") " Sep 12 05:49:17.413298 kubelet[2749]: I0912 05:49:17.413304 2749 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259ac072-0726-43da-9e00-d086d6ab9458-whisker-ca-bundle\") pod \"259ac072-0726-43da-9e00-d086d6ab9458\" (UID: \"259ac072-0726-43da-9e00-d086d6ab9458\") " Sep 12 05:49:17.414244 kubelet[2749]: I0912 05:49:17.413878 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259ac072-0726-43da-9e00-d086d6ab9458-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "259ac072-0726-43da-9e00-d086d6ab9458" (UID: "259ac072-0726-43da-9e00-d086d6ab9458"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 05:49:17.418796 systemd[1]: var-lib-kubelet-pods-259ac072\x2d0726\x2d43da\x2d9e00\x2dd086d6ab9458-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d27ljd.mount: Deactivated successfully. Sep 12 05:49:17.418924 systemd[1]: var-lib-kubelet-pods-259ac072\x2d0726\x2d43da\x2d9e00\x2dd086d6ab9458-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 05:49:17.420305 kubelet[2749]: I0912 05:49:17.420229 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259ac072-0726-43da-9e00-d086d6ab9458-kube-api-access-27ljd" (OuterVolumeSpecName: "kube-api-access-27ljd") pod "259ac072-0726-43da-9e00-d086d6ab9458" (UID: "259ac072-0726-43da-9e00-d086d6ab9458"). InnerVolumeSpecName "kube-api-access-27ljd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 05:49:17.420404 kubelet[2749]: I0912 05:49:17.420344 2749 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/259ac072-0726-43da-9e00-d086d6ab9458-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "259ac072-0726-43da-9e00-d086d6ab9458" (UID: "259ac072-0726-43da-9e00-d086d6ab9458"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 05:49:17.514229 kubelet[2749]: I0912 05:49:17.514175 2749 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259ac072-0726-43da-9e00-d086d6ab9458-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 05:49:17.514229 kubelet[2749]: I0912 05:49:17.514228 2749 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/259ac072-0726-43da-9e00-d086d6ab9458-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 05:49:17.514229 kubelet[2749]: I0912 05:49:17.514237 2749 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27ljd\" (UniqueName: \"kubernetes.io/projected/259ac072-0726-43da-9e00-d086d6ab9458-kube-api-access-27ljd\") on node \"localhost\" DevicePath \"\"" Sep 12 05:49:17.763691 systemd[1]: Removed slice kubepods-besteffort-pod259ac072_0726_43da_9e00_d086d6ab9458.slice - libcontainer container kubepods-besteffort-pod259ac072_0726_43da_9e00_d086d6ab9458.slice. 
Sep 12 05:49:17.864080 kubelet[2749]: I0912 05:49:17.863715 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-959q9" podStartSLOduration=1.6683084099999999 podStartE2EDuration="22.863696581s" podCreationTimestamp="2025-09-12 05:48:55 +0000 UTC" firstStartedPulling="2025-09-12 05:48:55.513252161 +0000 UTC m=+21.104342591" lastFinishedPulling="2025-09-12 05:49:16.708640332 +0000 UTC m=+42.299730762" observedRunningTime="2025-09-12 05:49:17.862021438 +0000 UTC m=+43.453111878" watchObservedRunningTime="2025-09-12 05:49:17.863696581 +0000 UTC m=+43.454787011" Sep 12 05:49:17.886786 containerd[1592]: time="2025-09-12T05:49:17.886719249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e4fd58835300cda4dfa3037608373b6ea33562bc9c0f008b37092ca0452d9db\" id:\"9a55cf4c1d9b48f4e35c2cd0a4bc6540f01a1b8afb23be596b29f8251ffa7382\" pid:3909 exit_status:1 exited_at:{seconds:1757656157 nanos:885958374}" Sep 12 05:49:17.946241 systemd[1]: Created slice kubepods-besteffort-pod87d63d34_1cf1_4783_8302_989b44815768.slice - libcontainer container kubepods-besteffort-pod87d63d34_1cf1_4783_8302_989b44815768.slice. 
Sep 12 05:49:18.018560 kubelet[2749]: I0912 05:49:18.018379 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87d63d34-1cf1-4783-8302-989b44815768-whisker-ca-bundle\") pod \"whisker-79f64f8d9c-b85wz\" (UID: \"87d63d34-1cf1-4783-8302-989b44815768\") " pod="calico-system/whisker-79f64f8d9c-b85wz" Sep 12 05:49:18.018560 kubelet[2749]: I0912 05:49:18.018430 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87d63d34-1cf1-4783-8302-989b44815768-whisker-backend-key-pair\") pod \"whisker-79f64f8d9c-b85wz\" (UID: \"87d63d34-1cf1-4783-8302-989b44815768\") " pod="calico-system/whisker-79f64f8d9c-b85wz" Sep 12 05:49:18.018560 kubelet[2749]: I0912 05:49:18.018450 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpgk9\" (UniqueName: \"kubernetes.io/projected/87d63d34-1cf1-4783-8302-989b44815768-kube-api-access-qpgk9\") pod \"whisker-79f64f8d9c-b85wz\" (UID: \"87d63d34-1cf1-4783-8302-989b44815768\") " pod="calico-system/whisker-79f64f8d9c-b85wz" Sep 12 05:49:18.250453 containerd[1592]: time="2025-09-12T05:49:18.250380850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79f64f8d9c-b85wz,Uid:87d63d34-1cf1-4783-8302-989b44815768,Namespace:calico-system,Attempt:0,}" Sep 12 05:49:18.514651 kubelet[2749]: E0912 05:49:18.514562 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:18.515617 containerd[1592]: time="2025-09-12T05:49:18.515484607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-277rx,Uid:d32fafcb-d2ec-415d-8d25-a8f4e903e286,Namespace:kube-system,Attempt:0,}" Sep 12 05:49:18.517211 kubelet[2749]: 
I0912 05:49:18.517187 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259ac072-0726-43da-9e00-d086d6ab9458" path="/var/lib/kubelet/pods/259ac072-0726-43da-9e00-d086d6ab9458/volumes" Sep 12 05:49:18.784491 systemd[1]: Started sshd@7-10.0.0.17:22-10.0.0.1:45874.service - OpenSSH per-connection server daemon (10.0.0.1:45874). Sep 12 05:49:18.855295 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 45874 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:18.857201 sshd-session[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:18.868041 systemd-logind[1577]: New session 8 of user core. Sep 12 05:49:18.878203 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 05:49:18.883208 containerd[1592]: time="2025-09-12T05:49:18.883142881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e4fd58835300cda4dfa3037608373b6ea33562bc9c0f008b37092ca0452d9db\" id:\"c7d05dab92a9d4c432ca579be83d9b8340f92aeca047aa7ad9318a0cb3dbf823\" pid:3986 exit_status:1 exited_at:{seconds:1757656158 nanos:881161289}" Sep 12 05:49:18.939865 systemd-networkd[1489]: cali4678b927345: Link UP Sep 12 05:49:18.940534 systemd-networkd[1489]: cali4678b927345: Gained carrier Sep 12 05:49:18.954811 containerd[1592]: 2025-09-12 05:49:18.763 [INFO][3953] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 05:49:18.954811 containerd[1592]: 2025-09-12 05:49:18.823 [INFO][3953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--277rx-eth0 coredns-674b8bbfcf- kube-system d32fafcb-d2ec-415d-8d25-a8f4e903e286 885 0 2025-09-12 05:48:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-277rx eth0 coredns 
[] [] [kns.kube-system ksa.kube-system.coredns] cali4678b927345 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-277rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--277rx-" Sep 12 05:49:18.954811 containerd[1592]: 2025-09-12 05:49:18.823 [INFO][3953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-277rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--277rx-eth0" Sep 12 05:49:18.954811 containerd[1592]: 2025-09-12 05:49:18.886 [INFO][4003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" HandleID="k8s-pod-network.12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Workload="localhost-k8s-coredns--674b8bbfcf--277rx-eth0" Sep 12 05:49:18.955421 containerd[1592]: 2025-09-12 05:49:18.887 [INFO][4003] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" HandleID="k8s-pod-network.12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Workload="localhost-k8s-coredns--674b8bbfcf--277rx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ac180), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-277rx", "timestamp":"2025-09-12 05:49:18.886246156 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:49:18.955421 containerd[1592]: 2025-09-12 05:49:18.887 [INFO][4003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 05:49:18.955421 containerd[1592]: 2025-09-12 05:49:18.888 [INFO][4003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:49:18.955421 containerd[1592]: 2025-09-12 05:49:18.888 [INFO][4003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:49:18.955421 containerd[1592]: 2025-09-12 05:49:18.895 [INFO][4003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" host="localhost" Sep 12 05:49:18.955421 containerd[1592]: 2025-09-12 05:49:18.901 [INFO][4003] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:49:18.955421 containerd[1592]: 2025-09-12 05:49:18.906 [INFO][4003] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:49:18.955421 containerd[1592]: 2025-09-12 05:49:18.908 [INFO][4003] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:18.955421 containerd[1592]: 2025-09-12 05:49:18.910 [INFO][4003] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:18.955421 containerd[1592]: 2025-09-12 05:49:18.910 [INFO][4003] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" host="localhost" Sep 12 05:49:18.955798 containerd[1592]: 2025-09-12 05:49:18.911 [INFO][4003] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec Sep 12 05:49:18.955798 containerd[1592]: 2025-09-12 05:49:18.915 [INFO][4003] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" host="localhost" Sep 12 05:49:18.955798 containerd[1592]: 2025-09-12 05:49:18.925 [INFO][4003] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" host="localhost" Sep 12 05:49:18.955798 containerd[1592]: 2025-09-12 05:49:18.925 [INFO][4003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" host="localhost" Sep 12 05:49:18.955798 containerd[1592]: 2025-09-12 05:49:18.925 [INFO][4003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:49:18.955798 containerd[1592]: 2025-09-12 05:49:18.925 [INFO][4003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" HandleID="k8s-pod-network.12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Workload="localhost-k8s-coredns--674b8bbfcf--277rx-eth0" Sep 12 05:49:18.956116 containerd[1592]: 2025-09-12 05:49:18.929 [INFO][3953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-277rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--277rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--277rx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d32fafcb-d2ec-415d-8d25-a8f4e903e286", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-277rx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4678b927345", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:18.956217 containerd[1592]: 2025-09-12 05:49:18.929 [INFO][3953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-277rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--277rx-eth0" Sep 12 05:49:18.956217 containerd[1592]: 2025-09-12 05:49:18.929 [INFO][3953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4678b927345 ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-277rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--277rx-eth0" Sep 12 05:49:18.956217 containerd[1592]: 2025-09-12 05:49:18.940 [INFO][3953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-277rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--277rx-eth0" Sep 12 05:49:18.956311 containerd[1592]: 2025-09-12 05:49:18.940 [INFO][3953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-277rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--277rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--277rx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d32fafcb-d2ec-415d-8d25-a8f4e903e286", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec", Pod:"coredns-674b8bbfcf-277rx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4678b927345", MAC:"46:7e:40:f2:ba:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:18.956311 containerd[1592]: 2025-09-12 05:49:18.950 [INFO][3953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-277rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--277rx-eth0" Sep 12 05:49:19.057931 systemd-networkd[1489]: calibfc48ac8be9: Link UP Sep 12 05:49:19.065363 systemd-networkd[1489]: calibfc48ac8be9: Gained carrier Sep 12 05:49:19.069788 sshd[4011]: Connection closed by 10.0.0.1 port 45874 Sep 12 05:49:19.073082 sshd-session[3992]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:19.079512 systemd[1]: sshd@7-10.0.0.17:22-10.0.0.1:45874.service: Deactivated successfully. Sep 12 05:49:19.086410 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Sep 12 05:49:19.088132 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 05:49:19.094793 systemd-logind[1577]: Removed session 8. 
Sep 12 05:49:19.107368 containerd[1592]: time="2025-09-12T05:49:19.106920630Z" level=info msg="connecting to shim 12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec" address="unix:///run/containerd/s/22bf722fdf27556ec3b5bb409b670e7c1557035723c3d804226570eafbb2c7cb" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:18.372 [INFO][3935] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:18.443 [INFO][3935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79f64f8d9c--b85wz-eth0 whisker-79f64f8d9c- calico-system 87d63d34-1cf1-4783-8302-989b44815768 964 0 2025-09-12 05:49:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79f64f8d9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79f64f8d9c-b85wz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibfc48ac8be9 [] [] }} ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" Namespace="calico-system" Pod="whisker-79f64f8d9c-b85wz" WorkloadEndpoint="localhost-k8s-whisker--79f64f8d9c--b85wz-" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:18.443 [INFO][3935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" Namespace="calico-system" Pod="whisker-79f64f8d9c-b85wz" WorkloadEndpoint="localhost-k8s-whisker--79f64f8d9c--b85wz-eth0" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:18.887 [INFO][3948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" HandleID="k8s-pod-network.3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" 
Workload="localhost-k8s-whisker--79f64f8d9c--b85wz-eth0" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:18.888 [INFO][3948] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" HandleID="k8s-pod-network.3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" Workload="localhost-k8s-whisker--79f64f8d9c--b85wz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004cca10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79f64f8d9c-b85wz", "timestamp":"2025-09-12 05:49:18.887213174 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:18.888 [INFO][3948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:18.925 [INFO][3948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:18.926 [INFO][3948] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:18.996 [INFO][3948] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" host="localhost" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.003 [INFO][3948] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.008 [INFO][3948] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.011 [INFO][3948] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.013 [INFO][3948] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.013 [INFO][3948] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" host="localhost" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.016 [INFO][3948] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5 Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.021 [INFO][3948] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" host="localhost" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.028 [INFO][3948] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" host="localhost" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.028 [INFO][3948] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" host="localhost" Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.028 [INFO][3948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:49:19.119723 containerd[1592]: 2025-09-12 05:49:19.028 [INFO][3948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" HandleID="k8s-pod-network.3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" Workload="localhost-k8s-whisker--79f64f8d9c--b85wz-eth0" Sep 12 05:49:19.120367 containerd[1592]: 2025-09-12 05:49:19.040 [INFO][3935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" Namespace="calico-system" Pod="whisker-79f64f8d9c-b85wz" WorkloadEndpoint="localhost-k8s-whisker--79f64f8d9c--b85wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79f64f8d9c--b85wz-eth0", GenerateName:"whisker-79f64f8d9c-", Namespace:"calico-system", SelfLink:"", UID:"87d63d34-1cf1-4783-8302-989b44815768", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79f64f8d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79f64f8d9c-b85wz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibfc48ac8be9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:19.120367 containerd[1592]: 2025-09-12 05:49:19.040 [INFO][3935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" Namespace="calico-system" Pod="whisker-79f64f8d9c-b85wz" WorkloadEndpoint="localhost-k8s-whisker--79f64f8d9c--b85wz-eth0" Sep 12 05:49:19.120367 containerd[1592]: 2025-09-12 05:49:19.040 [INFO][3935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfc48ac8be9 ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" Namespace="calico-system" Pod="whisker-79f64f8d9c-b85wz" WorkloadEndpoint="localhost-k8s-whisker--79f64f8d9c--b85wz-eth0" Sep 12 05:49:19.120367 containerd[1592]: 2025-09-12 05:49:19.065 [INFO][3935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" Namespace="calico-system" Pod="whisker-79f64f8d9c-b85wz" WorkloadEndpoint="localhost-k8s-whisker--79f64f8d9c--b85wz-eth0" Sep 12 05:49:19.120367 containerd[1592]: 2025-09-12 05:49:19.072 [INFO][3935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" Namespace="calico-system" Pod="whisker-79f64f8d9c-b85wz" 
WorkloadEndpoint="localhost-k8s-whisker--79f64f8d9c--b85wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79f64f8d9c--b85wz-eth0", GenerateName:"whisker-79f64f8d9c-", Namespace:"calico-system", SelfLink:"", UID:"87d63d34-1cf1-4783-8302-989b44815768", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79f64f8d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5", Pod:"whisker-79f64f8d9c-b85wz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibfc48ac8be9", MAC:"f6:6b:ac:5a:3c:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:19.120367 containerd[1592]: 2025-09-12 05:49:19.091 [INFO][3935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" Namespace="calico-system" Pod="whisker-79f64f8d9c-b85wz" WorkloadEndpoint="localhost-k8s-whisker--79f64f8d9c--b85wz-eth0" Sep 12 05:49:19.161073 systemd[1]: Started 
cri-containerd-12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec.scope - libcontainer container 12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec. Sep 12 05:49:19.183278 containerd[1592]: time="2025-09-12T05:49:19.182885655Z" level=info msg="connecting to shim 3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5" address="unix:///run/containerd/s/d1674781080388b113f8bfbaba15c02b0dc93fcc02e9a7ca85f4b6611d253755" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:19.186669 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:49:19.227685 systemd[1]: Started cri-containerd-3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5.scope - libcontainer container 3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5. Sep 12 05:49:19.261331 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:49:19.265009 containerd[1592]: time="2025-09-12T05:49:19.264919277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-277rx,Uid:d32fafcb-d2ec-415d-8d25-a8f4e903e286,Namespace:kube-system,Attempt:0,} returns sandbox id \"12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec\"" Sep 12 05:49:19.270557 kubelet[2749]: E0912 05:49:19.270504 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:19.276054 containerd[1592]: time="2025-09-12T05:49:19.276023589Z" level=info msg="CreateContainer within sandbox \"12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 05:49:19.298943 containerd[1592]: time="2025-09-12T05:49:19.298878697Z" level=info msg="Container 20338e43b2f4d83d9adb1767c6a970c5f1ff09d515f0b47fd4acb6daa352a113: CDI 
devices from CRI Config.CDIDevices: []" Sep 12 05:49:19.304552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount966913731.mount: Deactivated successfully. Sep 12 05:49:19.314630 containerd[1592]: time="2025-09-12T05:49:19.314245614Z" level=info msg="CreateContainer within sandbox \"12080eaf3cd7cfe1bd8464944223b480458555a83ae16c3729cc7920210776ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20338e43b2f4d83d9adb1767c6a970c5f1ff09d515f0b47fd4acb6daa352a113\"" Sep 12 05:49:19.316198 containerd[1592]: time="2025-09-12T05:49:19.315786414Z" level=info msg="StartContainer for \"20338e43b2f4d83d9adb1767c6a970c5f1ff09d515f0b47fd4acb6daa352a113\"" Sep 12 05:49:19.322648 containerd[1592]: time="2025-09-12T05:49:19.322454920Z" level=info msg="connecting to shim 20338e43b2f4d83d9adb1767c6a970c5f1ff09d515f0b47fd4acb6daa352a113" address="unix:///run/containerd/s/22bf722fdf27556ec3b5bb409b670e7c1557035723c3d804226570eafbb2c7cb" protocol=ttrpc version=3 Sep 12 05:49:19.350830 containerd[1592]: time="2025-09-12T05:49:19.350780118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79f64f8d9c-b85wz,Uid:87d63d34-1cf1-4783-8302-989b44815768,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5\"" Sep 12 05:49:19.357115 containerd[1592]: time="2025-09-12T05:49:19.357053202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 05:49:19.366952 systemd[1]: Started cri-containerd-20338e43b2f4d83d9adb1767c6a970c5f1ff09d515f0b47fd4acb6daa352a113.scope - libcontainer container 20338e43b2f4d83d9adb1767c6a970c5f1ff09d515f0b47fd4acb6daa352a113. 
Sep 12 05:49:19.409059 containerd[1592]: time="2025-09-12T05:49:19.409019593Z" level=info msg="StartContainer for \"20338e43b2f4d83d9adb1767c6a970c5f1ff09d515f0b47fd4acb6daa352a113\" returns successfully" Sep 12 05:49:19.658692 systemd-networkd[1489]: vxlan.calico: Link UP Sep 12 05:49:19.658701 systemd-networkd[1489]: vxlan.calico: Gained carrier Sep 12 05:49:19.879268 kubelet[2749]: E0912 05:49:19.879191 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:19.911247 kubelet[2749]: I0912 05:49:19.911054 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-277rx" podStartSLOduration=38.911029555 podStartE2EDuration="38.911029555s" podCreationTimestamp="2025-09-12 05:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:49:19.896080841 +0000 UTC m=+45.487171281" watchObservedRunningTime="2025-09-12 05:49:19.911029555 +0000 UTC m=+45.502119985" Sep 12 05:49:20.514885 containerd[1592]: time="2025-09-12T05:49:20.514604918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f4f4f8f-9g6nc,Uid:83106f83-c82f-4921-81e2-efaf85519998,Namespace:calico-system,Attempt:0,}" Sep 12 05:49:20.514885 containerd[1592]: time="2025-09-12T05:49:20.514775363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74859f4b68-5q8ng,Uid:b8cba886-abb2-433c-8204-68b288d36ff7,Namespace:calico-apiserver,Attempt:0,}" Sep 12 05:49:20.514885 containerd[1592]: time="2025-09-12T05:49:20.514790171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-846w2,Uid:bb443f83-14a6-4513-8886-5ac39261925c,Namespace:calico-system,Attempt:0,}" Sep 12 05:49:20.612717 systemd-networkd[1489]: cali4678b927345: Gained IPv6LL Sep 12 
05:49:20.805263 systemd-networkd[1489]: calibfc48ac8be9: Gained IPv6LL Sep 12 05:49:20.883531 kubelet[2749]: E0912 05:49:20.883482 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:21.022085 systemd-networkd[1489]: calid9700d37b88: Link UP Sep 12 05:49:21.023897 systemd-networkd[1489]: calid9700d37b88: Gained carrier Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.816 [INFO][4369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0 calico-kube-controllers-677f4f4f8f- calico-system 83106f83-c82f-4921-81e2-efaf85519998 887 0 2025-09-12 05:48:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:677f4f4f8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-677f4f4f8f-9g6nc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid9700d37b88 [] [] }} ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Namespace="calico-system" Pod="calico-kube-controllers-677f4f4f8f-9g6nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.816 [INFO][4369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Namespace="calico-system" Pod="calico-kube-controllers-677f4f4f8f-9g6nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.885 [INFO][4398] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" HandleID="k8s-pod-network.0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Workload="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.885 [INFO][4398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" HandleID="k8s-pod-network.0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Workload="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024e8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-677f4f4f8f-9g6nc", "timestamp":"2025-09-12 05:49:20.885734079 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.886 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.886 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.886 [INFO][4398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.895 [INFO][4398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" host="localhost" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.948 [INFO][4398] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.952 [INFO][4398] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.954 [INFO][4398] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.956 [INFO][4398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.956 [INFO][4398] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" host="localhost" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.957 [INFO][4398] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4 Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:20.982 [INFO][4398] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" host="localhost" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:21.015 [INFO][4398] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" host="localhost" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:21.015 [INFO][4398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" host="localhost" Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:21.015 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:49:21.038721 containerd[1592]: 2025-09-12 05:49:21.015 [INFO][4398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" HandleID="k8s-pod-network.0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Workload="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0" Sep 12 05:49:21.039428 containerd[1592]: 2025-09-12 05:49:21.018 [INFO][4369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Namespace="calico-system" Pod="calico-kube-controllers-677f4f4f8f-9g6nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0", GenerateName:"calico-kube-controllers-677f4f4f8f-", Namespace:"calico-system", SelfLink:"", UID:"83106f83-c82f-4921-81e2-efaf85519998", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677f4f4f8f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-677f4f4f8f-9g6nc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9700d37b88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:21.039428 containerd[1592]: 2025-09-12 05:49:21.018 [INFO][4369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Namespace="calico-system" Pod="calico-kube-controllers-677f4f4f8f-9g6nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0" Sep 12 05:49:21.039428 containerd[1592]: 2025-09-12 05:49:21.018 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9700d37b88 ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Namespace="calico-system" Pod="calico-kube-controllers-677f4f4f8f-9g6nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0" Sep 12 05:49:21.039428 containerd[1592]: 2025-09-12 05:49:21.025 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Namespace="calico-system" Pod="calico-kube-controllers-677f4f4f8f-9g6nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0" Sep 12 05:49:21.039428 containerd[1592]: 
2025-09-12 05:49:21.026 [INFO][4369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Namespace="calico-system" Pod="calico-kube-controllers-677f4f4f8f-9g6nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0", GenerateName:"calico-kube-controllers-677f4f4f8f-", Namespace:"calico-system", SelfLink:"", UID:"83106f83-c82f-4921-81e2-efaf85519998", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677f4f4f8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4", Pod:"calico-kube-controllers-677f4f4f8f-9g6nc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid9700d37b88", MAC:"b2:15:12:81:64:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:21.039428 containerd[1592]: 
2025-09-12 05:49:21.034 [INFO][4369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" Namespace="calico-system" Pod="calico-kube-controllers-677f4f4f8f-9g6nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--677f4f4f8f--9g6nc-eth0" Sep 12 05:49:21.076294 containerd[1592]: time="2025-09-12T05:49:21.076035176Z" level=info msg="connecting to shim 0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4" address="unix:///run/containerd/s/33cc04e94b23433f7ed2a8cac5184f8d4216ccbe242f003d8cd0630b359a3758" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:21.081455 systemd-networkd[1489]: calicdb417bcaa6: Link UP Sep 12 05:49:21.083842 systemd-networkd[1489]: calicdb417bcaa6: Gained carrier Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:20.811 [INFO][4386] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0 calico-apiserver-74859f4b68- calico-apiserver b8cba886-abb2-433c-8204-68b288d36ff7 889 0 2025-09-12 05:48:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74859f4b68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74859f4b68-5q8ng eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicdb417bcaa6 [] [] }} ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5q8ng" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:20.812 [INFO][4386] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5q8ng" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:20.890 [INFO][4396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" HandleID="k8s-pod-network.9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Workload="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:20.891 [INFO][4396] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" HandleID="k8s-pod-network.9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Workload="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74859f4b68-5q8ng", "timestamp":"2025-09-12 05:49:20.890870699 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:20.891 [INFO][4396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.015 [INFO][4396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.015 [INFO][4396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.023 [INFO][4396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" host="localhost" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.050 [INFO][4396] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.055 [INFO][4396] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.058 [INFO][4396] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.061 [INFO][4396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.061 [INFO][4396] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" host="localhost" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.062 [INFO][4396] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817 Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.067 [INFO][4396] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" host="localhost" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.071 [INFO][4396] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" host="localhost" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.072 [INFO][4396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" host="localhost" Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.072 [INFO][4396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:49:21.106786 containerd[1592]: 2025-09-12 05:49:21.072 [INFO][4396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" HandleID="k8s-pod-network.9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Workload="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0" Sep 12 05:49:21.107562 containerd[1592]: 2025-09-12 05:49:21.076 [INFO][4386] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5q8ng" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0", GenerateName:"calico-apiserver-74859f4b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8cba886-abb2-433c-8204-68b288d36ff7", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74859f4b68", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74859f4b68-5q8ng", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdb417bcaa6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:21.107562 containerd[1592]: 2025-09-12 05:49:21.077 [INFO][4386] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5q8ng" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0" Sep 12 05:49:21.107562 containerd[1592]: 2025-09-12 05:49:21.077 [INFO][4386] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdb417bcaa6 ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5q8ng" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0" Sep 12 05:49:21.107562 containerd[1592]: 2025-09-12 05:49:21.084 [INFO][4386] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5q8ng" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0" Sep 12 05:49:21.107562 containerd[1592]: 2025-09-12 05:49:21.087 [INFO][4386] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5q8ng" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0", GenerateName:"calico-apiserver-74859f4b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8cba886-abb2-433c-8204-68b288d36ff7", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74859f4b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817", Pod:"calico-apiserver-74859f4b68-5q8ng", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicdb417bcaa6", MAC:"e6:87:07:8b:e3:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:21.107562 containerd[1592]: 2025-09-12 05:49:21.102 [INFO][4386] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5q8ng" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5q8ng-eth0" Sep 12 05:49:21.116738 systemd[1]: Started cri-containerd-0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4.scope - libcontainer container 0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4. Sep 12 05:49:21.137711 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:49:21.180533 containerd[1592]: time="2025-09-12T05:49:21.180182296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f4f4f8f-9g6nc,Uid:83106f83-c82f-4921-81e2-efaf85519998,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4\"" Sep 12 05:49:21.185881 containerd[1592]: time="2025-09-12T05:49:21.185801318Z" level=info msg="connecting to shim 9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817" address="unix:///run/containerd/s/79f6281169920762c67f7573832224301d511124f70772d42b112ccea7d42ff9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:21.202555 systemd-networkd[1489]: cali7a8e3a96f15: Link UP Sep 12 05:49:21.206728 systemd-networkd[1489]: cali7a8e3a96f15: Gained carrier Sep 12 05:49:21.215996 systemd[1]: Started cri-containerd-9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817.scope - libcontainer container 9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817. 
Sep 12 05:49:21.234924 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:20.948 [INFO][4409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--846w2-eth0 goldmane-54d579b49d- calico-system bb443f83-14a6-4513-8886-5ac39261925c 892 0 2025-09-12 05:48:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-846w2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7a8e3a96f15 [] [] }} ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Namespace="calico-system" Pod="goldmane-54d579b49d-846w2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--846w2-" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:20.948 [INFO][4409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Namespace="calico-system" Pod="goldmane-54d579b49d-846w2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--846w2-eth0" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.030 [INFO][4427] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" HandleID="k8s-pod-network.f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Workload="localhost-k8s-goldmane--54d579b49d--846w2-eth0" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.031 [INFO][4427] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" 
HandleID="k8s-pod-network.f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Workload="localhost-k8s-goldmane--54d579b49d--846w2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-846w2", "timestamp":"2025-09-12 05:49:21.030866891 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.031 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.072 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.073 [INFO][4427] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.124 [INFO][4427] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" host="localhost" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.155 [INFO][4427] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.163 [INFO][4427] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.165 [INFO][4427] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.167 [INFO][4427] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.167 
[INFO][4427] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" host="localhost" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.168 [INFO][4427] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7 Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.174 [INFO][4427] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" host="localhost" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.185 [INFO][4427] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" host="localhost" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.187 [INFO][4427] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" host="localhost" Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.187 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 05:49:21.247729 containerd[1592]: 2025-09-12 05:49:21.188 [INFO][4427] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" HandleID="k8s-pod-network.f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Workload="localhost-k8s-goldmane--54d579b49d--846w2-eth0" Sep 12 05:49:21.248964 containerd[1592]: 2025-09-12 05:49:21.198 [INFO][4409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Namespace="calico-system" Pod="goldmane-54d579b49d-846w2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--846w2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--846w2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"bb443f83-14a6-4513-8886-5ac39261925c", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-846w2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a8e3a96f15", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:21.248964 containerd[1592]: 2025-09-12 05:49:21.198 [INFO][4409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Namespace="calico-system" Pod="goldmane-54d579b49d-846w2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--846w2-eth0" Sep 12 05:49:21.248964 containerd[1592]: 2025-09-12 05:49:21.198 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a8e3a96f15 ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Namespace="calico-system" Pod="goldmane-54d579b49d-846w2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--846w2-eth0" Sep 12 05:49:21.248964 containerd[1592]: 2025-09-12 05:49:21.211 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Namespace="calico-system" Pod="goldmane-54d579b49d-846w2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--846w2-eth0" Sep 12 05:49:21.248964 containerd[1592]: 2025-09-12 05:49:21.219 [INFO][4409] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Namespace="calico-system" Pod="goldmane-54d579b49d-846w2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--846w2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--846w2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"bb443f83-14a6-4513-8886-5ac39261925c", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 54, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7", Pod:"goldmane-54d579b49d-846w2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a8e3a96f15", MAC:"ea:6a:89:9c:08:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:21.248964 containerd[1592]: 2025-09-12 05:49:21.235 [INFO][4409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" Namespace="calico-system" Pod="goldmane-54d579b49d-846w2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--846w2-eth0" Sep 12 05:49:21.276549 containerd[1592]: time="2025-09-12T05:49:21.276165809Z" level=info msg="connecting to shim f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7" address="unix:///run/containerd/s/09a82fcbebe7d1abcc1904484a0569316a894b4b1746fe019368a32abd2bd56b" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:21.280549 containerd[1592]: time="2025-09-12T05:49:21.280498076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74859f4b68-5q8ng,Uid:b8cba886-abb2-433c-8204-68b288d36ff7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817\"" Sep 12 05:49:21.310668 systemd[1]: Started cri-containerd-f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7.scope - libcontainer container f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7. Sep 12 05:49:21.325656 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:49:21.355794 containerd[1592]: time="2025-09-12T05:49:21.355673403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-846w2,Uid:bb443f83-14a6-4513-8886-5ac39261925c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7\"" Sep 12 05:49:21.508756 systemd-networkd[1489]: vxlan.calico: Gained IPv6LL Sep 12 05:49:21.514047 containerd[1592]: time="2025-09-12T05:49:21.513990337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74859f4b68-5566g,Uid:41b672d4-ddc5-4ee1-a2ad-72eb60b23a61,Namespace:calico-apiserver,Attempt:0,}" Sep 12 05:49:21.605943 systemd-networkd[1489]: cali27e03f4e5dd: Link UP Sep 12 05:49:21.606736 systemd-networkd[1489]: cali27e03f4e5dd: Gained carrier Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.547 [INFO][4602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0 calico-apiserver-74859f4b68- calico-apiserver 41b672d4-ddc5-4ee1-a2ad-72eb60b23a61 891 0 2025-09-12 05:48:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74859f4b68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-74859f4b68-5566g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
cali27e03f4e5dd [] [] }} ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5566g" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5566g-" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.547 [INFO][4602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5566g" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.571 [INFO][4616] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" HandleID="k8s-pod-network.679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Workload="localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.572 [INFO][4616] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" HandleID="k8s-pod-network.679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Workload="localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-74859f4b68-5566g", "timestamp":"2025-09-12 05:49:21.571911893 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.572 [INFO][4616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.572 [INFO][4616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.572 [INFO][4616] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.578 [INFO][4616] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" host="localhost" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.582 [INFO][4616] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.585 [INFO][4616] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.587 [INFO][4616] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.589 [INFO][4616] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.589 [INFO][4616] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" host="localhost" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.590 [INFO][4616] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5 Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.596 [INFO][4616] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" host="localhost" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.600 [INFO][4616] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" host="localhost" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.600 [INFO][4616] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" host="localhost" Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.600 [INFO][4616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:49:21.618219 containerd[1592]: 2025-09-12 05:49:21.600 [INFO][4616] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" HandleID="k8s-pod-network.679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Workload="localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0" Sep 12 05:49:21.619066 containerd[1592]: 2025-09-12 05:49:21.603 [INFO][4602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5566g" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0", GenerateName:"calico-apiserver-74859f4b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"41b672d4-ddc5-4ee1-a2ad-72eb60b23a61", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"74859f4b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-74859f4b68-5566g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27e03f4e5dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:21.619066 containerd[1592]: 2025-09-12 05:49:21.603 [INFO][4602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5566g" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0" Sep 12 05:49:21.619066 containerd[1592]: 2025-09-12 05:49:21.603 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27e03f4e5dd ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5566g" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0" Sep 12 05:49:21.619066 containerd[1592]: 2025-09-12 05:49:21.607 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5566g" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0" Sep 12 05:49:21.619066 
containerd[1592]: 2025-09-12 05:49:21.607 [INFO][4602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5566g" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0", GenerateName:"calico-apiserver-74859f4b68-", Namespace:"calico-apiserver", SelfLink:"", UID:"41b672d4-ddc5-4ee1-a2ad-72eb60b23a61", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74859f4b68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5", Pod:"calico-apiserver-74859f4b68-5566g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27e03f4e5dd", MAC:"12:ec:17:31:b1:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:21.619066 containerd[1592]: 2025-09-12 
05:49:21.615 [INFO][4602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" Namespace="calico-apiserver" Pod="calico-apiserver-74859f4b68-5566g" WorkloadEndpoint="localhost-k8s-calico--apiserver--74859f4b68--5566g-eth0" Sep 12 05:49:21.642558 containerd[1592]: time="2025-09-12T05:49:21.642469342Z" level=info msg="connecting to shim 679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5" address="unix:///run/containerd/s/b65ab8a669739e42c8a77cf1c8fd1504045ad126163af4fa1caa0049d2fad4ee" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:21.676685 systemd[1]: Started cri-containerd-679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5.scope - libcontainer container 679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5. Sep 12 05:49:21.690226 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:49:21.720077 containerd[1592]: time="2025-09-12T05:49:21.720016063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74859f4b68-5566g,Uid:41b672d4-ddc5-4ee1-a2ad-72eb60b23a61,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5\"" Sep 12 05:49:21.900149 kubelet[2749]: E0912 05:49:21.900009 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:22.340729 systemd-networkd[1489]: calid9700d37b88: Gained IPv6LL Sep 12 05:49:22.424420 containerd[1592]: time="2025-09-12T05:49:22.424275826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:22.464451 containerd[1592]: time="2025-09-12T05:49:22.464324691Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 05:49:22.514162 kubelet[2749]: E0912 05:49:22.514118 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:22.514833 containerd[1592]: time="2025-09-12T05:49:22.514785283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jhdrh,Uid:81a65f0a-bba2-43b0-970d-51e842e79f55,Namespace:kube-system,Attempt:0,}" Sep 12 05:49:22.515016 containerd[1592]: time="2025-09-12T05:49:22.514953444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tc6l,Uid:87ee9e6e-7669-4a36-a669-9a05a8ff4705,Namespace:calico-system,Attempt:0,}" Sep 12 05:49:22.552045 containerd[1592]: time="2025-09-12T05:49:22.551997418Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:22.795641 containerd[1592]: time="2025-09-12T05:49:22.795551243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:22.796505 containerd[1592]: time="2025-09-12T05:49:22.796455750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.439257539s" Sep 12 05:49:22.796505 containerd[1592]: time="2025-09-12T05:49:22.796498860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference 
\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 05:49:22.797700 containerd[1592]: time="2025-09-12T05:49:22.797491711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 05:49:22.921974 containerd[1592]: time="2025-09-12T05:49:22.921886042Z" level=info msg="CreateContainer within sandbox \"3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 05:49:23.108835 systemd-networkd[1489]: calicdb417bcaa6: Gained IPv6LL Sep 12 05:49:23.225316 containerd[1592]: time="2025-09-12T05:49:23.225266421Z" level=info msg="Container 6bebc9103d813e8e65b17194e52a573e55a9abf0892d40ed8c2a39bc3cba7aca: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:23.238479 systemd-networkd[1489]: cali7a8e3a96f15: Gained IPv6LL Sep 12 05:49:23.241158 systemd-networkd[1489]: califad7e90f3d5: Link UP Sep 12 05:49:23.241382 systemd-networkd[1489]: califad7e90f3d5: Gained carrier Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:22.922 [INFO][4683] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0 coredns-674b8bbfcf- kube-system 81a65f0a-bba2-43b0-970d-51e842e79f55 888 0 2025-09-12 05:48:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-jhdrh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califad7e90f3d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Namespace="kube-system" Pod="coredns-674b8bbfcf-jhdrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jhdrh-" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:22.922 [INFO][4683] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Namespace="kube-system" Pod="coredns-674b8bbfcf-jhdrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:22.962 [INFO][4710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" HandleID="k8s-pod-network.2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Workload="localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:22.962 [INFO][4710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" HandleID="k8s-pod-network.2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Workload="localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aeaa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-jhdrh", "timestamp":"2025-09-12 05:49:22.96203386 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:22.962 [INFO][4710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:22.962 [INFO][4710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:22.962 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:22.970 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" host="localhost" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:22.975 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:22.979 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:23.076 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:23.079 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:23.079 [INFO][4710] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" host="localhost" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:23.080 [INFO][4710] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63 Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:23.105 [INFO][4710] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" host="localhost" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:23.233 [INFO][4710] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" host="localhost" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:23.233 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" host="localhost" Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:23.233 [INFO][4710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 05:49:23.304747 containerd[1592]: 2025-09-12 05:49:23.233 [INFO][4710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" HandleID="k8s-pod-network.2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Workload="localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0" Sep 12 05:49:23.305368 containerd[1592]: 2025-09-12 05:49:23.235 [INFO][4683] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Namespace="kube-system" Pod="coredns-674b8bbfcf-jhdrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"81a65f0a-bba2-43b0-970d-51e842e79f55", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-jhdrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califad7e90f3d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:23.305368 containerd[1592]: 2025-09-12 05:49:23.235 [INFO][4683] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Namespace="kube-system" Pod="coredns-674b8bbfcf-jhdrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0" Sep 12 05:49:23.305368 containerd[1592]: 2025-09-12 05:49:23.235 [INFO][4683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califad7e90f3d5 ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Namespace="kube-system" Pod="coredns-674b8bbfcf-jhdrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0" Sep 12 05:49:23.305368 containerd[1592]: 2025-09-12 05:49:23.242 [INFO][4683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Namespace="kube-system" Pod="coredns-674b8bbfcf-jhdrh" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0" Sep 12 05:49:23.305368 containerd[1592]: 2025-09-12 05:49:23.242 [INFO][4683] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Namespace="kube-system" Pod="coredns-674b8bbfcf-jhdrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"81a65f0a-bba2-43b0-970d-51e842e79f55", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63", Pod:"coredns-674b8bbfcf-jhdrh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califad7e90f3d5", MAC:"7a:f3:9c:a9:ee:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:23.305368 containerd[1592]: 2025-09-12 05:49:23.301 [INFO][4683] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" Namespace="kube-system" Pod="coredns-674b8bbfcf-jhdrh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jhdrh-eth0" Sep 12 05:49:23.456707 containerd[1592]: time="2025-09-12T05:49:23.456557501Z" level=info msg="CreateContainer within sandbox \"3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6bebc9103d813e8e65b17194e52a573e55a9abf0892d40ed8c2a39bc3cba7aca\"" Sep 12 05:49:23.457615 containerd[1592]: time="2025-09-12T05:49:23.457583043Z" level=info msg="StartContainer for \"6bebc9103d813e8e65b17194e52a573e55a9abf0892d40ed8c2a39bc3cba7aca\"" Sep 12 05:49:23.459612 containerd[1592]: time="2025-09-12T05:49:23.459581122Z" level=info msg="connecting to shim 6bebc9103d813e8e65b17194e52a573e55a9abf0892d40ed8c2a39bc3cba7aca" address="unix:///run/containerd/s/d1674781080388b113f8bfbaba15c02b0dc93fcc02e9a7ca85f4b6611d253755" protocol=ttrpc version=3 Sep 12 05:49:23.468070 systemd-networkd[1489]: cali7cfd384a512: Link UP Sep 12 05:49:23.468993 systemd-networkd[1489]: cali7cfd384a512: Gained carrier Sep 12 05:49:23.494697 systemd[1]: Started cri-containerd-6bebc9103d813e8e65b17194e52a573e55a9abf0892d40ed8c2a39bc3cba7aca.scope - libcontainer container 6bebc9103d813e8e65b17194e52a573e55a9abf0892d40ed8c2a39bc3cba7aca. 
Sep 12 05:49:23.557795 systemd-networkd[1489]: cali27e03f4e5dd: Gained IPv6LL Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:22.955 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6tc6l-eth0 csi-node-driver- calico-system 87ee9e6e-7669-4a36-a669-9a05a8ff4705 772 0 2025-09-12 05:48:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6tc6l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7cfd384a512 [] [] }} ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Namespace="calico-system" Pod="csi-node-driver-6tc6l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tc6l-" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:22.955 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Namespace="calico-system" Pod="csi-node-driver-6tc6l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tc6l-eth0" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:22.989 [INFO][4720] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" HandleID="k8s-pod-network.e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Workload="localhost-k8s-csi--node--driver--6tc6l-eth0" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:22.989 [INFO][4720] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" 
HandleID="k8s-pod-network.e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Workload="localhost-k8s-csi--node--driver--6tc6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6tc6l", "timestamp":"2025-09-12 05:49:22.989675177 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:22.989 [INFO][4720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.233 [INFO][4720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.233 [INFO][4720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.239 [INFO][4720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" host="localhost" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.305 [INFO][4720] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.311 [INFO][4720] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.312 [INFO][4720] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.315 [INFO][4720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.315 
[INFO][4720] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" host="localhost" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.316 [INFO][4720] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877 Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.388 [INFO][4720] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" host="localhost" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.456 [INFO][4720] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" host="localhost" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.456 [INFO][4720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" host="localhost" Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.457 [INFO][4720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 05:49:23.558874 containerd[1592]: 2025-09-12 05:49:23.457 [INFO][4720] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" HandleID="k8s-pod-network.e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Workload="localhost-k8s-csi--node--driver--6tc6l-eth0" Sep 12 05:49:23.559379 containerd[1592]: 2025-09-12 05:49:23.461 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Namespace="calico-system" Pod="csi-node-driver-6tc6l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tc6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6tc6l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"87ee9e6e-7669-4a36-a669-9a05a8ff4705", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6tc6l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7cfd384a512", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:23.559379 containerd[1592]: 2025-09-12 05:49:23.461 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Namespace="calico-system" Pod="csi-node-driver-6tc6l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tc6l-eth0" Sep 12 05:49:23.559379 containerd[1592]: 2025-09-12 05:49:23.461 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cfd384a512 ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Namespace="calico-system" Pod="csi-node-driver-6tc6l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tc6l-eth0" Sep 12 05:49:23.559379 containerd[1592]: 2025-09-12 05:49:23.469 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Namespace="calico-system" Pod="csi-node-driver-6tc6l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tc6l-eth0" Sep 12 05:49:23.559379 containerd[1592]: 2025-09-12 05:49:23.470 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Namespace="calico-system" Pod="csi-node-driver-6tc6l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tc6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6tc6l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"87ee9e6e-7669-4a36-a669-9a05a8ff4705", ResourceVersion:"772", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 12, 5, 48, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877", Pod:"csi-node-driver-6tc6l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7cfd384a512", MAC:"e6:92:05:17:00:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 05:49:23.559379 containerd[1592]: 2025-09-12 05:49:23.554 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" Namespace="calico-system" Pod="csi-node-driver-6tc6l" WorkloadEndpoint="localhost-k8s-csi--node--driver--6tc6l-eth0" Sep 12 05:49:23.576269 containerd[1592]: time="2025-09-12T05:49:23.576212050Z" level=info msg="StartContainer for \"6bebc9103d813e8e65b17194e52a573e55a9abf0892d40ed8c2a39bc3cba7aca\" returns successfully" Sep 12 05:49:23.607459 containerd[1592]: time="2025-09-12T05:49:23.607238885Z" level=info msg="connecting to shim 2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63" 
address="unix:///run/containerd/s/82fa3988eb0ac956df12d141fccda2e372717da0c47243714b491fcf8273d75b" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:23.629765 containerd[1592]: time="2025-09-12T05:49:23.629709263Z" level=info msg="connecting to shim e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877" address="unix:///run/containerd/s/40d483ad052e41280394010d8e4bf71a898f48d208ed7db52adbf5609a873691" namespace=k8s.io protocol=ttrpc version=3 Sep 12 05:49:23.646220 systemd[1]: Started cri-containerd-2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63.scope - libcontainer container 2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63. Sep 12 05:49:23.662989 systemd[1]: Started cri-containerd-e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877.scope - libcontainer container e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877. Sep 12 05:49:23.676840 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:49:23.691181 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 05:49:23.720510 containerd[1592]: time="2025-09-12T05:49:23.720340835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6tc6l,Uid:87ee9e6e-7669-4a36-a669-9a05a8ff4705,Namespace:calico-system,Attempt:0,} returns sandbox id \"e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877\"" Sep 12 05:49:23.721196 containerd[1592]: time="2025-09-12T05:49:23.721161318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jhdrh,Uid:81a65f0a-bba2-43b0-970d-51e842e79f55,Namespace:kube-system,Attempt:0,} returns sandbox id \"2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63\"" Sep 12 05:49:23.722382 kubelet[2749]: E0912 05:49:23.721929 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:23.730220 containerd[1592]: time="2025-09-12T05:49:23.730167470Z" level=info msg="CreateContainer within sandbox \"2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 05:49:23.751557 containerd[1592]: time="2025-09-12T05:49:23.751421821Z" level=info msg="Container 86b2e0a39f1286352f1cd05cf3dbeb059aef9f90d4686cb6ffc37b16a9dbda20: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:23.763166 containerd[1592]: time="2025-09-12T05:49:23.763109870Z" level=info msg="CreateContainer within sandbox \"2eee259015ffb55f2faadb7d957dea05bd6da1adf97cc1005e1dbe1ec816ac63\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86b2e0a39f1286352f1cd05cf3dbeb059aef9f90d4686cb6ffc37b16a9dbda20\"" Sep 12 05:49:23.764702 containerd[1592]: time="2025-09-12T05:49:23.764004881Z" level=info msg="StartContainer for \"86b2e0a39f1286352f1cd05cf3dbeb059aef9f90d4686cb6ffc37b16a9dbda20\"" Sep 12 05:49:23.765182 containerd[1592]: time="2025-09-12T05:49:23.765157119Z" level=info msg="connecting to shim 86b2e0a39f1286352f1cd05cf3dbeb059aef9f90d4686cb6ffc37b16a9dbda20" address="unix:///run/containerd/s/82fa3988eb0ac956df12d141fccda2e372717da0c47243714b491fcf8273d75b" protocol=ttrpc version=3 Sep 12 05:49:23.790840 systemd[1]: Started cri-containerd-86b2e0a39f1286352f1cd05cf3dbeb059aef9f90d4686cb6ffc37b16a9dbda20.scope - libcontainer container 86b2e0a39f1286352f1cd05cf3dbeb059aef9f90d4686cb6ffc37b16a9dbda20. 
Sep 12 05:49:23.834685 containerd[1592]: time="2025-09-12T05:49:23.834549328Z" level=info msg="StartContainer for \"86b2e0a39f1286352f1cd05cf3dbeb059aef9f90d4686cb6ffc37b16a9dbda20\" returns successfully" Sep 12 05:49:23.906925 kubelet[2749]: E0912 05:49:23.906653 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:23.918725 kubelet[2749]: I0912 05:49:23.918042 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jhdrh" podStartSLOduration=42.918025351 podStartE2EDuration="42.918025351s" podCreationTimestamp="2025-09-12 05:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 05:49:23.91790118 +0000 UTC m=+49.508991610" watchObservedRunningTime="2025-09-12 05:49:23.918025351 +0000 UTC m=+49.509115781" Sep 12 05:49:24.082677 systemd[1]: Started sshd@8-10.0.0.17:22-10.0.0.1:32784.service - OpenSSH per-connection server daemon (10.0.0.1:32784). Sep 12 05:49:24.150679 sshd[4915]: Accepted publickey for core from 10.0.0.1 port 32784 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:24.152615 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:24.157376 systemd-logind[1577]: New session 9 of user core. Sep 12 05:49:24.170890 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 05:49:24.316466 sshd[4918]: Connection closed by 10.0.0.1 port 32784 Sep 12 05:49:24.317267 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:24.321628 systemd[1]: sshd@8-10.0.0.17:22-10.0.0.1:32784.service: Deactivated successfully. Sep 12 05:49:24.324195 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 05:49:24.325049 systemd-logind[1577]: Session 9 logged out. 
Waiting for processes to exit. Sep 12 05:49:24.326472 systemd-logind[1577]: Removed session 9. Sep 12 05:49:24.518050 systemd-networkd[1489]: califad7e90f3d5: Gained IPv6LL Sep 12 05:49:24.580691 systemd-networkd[1489]: cali7cfd384a512: Gained IPv6LL Sep 12 05:49:24.909932 kubelet[2749]: E0912 05:49:24.909818 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:25.913073 kubelet[2749]: E0912 05:49:25.913039 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 05:49:26.368698 containerd[1592]: time="2025-09-12T05:49:26.368628705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:26.369506 containerd[1592]: time="2025-09-12T05:49:26.369468365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 05:49:26.370853 containerd[1592]: time="2025-09-12T05:49:26.370787879Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:26.372839 containerd[1592]: time="2025-09-12T05:49:26.372805129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:26.373337 containerd[1592]: time="2025-09-12T05:49:26.373290321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.575737657s" Sep 12 05:49:26.373337 containerd[1592]: time="2025-09-12T05:49:26.373321730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 05:49:26.374368 containerd[1592]: time="2025-09-12T05:49:26.374337909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 05:49:26.387832 containerd[1592]: time="2025-09-12T05:49:26.387779248Z" level=info msg="CreateContainer within sandbox \"0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 05:49:26.397904 containerd[1592]: time="2025-09-12T05:49:26.397858677Z" level=info msg="Container 45163c23365ccb91515540ea2ed4204255b3030099b3f507e878cfdfb97dbf6b: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:26.407060 containerd[1592]: time="2025-09-12T05:49:26.407016042Z" level=info msg="CreateContainer within sandbox \"0f0d41bbeaad78e3d2cfd2ae79af2ebcfb4c709668e7a93cf05d5f3f465636a4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"45163c23365ccb91515540ea2ed4204255b3030099b3f507e878cfdfb97dbf6b\"" Sep 12 05:49:26.407805 containerd[1592]: time="2025-09-12T05:49:26.407527633Z" level=info msg="StartContainer for \"45163c23365ccb91515540ea2ed4204255b3030099b3f507e878cfdfb97dbf6b\"" Sep 12 05:49:26.408535 containerd[1592]: time="2025-09-12T05:49:26.408482087Z" level=info msg="connecting to shim 45163c23365ccb91515540ea2ed4204255b3030099b3f507e878cfdfb97dbf6b" address="unix:///run/containerd/s/33cc04e94b23433f7ed2a8cac5184f8d4216ccbe242f003d8cd0630b359a3758" protocol=ttrpc version=3 Sep 12 05:49:26.432678 systemd[1]: Started 
cri-containerd-45163c23365ccb91515540ea2ed4204255b3030099b3f507e878cfdfb97dbf6b.scope - libcontainer container 45163c23365ccb91515540ea2ed4204255b3030099b3f507e878cfdfb97dbf6b. Sep 12 05:49:26.482791 containerd[1592]: time="2025-09-12T05:49:26.482688377Z" level=info msg="StartContainer for \"45163c23365ccb91515540ea2ed4204255b3030099b3f507e878cfdfb97dbf6b\" returns successfully" Sep 12 05:49:26.985666 containerd[1592]: time="2025-09-12T05:49:26.985612790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45163c23365ccb91515540ea2ed4204255b3030099b3f507e878cfdfb97dbf6b\" id:\"cda1dbeda73c99945157533de42fbf8154ebe1c97f1410d7a8b9cff134bd0350\" pid:5003 exited_at:{seconds:1757656166 nanos:984849491}" Sep 12 05:49:26.997361 kubelet[2749]: I0912 05:49:26.997280 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-677f4f4f8f-9g6nc" podStartSLOduration=26.808209782 podStartE2EDuration="31.997258561s" podCreationTimestamp="2025-09-12 05:48:55 +0000 UTC" firstStartedPulling="2025-09-12 05:49:21.185127449 +0000 UTC m=+46.776217879" lastFinishedPulling="2025-09-12 05:49:26.374176228 +0000 UTC m=+51.965266658" observedRunningTime="2025-09-12 05:49:26.950654287 +0000 UTC m=+52.541744727" watchObservedRunningTime="2025-09-12 05:49:26.997258561 +0000 UTC m=+52.588348991" Sep 12 05:49:29.329047 systemd[1]: Started sshd@9-10.0.0.17:22-10.0.0.1:32788.service - OpenSSH per-connection server daemon (10.0.0.1:32788). Sep 12 05:49:29.412120 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 32788 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:29.414645 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:29.421030 systemd-logind[1577]: New session 10 of user core. Sep 12 05:49:29.426719 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 12 05:49:29.571398 sshd[5027]: Connection closed by 10.0.0.1 port 32788 Sep 12 05:49:29.572446 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:29.577969 systemd[1]: sshd@9-10.0.0.17:22-10.0.0.1:32788.service: Deactivated successfully. Sep 12 05:49:29.581058 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 05:49:29.582187 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Sep 12 05:49:29.583985 systemd-logind[1577]: Removed session 10. Sep 12 05:49:30.117336 containerd[1592]: time="2025-09-12T05:49:30.117269531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:30.118031 containerd[1592]: time="2025-09-12T05:49:30.117977329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 05:49:30.119551 containerd[1592]: time="2025-09-12T05:49:30.119418524Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:30.122488 containerd[1592]: time="2025-09-12T05:49:30.122432619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:30.127777 containerd[1592]: time="2025-09-12T05:49:30.127716792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.753347355s" Sep 12 05:49:30.127777 containerd[1592]: 
time="2025-09-12T05:49:30.127751417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 05:49:30.128870 containerd[1592]: time="2025-09-12T05:49:30.128802495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 05:49:30.134350 containerd[1592]: time="2025-09-12T05:49:30.134307239Z" level=info msg="CreateContainer within sandbox \"9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 05:49:30.144487 containerd[1592]: time="2025-09-12T05:49:30.144423364Z" level=info msg="Container aeb3bebad5aef7b01f71a84fd3bcea9678454d9b5d6583953d8a18d063272ffe: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:30.154852 containerd[1592]: time="2025-09-12T05:49:30.154792670Z" level=info msg="CreateContainer within sandbox \"9e7bd9bad04a560199ecfab5965f6366f841d492ce7bba0c0579d669a08d1817\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aeb3bebad5aef7b01f71a84fd3bcea9678454d9b5d6583953d8a18d063272ffe\"" Sep 12 05:49:30.155438 containerd[1592]: time="2025-09-12T05:49:30.155392417Z" level=info msg="StartContainer for \"aeb3bebad5aef7b01f71a84fd3bcea9678454d9b5d6583953d8a18d063272ffe\"" Sep 12 05:49:30.157078 containerd[1592]: time="2025-09-12T05:49:30.156993369Z" level=info msg="connecting to shim aeb3bebad5aef7b01f71a84fd3bcea9678454d9b5d6583953d8a18d063272ffe" address="unix:///run/containerd/s/79f6281169920762c67f7573832224301d511124f70772d42b112ccea7d42ff9" protocol=ttrpc version=3 Sep 12 05:49:30.244701 systemd[1]: Started cri-containerd-aeb3bebad5aef7b01f71a84fd3bcea9678454d9b5d6583953d8a18d063272ffe.scope - libcontainer container aeb3bebad5aef7b01f71a84fd3bcea9678454d9b5d6583953d8a18d063272ffe. 
Sep 12 05:49:30.295563 containerd[1592]: time="2025-09-12T05:49:30.295494232Z" level=info msg="StartContainer for \"aeb3bebad5aef7b01f71a84fd3bcea9678454d9b5d6583953d8a18d063272ffe\" returns successfully" Sep 12 05:49:30.954383 kubelet[2749]: I0912 05:49:30.954307 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74859f4b68-5q8ng" podStartSLOduration=30.1079955 podStartE2EDuration="38.954291005s" podCreationTimestamp="2025-09-12 05:48:52 +0000 UTC" firstStartedPulling="2025-09-12 05:49:21.282387848 +0000 UTC m=+46.873478268" lastFinishedPulling="2025-09-12 05:49:30.128683343 +0000 UTC m=+55.719773773" observedRunningTime="2025-09-12 05:49:30.953652325 +0000 UTC m=+56.544742755" watchObservedRunningTime="2025-09-12 05:49:30.954291005 +0000 UTC m=+56.545381425" Sep 12 05:49:33.937071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243087210.mount: Deactivated successfully. Sep 12 05:49:34.588687 systemd[1]: Started sshd@10-10.0.0.17:22-10.0.0.1:46686.service - OpenSSH per-connection server daemon (10.0.0.1:46686). 
Sep 12 05:49:34.910921 containerd[1592]: time="2025-09-12T05:49:34.910862033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:34.914616 containerd[1592]: time="2025-09-12T05:49:34.913275255Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:34.918205 containerd[1592]: time="2025-09-12T05:49:34.915967919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.787128195s" Sep 12 05:49:34.918205 containerd[1592]: time="2025-09-12T05:49:34.915998927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 05:49:34.918205 containerd[1592]: time="2025-09-12T05:49:34.916813176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:34.918205 containerd[1592]: time="2025-09-12T05:49:34.917286088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 05:49:34.930272 containerd[1592]: time="2025-09-12T05:49:34.929944345Z" level=info msg="CreateContainer within sandbox \"f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 05:49:34.936625 containerd[1592]: time="2025-09-12T05:49:34.936557031Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 05:49:34.962206 containerd[1592]: time="2025-09-12T05:49:34.959489977Z" level=info msg="Container ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:34.980324 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 46686 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:34.984022 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:34.992240 systemd-logind[1577]: New session 11 of user core. Sep 12 05:49:34.999674 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 05:49:35.002958 containerd[1592]: time="2025-09-12T05:49:35.002913502Z" level=info msg="CreateContainer within sandbox \"f5c20eff16d0b63d742a208a42bdc94ba9296787d749eba1136f88b19752d5f7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75\"" Sep 12 05:49:35.004559 containerd[1592]: time="2025-09-12T05:49:35.004262510Z" level=info msg="StartContainer for \"ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75\"" Sep 12 05:49:35.005486 containerd[1592]: time="2025-09-12T05:49:35.005448263Z" level=info msg="connecting to shim ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75" address="unix:///run/containerd/s/09a82fcbebe7d1abcc1904484a0569316a894b4b1746fe019368a32abd2bd56b" protocol=ttrpc version=3 Sep 12 05:49:35.037844 systemd[1]: Started cri-containerd-ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75.scope - libcontainer container ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75. 
Sep 12 05:49:35.155260 containerd[1592]: time="2025-09-12T05:49:35.155212124Z" level=info msg="StartContainer for \"ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75\" returns successfully" Sep 12 05:49:35.161302 sshd[5112]: Connection closed by 10.0.0.1 port 46686 Sep 12 05:49:35.162457 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:35.174026 systemd[1]: sshd@10-10.0.0.17:22-10.0.0.1:46686.service: Deactivated successfully. Sep 12 05:49:35.176532 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 05:49:35.177492 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Sep 12 05:49:35.181503 systemd[1]: Started sshd@11-10.0.0.17:22-10.0.0.1:46698.service - OpenSSH per-connection server daemon (10.0.0.1:46698). Sep 12 05:49:35.182745 systemd-logind[1577]: Removed session 11. Sep 12 05:49:35.238783 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 46698 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:35.240364 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:35.244999 systemd-logind[1577]: New session 12 of user core. Sep 12 05:49:35.263666 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 05:49:35.432593 sshd[5162]: Connection closed by 10.0.0.1 port 46698 Sep 12 05:49:35.432946 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:35.446281 systemd[1]: sshd@11-10.0.0.17:22-10.0.0.1:46698.service: Deactivated successfully. Sep 12 05:49:35.448662 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 05:49:35.449580 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Sep 12 05:49:35.452725 systemd[1]: Started sshd@12-10.0.0.17:22-10.0.0.1:46706.service - OpenSSH per-connection server daemon (10.0.0.1:46706). Sep 12 05:49:35.453687 systemd-logind[1577]: Removed session 12. 
Sep 12 05:49:35.516370 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 46706 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk Sep 12 05:49:35.518220 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 05:49:35.523015 systemd-logind[1577]: New session 13 of user core. Sep 12 05:49:35.530144 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 05:49:35.650677 sshd[5176]: Connection closed by 10.0.0.1 port 46706 Sep 12 05:49:35.651070 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Sep 12 05:49:35.655168 systemd[1]: sshd@12-10.0.0.17:22-10.0.0.1:46706.service: Deactivated successfully. Sep 12 05:49:35.657974 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 05:49:35.660174 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Sep 12 05:49:35.662699 systemd-logind[1577]: Removed session 13. Sep 12 05:49:35.710193 containerd[1592]: time="2025-09-12T05:49:35.710015293Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 05:49:35.711546 containerd[1592]: time="2025-09-12T05:49:35.711464367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 05:49:35.713348 containerd[1592]: time="2025-09-12T05:49:35.713286228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 795.976405ms" Sep 12 05:49:35.713348 containerd[1592]: time="2025-09-12T05:49:35.713332464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference 
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 05:49:35.714439 containerd[1592]: time="2025-09-12T05:49:35.714401799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 05:49:35.718643 containerd[1592]: time="2025-09-12T05:49:35.718610564Z" level=info msg="CreateContainer within sandbox \"679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 05:49:35.813123 containerd[1592]: time="2025-09-12T05:49:35.812322226Z" level=info msg="Container 7675d814f64b99ee28e3e487b0926112ad9d3a45b8f4df4a71bd05818a9e726a: CDI devices from CRI Config.CDIDevices: []" Sep 12 05:49:35.823447 containerd[1592]: time="2025-09-12T05:49:35.823393770Z" level=info msg="CreateContainer within sandbox \"679f5a4779d48136457b87ab9afe07f7a2654b3bcaae3d42c13909eff148dbb5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7675d814f64b99ee28e3e487b0926112ad9d3a45b8f4df4a71bd05818a9e726a\"" Sep 12 05:49:35.824045 containerd[1592]: time="2025-09-12T05:49:35.824016672Z" level=info msg="StartContainer for \"7675d814f64b99ee28e3e487b0926112ad9d3a45b8f4df4a71bd05818a9e726a\"" Sep 12 05:49:35.825247 containerd[1592]: time="2025-09-12T05:49:35.825205370Z" level=info msg="connecting to shim 7675d814f64b99ee28e3e487b0926112ad9d3a45b8f4df4a71bd05818a9e726a" address="unix:///run/containerd/s/b65ab8a669739e42c8a77cf1c8fd1504045ad126163af4fa1caa0049d2fad4ee" protocol=ttrpc version=3 Sep 12 05:49:35.851853 systemd[1]: Started cri-containerd-7675d814f64b99ee28e3e487b0926112ad9d3a45b8f4df4a71bd05818a9e726a.scope - libcontainer container 7675d814f64b99ee28e3e487b0926112ad9d3a45b8f4df4a71bd05818a9e726a. 
Sep 12 05:49:35.901616 containerd[1592]: time="2025-09-12T05:49:35.901484999Z" level=info msg="StartContainer for \"7675d814f64b99ee28e3e487b0926112ad9d3a45b8f4df4a71bd05818a9e726a\" returns successfully"
Sep 12 05:49:36.255276 kubelet[2749]: I0912 05:49:36.255086 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74859f4b68-5566g" podStartSLOduration=30.262228108 podStartE2EDuration="44.255064192s" podCreationTimestamp="2025-09-12 05:48:52 +0000 UTC" firstStartedPulling="2025-09-12 05:49:21.721414844 +0000 UTC m=+47.312505274" lastFinishedPulling="2025-09-12 05:49:35.714250928 +0000 UTC m=+61.305341358" observedRunningTime="2025-09-12 05:49:36.09559139 +0000 UTC m=+61.686681830" watchObservedRunningTime="2025-09-12 05:49:36.255064192 +0000 UTC m=+61.846154612"
Sep 12 05:49:36.966188 kubelet[2749]: I0912 05:49:36.966135 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 05:49:37.057845 containerd[1592]: time="2025-09-12T05:49:37.057782144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75\" id:\"d8c8942919a171f9938cb42e8a28d323ee88e75f3c50c14e838a4584fec94301\" pid:5244 exit_status:1 exited_at:{seconds:1757656177 nanos:57289765}"
Sep 12 05:49:38.055905 containerd[1592]: time="2025-09-12T05:49:38.055855554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75\" id:\"9f43e4d9b3d3f8850997c5837cbd763bcd151175912386e63dd51940de6f05e5\" pid:5267 exit_status:1 exited_at:{seconds:1757656178 nanos:55447262}"
Sep 12 05:49:39.756233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount377409916.mount: Deactivated successfully.
Sep 12 05:49:39.778985 containerd[1592]: time="2025-09-12T05:49:39.778909514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:39.779841 containerd[1592]: time="2025-09-12T05:49:39.779811690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 12 05:49:39.780921 containerd[1592]: time="2025-09-12T05:49:39.780894112Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:39.783183 containerd[1592]: time="2025-09-12T05:49:39.783143053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:39.783930 containerd[1592]: time="2025-09-12T05:49:39.783891763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.069451471s"
Sep 12 05:49:39.783974 containerd[1592]: time="2025-09-12T05:49:39.783929002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 12 05:49:39.784917 containerd[1592]: time="2025-09-12T05:49:39.784872465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 12 05:49:39.788741 containerd[1592]: time="2025-09-12T05:49:39.788703332Z" level=info msg="CreateContainer within sandbox \"3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 05:49:39.797731 containerd[1592]: time="2025-09-12T05:49:39.797600666Z" level=info msg="Container b37abfa7c407ac3d90fc2a90770f329965fd105bd1b0828e508bdb969b2ffca4: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:49:39.806736 containerd[1592]: time="2025-09-12T05:49:39.806698253Z" level=info msg="CreateContainer within sandbox \"3f3e33aed3813bf5f7e5d09a8389c9715cb0fe91e251de7e5122bc59e94568d5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b37abfa7c407ac3d90fc2a90770f329965fd105bd1b0828e508bdb969b2ffca4\""
Sep 12 05:49:39.807230 containerd[1592]: time="2025-09-12T05:49:39.807190704Z" level=info msg="StartContainer for \"b37abfa7c407ac3d90fc2a90770f329965fd105bd1b0828e508bdb969b2ffca4\""
Sep 12 05:49:39.808547 containerd[1592]: time="2025-09-12T05:49:39.808495300Z" level=info msg="connecting to shim b37abfa7c407ac3d90fc2a90770f329965fd105bd1b0828e508bdb969b2ffca4" address="unix:///run/containerd/s/d1674781080388b113f8bfbaba15c02b0dc93fcc02e9a7ca85f4b6611d253755" protocol=ttrpc version=3
Sep 12 05:49:39.843047 systemd[1]: Started cri-containerd-b37abfa7c407ac3d90fc2a90770f329965fd105bd1b0828e508bdb969b2ffca4.scope - libcontainer container b37abfa7c407ac3d90fc2a90770f329965fd105bd1b0828e508bdb969b2ffca4.
Sep 12 05:49:39.904803 containerd[1592]: time="2025-09-12T05:49:39.904745679Z" level=info msg="StartContainer for \"b37abfa7c407ac3d90fc2a90770f329965fd105bd1b0828e508bdb969b2ffca4\" returns successfully"
Sep 12 05:49:39.989385 kubelet[2749]: I0912 05:49:39.989303 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79f64f8d9c-b85wz" podStartSLOduration=2.557446403 podStartE2EDuration="22.989279616s" podCreationTimestamp="2025-09-12 05:49:17 +0000 UTC" firstStartedPulling="2025-09-12 05:49:19.352803812 +0000 UTC m=+44.943894242" lastFinishedPulling="2025-09-12 05:49:39.784637025 +0000 UTC m=+65.375727455" observedRunningTime="2025-09-12 05:49:39.988440749 +0000 UTC m=+65.579531199" watchObservedRunningTime="2025-09-12 05:49:39.989279616 +0000 UTC m=+65.580370046"
Sep 12 05:49:39.989964 kubelet[2749]: I0912 05:49:39.989415 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-846w2" podStartSLOduration=32.429758624 podStartE2EDuration="45.989409833s" podCreationTimestamp="2025-09-12 05:48:54 +0000 UTC" firstStartedPulling="2025-09-12 05:49:21.357508013 +0000 UTC m=+46.948598443" lastFinishedPulling="2025-09-12 05:49:34.917159222 +0000 UTC m=+60.508249652" observedRunningTime="2025-09-12 05:49:36.255353101 +0000 UTC m=+61.846443531" watchObservedRunningTime="2025-09-12 05:49:39.989409833 +0000 UTC m=+65.580500273"
Sep 12 05:49:40.666125 systemd[1]: Started sshd@13-10.0.0.17:22-10.0.0.1:37322.service - OpenSSH per-connection server daemon (10.0.0.1:37322).
Sep 12 05:49:40.745638 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 37322 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:49:40.747227 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:49:40.752159 systemd-logind[1577]: New session 14 of user core.
Sep 12 05:49:40.763641 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 05:49:40.946264 sshd[5335]: Connection closed by 10.0.0.1 port 37322
Sep 12 05:49:40.946853 sshd-session[5332]: pam_unix(sshd:session): session closed for user core
Sep 12 05:49:40.953099 systemd[1]: sshd@13-10.0.0.17:22-10.0.0.1:37322.service: Deactivated successfully.
Sep 12 05:49:40.955662 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 05:49:40.956419 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit.
Sep 12 05:49:40.957918 systemd-logind[1577]: Removed session 14.
Sep 12 05:49:41.428755 containerd[1592]: time="2025-09-12T05:49:41.428697628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:41.429451 containerd[1592]: time="2025-09-12T05:49:41.429416626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 12 05:49:41.430565 containerd[1592]: time="2025-09-12T05:49:41.430511295Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:41.432662 containerd[1592]: time="2025-09-12T05:49:41.432629948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:41.433246 containerd[1592]: time="2025-09-12T05:49:41.433210734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.648298402s"
Sep 12 05:49:41.433280 containerd[1592]: time="2025-09-12T05:49:41.433243466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 12 05:49:41.437665 containerd[1592]: time="2025-09-12T05:49:41.437622571Z" level=info msg="CreateContainer within sandbox \"e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 05:49:41.511197 containerd[1592]: time="2025-09-12T05:49:41.511143320Z" level=info msg="Container acfca7865dd2b0a4ea8957288683ab9373239e9dd089035ae71404e599fc7f4a: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:49:41.545373 containerd[1592]: time="2025-09-12T05:49:41.545318784Z" level=info msg="CreateContainer within sandbox \"e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"acfca7865dd2b0a4ea8957288683ab9373239e9dd089035ae71404e599fc7f4a\""
Sep 12 05:49:41.546079 containerd[1592]: time="2025-09-12T05:49:41.545840322Z" level=info msg="StartContainer for \"acfca7865dd2b0a4ea8957288683ab9373239e9dd089035ae71404e599fc7f4a\""
Sep 12 05:49:41.547874 containerd[1592]: time="2025-09-12T05:49:41.547836648Z" level=info msg="connecting to shim acfca7865dd2b0a4ea8957288683ab9373239e9dd089035ae71404e599fc7f4a" address="unix:///run/containerd/s/40d483ad052e41280394010d8e4bf71a898f48d208ed7db52adbf5609a873691" protocol=ttrpc version=3
Sep 12 05:49:41.583671 systemd[1]: Started cri-containerd-acfca7865dd2b0a4ea8957288683ab9373239e9dd089035ae71404e599fc7f4a.scope - libcontainer container acfca7865dd2b0a4ea8957288683ab9373239e9dd089035ae71404e599fc7f4a.
Sep 12 05:49:41.625873 containerd[1592]: time="2025-09-12T05:49:41.625831329Z" level=info msg="StartContainer for \"acfca7865dd2b0a4ea8957288683ab9373239e9dd089035ae71404e599fc7f4a\" returns successfully"
Sep 12 05:49:41.628341 containerd[1592]: time="2025-09-12T05:49:41.628068270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 05:49:43.394281 containerd[1592]: time="2025-09-12T05:49:43.394214509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:43.395080 containerd[1592]: time="2025-09-12T05:49:43.394972169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 05:49:43.396042 containerd[1592]: time="2025-09-12T05:49:43.396001286Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:43.397951 containerd[1592]: time="2025-09-12T05:49:43.397915652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 05:49:43.398500 containerd[1592]: time="2025-09-12T05:49:43.398444818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.770282442s"
Sep 12 05:49:43.398500 containerd[1592]: time="2025-09-12T05:49:43.398489925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 05:49:43.403374 containerd[1592]: time="2025-09-12T05:49:43.403340988Z" level=info msg="CreateContainer within sandbox \"e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 05:49:43.412957 containerd[1592]: time="2025-09-12T05:49:43.412910106Z" level=info msg="Container 1db30d513437174f3234a10650b3aa87f819b7cdfe5011003db839cb37565c45: CDI devices from CRI Config.CDIDevices: []"
Sep 12 05:49:43.423982 containerd[1592]: time="2025-09-12T05:49:43.423940057Z" level=info msg="CreateContainer within sandbox \"e81a9d969baed79a59cad002a9e71202aae8833f576f03043ff771412a354877\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1db30d513437174f3234a10650b3aa87f819b7cdfe5011003db839cb37565c45\""
Sep 12 05:49:43.424456 containerd[1592]: time="2025-09-12T05:49:43.424418103Z" level=info msg="StartContainer for \"1db30d513437174f3234a10650b3aa87f819b7cdfe5011003db839cb37565c45\""
Sep 12 05:49:43.425824 containerd[1592]: time="2025-09-12T05:49:43.425797179Z" level=info msg="connecting to shim 1db30d513437174f3234a10650b3aa87f819b7cdfe5011003db839cb37565c45" address="unix:///run/containerd/s/40d483ad052e41280394010d8e4bf71a898f48d208ed7db52adbf5609a873691" protocol=ttrpc version=3
Sep 12 05:49:43.452829 systemd[1]: Started cri-containerd-1db30d513437174f3234a10650b3aa87f819b7cdfe5011003db839cb37565c45.scope - libcontainer container 1db30d513437174f3234a10650b3aa87f819b7cdfe5011003db839cb37565c45.
Sep 12 05:49:43.732708 containerd[1592]: time="2025-09-12T05:49:43.732654345Z" level=info msg="StartContainer for \"1db30d513437174f3234a10650b3aa87f819b7cdfe5011003db839cb37565c45\" returns successfully"
Sep 12 05:49:43.879088 kubelet[2749]: I0912 05:49:43.879039 2749 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 05:49:43.880339 kubelet[2749]: I0912 05:49:43.880308 2749 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 05:49:45.964063 systemd[1]: Started sshd@14-10.0.0.17:22-10.0.0.1:37324.service - OpenSSH per-connection server daemon (10.0.0.1:37324).
Sep 12 05:49:46.036219 sshd[5427]: Accepted publickey for core from 10.0.0.1 port 37324 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:49:46.038142 sshd-session[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:49:46.043111 systemd-logind[1577]: New session 15 of user core.
Sep 12 05:49:46.047774 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 05:49:46.179338 sshd[5430]: Connection closed by 10.0.0.1 port 37324
Sep 12 05:49:46.179717 sshd-session[5427]: pam_unix(sshd:session): session closed for user core
Sep 12 05:49:46.186230 systemd[1]: sshd@14-10.0.0.17:22-10.0.0.1:37324.service: Deactivated successfully.
Sep 12 05:49:46.188694 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 05:49:46.189609 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit.
Sep 12 05:49:46.192083 systemd-logind[1577]: Removed session 15.
Sep 12 05:49:48.913701 containerd[1592]: time="2025-09-12T05:49:48.913609189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e4fd58835300cda4dfa3037608373b6ea33562bc9c0f008b37092ca0452d9db\" id:\"0cd3f86c5ea146ebcd4dd609bbc7bcf3de06b2edbec881700dfe97b83c040f21\" pid:5454 exited_at:{seconds:1757656188 nanos:913224881}"
Sep 12 05:49:48.937584 kubelet[2749]: I0912 05:49:48.937504 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6tc6l" podStartSLOduration=34.261135971 podStartE2EDuration="53.937486384s" podCreationTimestamp="2025-09-12 05:48:55 +0000 UTC" firstStartedPulling="2025-09-12 05:49:23.723141001 +0000 UTC m=+49.314231431" lastFinishedPulling="2025-09-12 05:49:43.399491414 +0000 UTC m=+68.990581844" observedRunningTime="2025-09-12 05:49:44.08717546 +0000 UTC m=+69.678265910" watchObservedRunningTime="2025-09-12 05:49:48.937486384 +0000 UTC m=+74.528576804"
Sep 12 05:49:49.305110 containerd[1592]: time="2025-09-12T05:49:49.305048241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45163c23365ccb91515540ea2ed4204255b3030099b3f507e878cfdfb97dbf6b\" id:\"5ce1ae46c0d7ec7a95b6f551170c6be2279ff220b20fb129c86d1cc0332636b7\" pid:5479 exited_at:{seconds:1757656189 nanos:304469514}"
Sep 12 05:49:51.198792 systemd[1]: Started sshd@15-10.0.0.17:22-10.0.0.1:35260.service - OpenSSH per-connection server daemon (10.0.0.1:35260).
Sep 12 05:49:51.262709 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 35260 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:49:51.264581 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:49:51.269250 systemd-logind[1577]: New session 16 of user core.
Sep 12 05:49:51.274666 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 05:49:51.393893 sshd[5493]: Connection closed by 10.0.0.1 port 35260
Sep 12 05:49:51.394386 sshd-session[5490]: pam_unix(sshd:session): session closed for user core
Sep 12 05:49:51.399265 systemd[1]: sshd@15-10.0.0.17:22-10.0.0.1:35260.service: Deactivated successfully.
Sep 12 05:49:51.401628 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 05:49:51.402541 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit.
Sep 12 05:49:51.403890 systemd-logind[1577]: Removed session 16.
Sep 12 05:49:53.331105 kubelet[2749]: I0912 05:49:53.331049 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 05:49:56.408237 systemd[1]: Started sshd@16-10.0.0.17:22-10.0.0.1:35272.service - OpenSSH per-connection server daemon (10.0.0.1:35272).
Sep 12 05:49:56.498713 sshd[5508]: Accepted publickey for core from 10.0.0.1 port 35272 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:49:56.501098 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:49:56.510985 systemd-logind[1577]: New session 17 of user core.
Sep 12 05:49:56.514122 kubelet[2749]: E0912 05:49:56.514083 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:49:56.516670 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 05:49:56.692706 sshd[5512]: Connection closed by 10.0.0.1 port 35272
Sep 12 05:49:56.693019 sshd-session[5508]: pam_unix(sshd:session): session closed for user core
Sep 12 05:49:56.706644 systemd[1]: sshd@16-10.0.0.17:22-10.0.0.1:35272.service: Deactivated successfully.
Sep 12 05:49:56.708912 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 05:49:56.709686 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit.
Sep 12 05:49:56.712662 systemd[1]: Started sshd@17-10.0.0.17:22-10.0.0.1:35282.service - OpenSSH per-connection server daemon (10.0.0.1:35282).
Sep 12 05:49:56.713284 systemd-logind[1577]: Removed session 17.
Sep 12 05:49:56.774371 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 35282 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:49:56.776046 sshd-session[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:49:56.781015 systemd-logind[1577]: New session 18 of user core.
Sep 12 05:49:56.793654 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 05:49:56.971268 containerd[1592]: time="2025-09-12T05:49:56.971180826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45163c23365ccb91515540ea2ed4204255b3030099b3f507e878cfdfb97dbf6b\" id:\"076606fb73cdbb56b5a148ae500a13d47f80907ad34a933a390326ca8fe7ad6b\" pid:5547 exited_at:{seconds:1757656196 nanos:970957081}"
Sep 12 05:49:57.153745 sshd[5528]: Connection closed by 10.0.0.1 port 35282
Sep 12 05:49:57.154251 sshd-session[5525]: pam_unix(sshd:session): session closed for user core
Sep 12 05:49:57.164733 systemd[1]: sshd@17-10.0.0.17:22-10.0.0.1:35282.service: Deactivated successfully.
Sep 12 05:49:57.167037 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 05:49:57.168046 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit.
Sep 12 05:49:57.171343 systemd[1]: Started sshd@18-10.0.0.17:22-10.0.0.1:35288.service - OpenSSH per-connection server daemon (10.0.0.1:35288).
Sep 12 05:49:57.172500 systemd-logind[1577]: Removed session 18.
Sep 12 05:49:57.249931 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 35288 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:49:57.251823 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:49:57.257010 systemd-logind[1577]: New session 19 of user core.
Sep 12 05:49:57.265780 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 05:49:57.879801 sshd[5566]: Connection closed by 10.0.0.1 port 35288
Sep 12 05:49:57.881993 sshd-session[5563]: pam_unix(sshd:session): session closed for user core
Sep 12 05:49:57.891401 systemd[1]: sshd@18-10.0.0.17:22-10.0.0.1:35288.service: Deactivated successfully.
Sep 12 05:49:57.894823 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 05:49:57.897207 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit.
Sep 12 05:49:57.900337 systemd-logind[1577]: Removed session 19.
Sep 12 05:49:57.903304 systemd[1]: Started sshd@19-10.0.0.17:22-10.0.0.1:35300.service - OpenSSH per-connection server daemon (10.0.0.1:35300).
Sep 12 05:49:57.966191 sshd[5592]: Accepted publickey for core from 10.0.0.1 port 35300 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:49:57.968006 sshd-session[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:49:57.973013 systemd-logind[1577]: New session 20 of user core.
Sep 12 05:49:57.979661 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 05:49:58.266731 sshd[5595]: Connection closed by 10.0.0.1 port 35300
Sep 12 05:49:58.267357 sshd-session[5592]: pam_unix(sshd:session): session closed for user core
Sep 12 05:49:58.280105 systemd[1]: sshd@19-10.0.0.17:22-10.0.0.1:35300.service: Deactivated successfully.
Sep 12 05:49:58.283138 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 05:49:58.284683 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit.
Sep 12 05:49:58.288583 systemd[1]: Started sshd@20-10.0.0.17:22-10.0.0.1:35316.service - OpenSSH per-connection server daemon (10.0.0.1:35316).
Sep 12 05:49:58.290507 systemd-logind[1577]: Removed session 20.
Sep 12 05:49:58.351256 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 35316 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:49:58.353186 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:49:58.358003 systemd-logind[1577]: New session 21 of user core.
Sep 12 05:49:58.370794 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 05:49:58.484475 sshd[5609]: Connection closed by 10.0.0.1 port 35316
Sep 12 05:49:58.484868 sshd-session[5606]: pam_unix(sshd:session): session closed for user core
Sep 12 05:49:58.490314 systemd[1]: sshd@20-10.0.0.17:22-10.0.0.1:35316.service: Deactivated successfully.
Sep 12 05:49:58.492686 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 05:49:58.493605 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit.
Sep 12 05:49:58.495055 systemd-logind[1577]: Removed session 21.
Sep 12 05:49:58.514490 kubelet[2749]: E0912 05:49:58.514451 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:50:02.514649 kubelet[2749]: E0912 05:50:02.514606 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:50:03.499954 systemd[1]: Started sshd@21-10.0.0.17:22-10.0.0.1:48644.service - OpenSSH per-connection server daemon (10.0.0.1:48644).
Sep 12 05:50:03.563770 sshd[5630]: Accepted publickey for core from 10.0.0.1 port 48644 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:50:03.565986 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:50:03.570737 systemd-logind[1577]: New session 22 of user core.
Sep 12 05:50:03.577659 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 05:50:03.691386 sshd[5633]: Connection closed by 10.0.0.1 port 48644
Sep 12 05:50:03.691818 sshd-session[5630]: pam_unix(sshd:session): session closed for user core
Sep 12 05:50:03.696887 systemd[1]: sshd@21-10.0.0.17:22-10.0.0.1:48644.service: Deactivated successfully.
Sep 12 05:50:03.699281 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 05:50:03.700163 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit.
Sep 12 05:50:03.701546 systemd-logind[1577]: Removed session 22.
Sep 12 05:50:04.514676 kubelet[2749]: E0912 05:50:04.514628 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 05:50:08.065376 containerd[1592]: time="2025-09-12T05:50:08.065312191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75\" id:\"2da4fcd3fa02fac5a2a29265c352ab97c08251e9c8bc020eeed2aee25e57ec64\" pid:5661 exited_at:{seconds:1757656208 nanos:64933985}"
Sep 12 05:50:08.705296 systemd[1]: Started sshd@22-10.0.0.17:22-10.0.0.1:48656.service - OpenSSH per-connection server daemon (10.0.0.1:48656).
Sep 12 05:50:08.764595 sshd[5676]: Accepted publickey for core from 10.0.0.1 port 48656 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:50:08.766370 sshd-session[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:50:08.770942 systemd-logind[1577]: New session 23 of user core.
Sep 12 05:50:08.781643 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 05:50:08.907073 sshd[5679]: Connection closed by 10.0.0.1 port 48656
Sep 12 05:50:08.907565 sshd-session[5676]: pam_unix(sshd:session): session closed for user core
Sep 12 05:50:08.913163 systemd[1]: sshd@22-10.0.0.17:22-10.0.0.1:48656.service: Deactivated successfully.
Sep 12 05:50:08.915655 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 05:50:08.916711 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit.
Sep 12 05:50:08.918218 systemd-logind[1577]: Removed session 23.
Sep 12 05:50:09.061593 containerd[1592]: time="2025-09-12T05:50:09.061547430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccb3f3db2eba60ea4f68c7d09a3ae97e90791b582b1ef076cc5a8bed75835d75\" id:\"e134aa8d47c0edb35bc898d588ef23fa30962ad9f8453356aaef473e3fd2454b\" pid:5703 exited_at:{seconds:1757656209 nanos:61190628}"
Sep 12 05:50:13.929723 systemd[1]: Started sshd@23-10.0.0.17:22-10.0.0.1:59590.service - OpenSSH per-connection server daemon (10.0.0.1:59590).
Sep 12 05:50:14.027085 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 59590 ssh2: RSA SHA256:U1JO+eJG2JU9nuyVYS4dzqqYhW7JLNNCX6TNK3ddyUk
Sep 12 05:50:14.029430 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 05:50:14.035754 systemd-logind[1577]: New session 24 of user core.
Sep 12 05:50:14.043682 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 05:50:14.315888 sshd[5720]: Connection closed by 10.0.0.1 port 59590
Sep 12 05:50:14.318321 sshd-session[5717]: pam_unix(sshd:session): session closed for user core
Sep 12 05:50:14.323639 systemd[1]: sshd@23-10.0.0.17:22-10.0.0.1:59590.service: Deactivated successfully.
Sep 12 05:50:14.326477 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 05:50:14.328250 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit.
Sep 12 05:50:14.330173 systemd-logind[1577]: Removed session 24.