Sep 12 00:17:12.867685 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 11 22:16:52 -00 2025
Sep 12 00:17:12.867723 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7794b6bf71a37449b8ef0617d533e34208c88beb959bf84503da9899186bdb34
Sep 12 00:17:12.867739 kernel: BIOS-provided physical RAM map:
Sep 12 00:17:12.867749 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 00:17:12.867758 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 12 00:17:12.867767 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 12 00:17:12.867778 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 12 00:17:12.867787 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 12 00:17:12.867800 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 12 00:17:12.867813 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 12 00:17:12.867823 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 12 00:17:12.867831 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 12 00:17:12.867840 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 12 00:17:12.867849 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 12 00:17:12.867861 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 12 00:17:12.867874 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 12 00:17:12.867887 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 12 00:17:12.867897 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 12 00:17:12.867906 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 12 00:17:12.867916 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 12 00:17:12.867925 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 12 00:17:12.867934 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 12 00:17:12.867944 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 00:17:12.867953 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 00:17:12.867963 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 12 00:17:12.867976 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 00:17:12.867985 kernel: NX (Execute Disable) protection: active
Sep 12 00:17:12.867994 kernel: APIC: Static calls initialized
Sep 12 00:17:12.868003 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 12 00:17:12.868013 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 12 00:17:12.868023 kernel: extended physical RAM map:
Sep 12 00:17:12.868033 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 00:17:12.868042 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 12 00:17:12.868052 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 12 00:17:12.868071 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 12 00:17:12.868081 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 12 00:17:12.868095 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 12 00:17:12.868104 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 12 00:17:12.868114 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 12 00:17:12.868123 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 12 00:17:12.868139 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 12 00:17:12.868149 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 12 00:17:12.868161 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 12 00:17:12.868171 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 12 00:17:12.868181 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 12 00:17:12.868191 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 12 00:17:12.868201 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 12 00:17:12.868211 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 12 00:17:12.868220 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 12 00:17:12.868230 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 12 00:17:12.868240 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 12 00:17:12.868250 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 12 00:17:12.868262 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 12 00:17:12.868272 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 12 00:17:12.868282 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 00:17:12.868291 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 00:17:12.868301 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 12 00:17:12.868311 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 00:17:12.868324 kernel: efi: EFI v2.7 by EDK II
Sep 12 00:17:12.868334 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 12 00:17:12.868344 kernel: random: crng init done
Sep 12 00:17:12.868355 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 12 00:17:12.868362 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 12 00:17:12.868375 kernel: secureboot: Secure boot disabled
Sep 12 00:17:12.868382 kernel: SMBIOS 2.8 present.
Sep 12 00:17:12.868390 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 12 00:17:12.868397 kernel: DMI: Memory slots populated: 1/1
Sep 12 00:17:12.868404 kernel: Hypervisor detected: KVM
Sep 12 00:17:12.868412 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 00:17:12.868419 kernel: kvm-clock: using sched offset of 4889213455 cycles
Sep 12 00:17:12.868449 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 00:17:12.868460 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 00:17:12.868470 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 00:17:12.868480 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 00:17:12.868494 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 12 00:17:12.868504 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 00:17:12.868515 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 00:17:12.868525 kernel: Using GB pages for direct mapping
Sep 12 00:17:12.868535 kernel: ACPI: Early table checksum verification disabled
Sep 12 00:17:12.868545 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 12 00:17:12.868555 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 00:17:12.868566 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:17:12.868576 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:17:12.868590 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 12 00:17:12.868601 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:17:12.868611 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:17:12.868621 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:17:12.868631 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:17:12.868641 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 00:17:12.868651 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 12 00:17:12.868661 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 12 00:17:12.868676 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 12 00:17:12.868686 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 12 00:17:12.868696 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 12 00:17:12.868705 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 12 00:17:12.868715 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 12 00:17:12.868725 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 12 00:17:12.868735 kernel: No NUMA configuration found
Sep 12 00:17:12.868745 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 12 00:17:12.868755 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 12 00:17:12.868765 kernel: Zone ranges:
Sep 12 00:17:12.868778 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 00:17:12.868788 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 12 00:17:12.868798 kernel: Normal empty
Sep 12 00:17:12.868807 kernel: Device empty
Sep 12 00:17:12.868817 kernel: Movable zone start for each node
Sep 12 00:17:12.868827 kernel: Early memory node ranges
Sep 12 00:17:12.868837 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 00:17:12.868847 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 12 00:17:12.868861 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 12 00:17:12.868875 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 12 00:17:12.868885 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 12 00:17:12.868894 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 12 00:17:12.868904 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 12 00:17:12.868914 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 12 00:17:12.868924 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 12 00:17:12.868934 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 00:17:12.868948 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 00:17:12.868970 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 12 00:17:12.868980 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 00:17:12.868990 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 12 00:17:12.869001 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 12 00:17:12.869014 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 00:17:12.869024 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 12 00:17:12.869035 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 12 00:17:12.869045 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 00:17:12.869056 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 00:17:12.869078 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 00:17:12.869089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 00:17:12.869100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 00:17:12.869110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 00:17:12.869121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 00:17:12.869131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 00:17:12.869142 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 00:17:12.869153 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 00:17:12.869163 kernel: TSC deadline timer available
Sep 12 00:17:12.869178 kernel: CPU topo: Max. logical packages: 1
Sep 12 00:17:12.869188 kernel: CPU topo: Max. logical dies: 1
Sep 12 00:17:12.869199 kernel: CPU topo: Max. dies per package: 1
Sep 12 00:17:12.869209 kernel: CPU topo: Max. threads per core: 1
Sep 12 00:17:12.869219 kernel: CPU topo: Num. cores per package: 4
Sep 12 00:17:12.869230 kernel: CPU topo: Num. threads per package: 4
Sep 12 00:17:12.869240 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 00:17:12.869251 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 00:17:12.869262 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 00:17:12.869272 kernel: kvm-guest: setup PV sched yield
Sep 12 00:17:12.869287 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 12 00:17:12.869298 kernel: Booting paravirtualized kernel on KVM
Sep 12 00:17:12.869309 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 00:17:12.869319 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 00:17:12.869330 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 00:17:12.869340 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 00:17:12.869351 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 00:17:12.869361 kernel: kvm-guest: PV spinlocks enabled
Sep 12 00:17:12.869375 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 00:17:12.869387 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7794b6bf71a37449b8ef0617d533e34208c88beb959bf84503da9899186bdb34
Sep 12 00:17:12.869402 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 00:17:12.869413 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 00:17:12.869442 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 00:17:12.869460 kernel: Fallback order for Node 0: 0
Sep 12 00:17:12.869470 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 12 00:17:12.869480 kernel: Policy zone: DMA32
Sep 12 00:17:12.869491 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 00:17:12.869506 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 00:17:12.869517 kernel: ftrace: allocating 40123 entries in 157 pages
Sep 12 00:17:12.869528 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 00:17:12.869538 kernel: Dynamic Preempt: voluntary
Sep 12 00:17:12.869549 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 00:17:12.869561 kernel: rcu: RCU event tracing is enabled.
Sep 12 00:17:12.869572 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 00:17:12.869583 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 00:17:12.869594 kernel: Rude variant of Tasks RCU enabled.
Sep 12 00:17:12.869609 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 00:17:12.869620 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 00:17:12.869636 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 00:17:12.869647 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 00:17:12.869659 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 00:17:12.869669 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 00:17:12.869680 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 00:17:12.869691 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 00:17:12.869701 kernel: Console: colour dummy device 80x25
Sep 12 00:17:12.869716 kernel: printk: legacy console [ttyS0] enabled
Sep 12 00:17:12.869727 kernel: ACPI: Core revision 20240827
Sep 12 00:17:12.869737 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 00:17:12.869748 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 00:17:12.869758 kernel: x2apic enabled
Sep 12 00:17:12.869769 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 00:17:12.869779 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 00:17:12.869790 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 00:17:12.869800 kernel: kvm-guest: setup PV IPIs
Sep 12 00:17:12.869813 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 00:17:12.869824 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 00:17:12.869835 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 00:17:12.869846 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 00:17:12.869856 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 00:17:12.869867 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 00:17:12.869877 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 00:17:12.869888 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 00:17:12.869899 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 00:17:12.869913 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 00:17:12.869923 kernel: active return thunk: retbleed_return_thunk
Sep 12 00:17:12.869934 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 00:17:12.869948 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 00:17:12.869967 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 00:17:12.869978 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 00:17:12.869990 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 00:17:12.870001 kernel: active return thunk: srso_return_thunk
Sep 12 00:17:12.870016 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 00:17:12.870027 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 00:17:12.870037 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 00:17:12.870048 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 00:17:12.870068 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 00:17:12.870080 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 00:17:12.870090 kernel: Freeing SMP alternatives memory: 32K
Sep 12 00:17:12.870101 kernel: pid_max: default: 32768 minimum: 301
Sep 12 00:17:12.870113 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 00:17:12.870127 kernel: landlock: Up and running.
Sep 12 00:17:12.870138 kernel: SELinux: Initializing.
Sep 12 00:17:12.870149 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 00:17:12.870160 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 00:17:12.870171 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 00:17:12.870182 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 00:17:12.870193 kernel: ... version: 0
Sep 12 00:17:12.870204 kernel: ... bit width: 48
Sep 12 00:17:12.870215 kernel: ... generic registers: 6
Sep 12 00:17:12.870229 kernel: ... value mask: 0000ffffffffffff
Sep 12 00:17:12.870240 kernel: ... max period: 00007fffffffffff
Sep 12 00:17:12.870251 kernel: ... fixed-purpose events: 0
Sep 12 00:17:12.870261 kernel: ... event mask: 000000000000003f
Sep 12 00:17:12.870272 kernel: signal: max sigframe size: 1776
Sep 12 00:17:12.870282 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 00:17:12.870294 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 00:17:12.870310 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 00:17:12.870320 kernel: smp: Bringing up secondary CPUs ...
Sep 12 00:17:12.870335 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 00:17:12.870346 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 00:17:12.870356 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 00:17:12.870367 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 00:17:12.870379 kernel: Memory: 2422672K/2565800K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54048K init, 2916K bss, 137196K reserved, 0K cma-reserved)
Sep 12 00:17:12.870390 kernel: devtmpfs: initialized
Sep 12 00:17:12.870401 kernel: x86/mm: Memory block size: 128MB
Sep 12 00:17:12.870412 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 12 00:17:12.870423 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 12 00:17:12.870466 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 12 00:17:12.870477 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 12 00:17:12.870489 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 12 00:17:12.870499 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 12 00:17:12.870510 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 00:17:12.870521 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 00:17:12.870532 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 00:17:12.870543 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 00:17:12.870554 kernel: audit: initializing netlink subsys (disabled)
Sep 12 00:17:12.870568 kernel: audit: type=2000 audit(1757636229.593:1): state=initialized audit_enabled=0 res=1
Sep 12 00:17:12.870579 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 00:17:12.870589 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 00:17:12.870600 kernel: cpuidle: using governor menu
Sep 12 00:17:12.870611 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 00:17:12.870622 kernel: dca service started, version 1.12.1
Sep 12 00:17:12.870633 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 12 00:17:12.870644 kernel: PCI: Using configuration type 1 for base access
Sep 12 00:17:12.870655 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 00:17:12.870671 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 00:17:12.870682 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 00:17:12.870693 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 00:17:12.870703 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 00:17:12.870714 kernel: ACPI: Added _OSI(Module Device)
Sep 12 00:17:12.870725 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 00:17:12.870735 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 00:17:12.870746 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 00:17:12.870756 kernel: ACPI: Interpreter enabled
Sep 12 00:17:12.870770 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 00:17:12.870780 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 00:17:12.870791 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 00:17:12.870802 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 00:17:12.870812 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 00:17:12.870823 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 00:17:12.871082 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 00:17:12.871249 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 00:17:12.871417 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 00:17:12.871464 kernel: PCI host bridge to bus 0000:00
Sep 12 00:17:12.871669 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 00:17:12.871838 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 00:17:12.871991 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 00:17:12.872157 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 12 00:17:12.872316 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 12 00:17:12.872503 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 12 00:17:12.872658 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 00:17:12.872860 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 00:17:12.873042 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 00:17:12.873219 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 12 00:17:12.873376 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 12 00:17:12.873567 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 00:17:12.873723 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 00:17:12.873909 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 00:17:12.874078 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 12 00:17:12.874238 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 12 00:17:12.874398 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 12 00:17:12.874597 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 00:17:12.874775 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 12 00:17:12.874949 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 12 00:17:12.875153 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 12 00:17:12.875345 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 00:17:12.875619 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 12 00:17:12.875794 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 12 00:17:12.875966 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 12 00:17:12.876152 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 12 00:17:12.876374 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 00:17:12.876567 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 00:17:12.876756 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 00:17:12.876926 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 12 00:17:12.877130 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 12 00:17:12.877318 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 00:17:12.877505 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 12 00:17:12.877521 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 00:17:12.877532 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 00:17:12.877543 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 00:17:12.877554 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 00:17:12.877564 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 00:17:12.877575 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 00:17:12.877591 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 00:17:12.877601 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 00:17:12.877612 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 00:17:12.877623 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 00:17:12.877634 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 00:17:12.877644 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 00:17:12.877655 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 00:17:12.877666 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 00:17:12.877677 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 00:17:12.877690 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 00:17:12.877701 kernel: iommu: Default domain type: Translated
Sep 12 00:17:12.877712 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 00:17:12.877722 kernel: efivars: Registered efivars operations
Sep 12 00:17:12.877733 kernel: PCI: Using ACPI for IRQ routing
Sep 12 00:17:12.877744 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 00:17:12.877755 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 12 00:17:12.877764 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 12 00:17:12.877775 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 12 00:17:12.877790 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 12 00:17:12.877800 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 12 00:17:12.877812 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 12 00:17:12.877823 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 12 00:17:12.877834 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 12 00:17:12.878004 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 00:17:12.878224 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 00:17:12.878387 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 00:17:12.878409 kernel: vgaarb: loaded
Sep 12 00:17:12.878420 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 00:17:12.878452 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 00:17:12.878464 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 00:17:12.878475 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 00:17:12.878486 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 00:17:12.878497 kernel: pnp: PnP ACPI init
Sep 12 00:17:12.878713 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 12 00:17:12.878739 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 00:17:12.878752 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 00:17:12.878763 kernel: NET: Registered PF_INET protocol family
Sep 12 00:17:12.878775 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 00:17:12.878787 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 00:17:12.878798 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 00:17:12.878810 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 00:17:12.878822 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 00:17:12.878837 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 00:17:12.878852 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 00:17:12.878864 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 00:17:12.878876 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 00:17:12.878887 kernel: NET: Registered PF_XDP protocol family
Sep 12 00:17:12.879052 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 12 00:17:12.879227 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 12 00:17:12.879377 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 00:17:12.879543 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 00:17:12.879733 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 00:17:12.879895 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 12 00:17:12.880071 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 12 00:17:12.880214 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 12 00:17:12.880231 kernel: PCI: CLS 0 bytes, default 64
Sep 12 00:17:12.880243 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 00:17:12.880255 kernel: Initialise system trusted keyrings
Sep 12 00:17:12.880272 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 00:17:12.880284 kernel: Key type asymmetric registered
Sep 12 00:17:12.880296 kernel: Asymmetric key parser 'x509' registered
Sep 12 00:17:12.880308 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 00:17:12.880320 kernel: io scheduler mq-deadline registered
Sep 12 00:17:12.880331 kernel: io scheduler kyber registered
Sep 12 00:17:12.880343 kernel: io scheduler bfq registered
Sep 12 00:17:12.880358 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 00:17:12.880370 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 00:17:12.880382 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 00:17:12.880393 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 00:17:12.880405 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 00:17:12.880417 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 00:17:12.880449 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 00:17:12.880462 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 00:17:12.880473 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 00:17:12.880489 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 00:17:12.880667 kernel: rtc_cmos 00:04: RTC can
wake from S4 Sep 12 00:17:12.880831 kernel: rtc_cmos 00:04: registered as rtc0 Sep 12 00:17:12.881035 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T00:17:12 UTC (1757636232) Sep 12 00:17:12.881215 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 12 00:17:12.881230 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 12 00:17:12.881241 kernel: efifb: probing for efifb Sep 12 00:17:12.881250 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 12 00:17:12.881264 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 12 00:17:12.881272 kernel: efifb: scrolling: redraw Sep 12 00:17:12.881280 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 00:17:12.881289 kernel: Console: switching to colour frame buffer device 160x50 Sep 12 00:17:12.881298 kernel: fb0: EFI VGA frame buffer device Sep 12 00:17:12.881306 kernel: pstore: Using crash dump compression: deflate Sep 12 00:17:12.881315 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 00:17:12.881323 kernel: NET: Registered PF_INET6 protocol family Sep 12 00:17:12.881332 kernel: Segment Routing with IPv6 Sep 12 00:17:12.881343 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 00:17:12.881352 kernel: NET: Registered PF_PACKET protocol family Sep 12 00:17:12.881360 kernel: Key type dns_resolver registered Sep 12 00:17:12.881368 kernel: IPI shorthand broadcast: enabled Sep 12 00:17:12.881377 kernel: sched_clock: Marking stable (3834003838, 171256088)->(4025729548, -20469622) Sep 12 00:17:12.881386 kernel: registered taskstats version 1 Sep 12 00:17:12.881394 kernel: Loading compiled-in X.509 certificates Sep 12 00:17:12.881403 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 7f0ac4b747edc7786b3c2c5a8c3072fe759c894b' Sep 12 00:17:12.881411 kernel: Demotion targets for Node 0: null Sep 12 00:17:12.881421 kernel: Key type .fscrypt registered Sep 12 00:17:12.881465 kernel: Key type 
fscrypt-provisioning registered Sep 12 00:17:12.881478 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 00:17:12.881490 kernel: ima: Allocated hash algorithm: sha1 Sep 12 00:17:12.881502 kernel: ima: No architecture policies found Sep 12 00:17:12.881513 kernel: clk: Disabling unused clocks Sep 12 00:17:12.881524 kernel: Warning: unable to open an initial console. Sep 12 00:17:12.881536 kernel: Freeing unused kernel image (initmem) memory: 54048K Sep 12 00:17:12.881547 kernel: Write protecting the kernel read-only data: 24576k Sep 12 00:17:12.881563 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 12 00:17:12.881575 kernel: Run /init as init process Sep 12 00:17:12.881586 kernel: with arguments: Sep 12 00:17:12.881597 kernel: /init Sep 12 00:17:12.881609 kernel: with environment: Sep 12 00:17:12.881620 kernel: HOME=/ Sep 12 00:17:12.881632 kernel: TERM=linux Sep 12 00:17:12.881643 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 00:17:12.881656 systemd[1]: Successfully made /usr/ read-only. Sep 12 00:17:12.881675 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 00:17:12.881688 systemd[1]: Detected virtualization kvm. Sep 12 00:17:12.881699 systemd[1]: Detected architecture x86-64. Sep 12 00:17:12.881711 systemd[1]: Running in initrd. Sep 12 00:17:12.881723 systemd[1]: No hostname configured, using default hostname. Sep 12 00:17:12.881736 systemd[1]: Hostname set to . Sep 12 00:17:12.881748 systemd[1]: Initializing machine ID from VM UUID. Sep 12 00:17:12.881764 systemd[1]: Queued start job for default target initrd.target. 
Sep 12 00:17:12.881777 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 00:17:12.881789 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 00:17:12.881803 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 00:17:12.881816 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 00:17:12.881828 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 00:17:12.881841 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 00:17:12.881860 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 00:17:12.881873 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 00:17:12.881885 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 00:17:12.881897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 00:17:12.881909 systemd[1]: Reached target paths.target - Path Units. Sep 12 00:17:12.881922 systemd[1]: Reached target slices.target - Slice Units. Sep 12 00:17:12.881934 systemd[1]: Reached target swap.target - Swaps. Sep 12 00:17:12.881947 systemd[1]: Reached target timers.target - Timer Units. Sep 12 00:17:12.881963 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 00:17:12.881976 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 00:17:12.881988 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 00:17:12.882001 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Sep 12 00:17:12.882013 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 00:17:12.882026 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 00:17:12.882038 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 00:17:12.882053 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 00:17:12.882077 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 00:17:12.882094 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 00:17:12.882106 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 00:17:12.882120 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 00:17:12.882132 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 00:17:12.882145 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 00:17:12.882156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 00:17:12.882168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 00:17:12.882180 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 00:17:12.882198 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 00:17:12.882210 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 00:17:12.882223 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 00:17:12.882267 systemd-journald[220]: Collecting audit messages is disabled. Sep 12 00:17:12.882305 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 12 00:17:12.882319 systemd-journald[220]: Journal started Sep 12 00:17:12.882348 systemd-journald[220]: Runtime Journal (/run/log/journal/5f5e4f88934d4c20acd93393cfc8a15a) is 6M, max 48.4M, 42.4M free. Sep 12 00:17:12.865120 systemd-modules-load[222]: Inserted module 'overlay' Sep 12 00:17:12.886450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 00:17:12.886481 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 00:17:12.895473 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 00:17:12.897296 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 12 00:17:12.898245 kernel: Bridge firewalling registered Sep 12 00:17:12.900696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 00:17:12.901818 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 00:17:12.908563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 00:17:12.911624 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 00:17:12.915704 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 00:17:12.938753 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 00:17:12.947861 systemd-tmpfiles[244]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 00:17:12.950190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 00:17:12.953556 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 00:17:12.955907 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 00:17:12.959348 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 00:17:12.961788 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 00:17:12.996700 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7794b6bf71a37449b8ef0617d533e34208c88beb959bf84503da9899186bdb34 Sep 12 00:17:13.020214 systemd-resolved[261]: Positive Trust Anchors: Sep 12 00:17:13.020241 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 00:17:13.020270 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 00:17:13.024546 systemd-resolved[261]: Defaulting to hostname 'linux'. Sep 12 00:17:13.044738 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 00:17:13.046422 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 00:17:13.155480 kernel: SCSI subsystem initialized Sep 12 00:17:13.165461 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 00:17:13.176457 kernel: iscsi: registered transport (tcp) Sep 12 00:17:13.201489 kernel: iscsi: registered transport (qla4xxx) Sep 12 00:17:13.201565 kernel: QLogic iSCSI HBA Driver Sep 12 00:17:13.229773 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 00:17:13.260083 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 00:17:13.262297 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 00:17:13.324978 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 00:17:13.326916 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 00:17:13.389469 kernel: raid6: avx2x4 gen() 29869 MB/s Sep 12 00:17:13.406473 kernel: raid6: avx2x2 gen() 30312 MB/s Sep 12 00:17:13.423488 kernel: raid6: avx2x1 gen() 25693 MB/s Sep 12 00:17:13.423517 kernel: raid6: using algorithm avx2x2 gen() 30312 MB/s Sep 12 00:17:13.441535 kernel: raid6: .... xor() 19649 MB/s, rmw enabled Sep 12 00:17:13.441561 kernel: raid6: using avx2x2 recovery algorithm Sep 12 00:17:13.462456 kernel: xor: automatically using best checksumming function avx Sep 12 00:17:13.628470 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 00:17:13.636698 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 00:17:13.639016 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 00:17:13.674005 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 12 00:17:13.680686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 00:17:13.682123 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 00:17:13.704741 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Sep 12 00:17:13.734032 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 00:17:13.736375 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 00:17:13.817717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 00:17:13.822640 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 00:17:13.869474 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 00:17:13.898147 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 00:17:13.898194 kernel: AES CTR mode by8 optimization enabled Sep 12 00:17:13.903458 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 00:17:13.913483 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 00:17:13.919900 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 00:17:13.919955 kernel: GPT:9289727 != 19775487 Sep 12 00:17:13.919970 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 00:17:13.919985 kernel: GPT:9289727 != 19775487 Sep 12 00:17:13.920823 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 00:17:13.920858 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 00:17:13.926969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 00:17:13.927223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 00:17:13.931083 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 00:17:13.936400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 00:17:13.937082 kernel: libata version 3.00 loaded. Sep 12 00:17:13.940057 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 12 00:17:13.953454 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 00:17:13.954492 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 00:17:13.956740 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 12 00:17:13.956948 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 12 00:17:13.957141 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 00:17:13.961731 kernel: scsi host0: ahci Sep 12 00:17:13.961971 kernel: scsi host1: ahci Sep 12 00:17:13.962184 kernel: scsi host2: ahci Sep 12 00:17:13.962366 kernel: scsi host3: ahci Sep 12 00:17:13.962581 kernel: scsi host4: ahci Sep 12 00:17:13.963471 kernel: scsi host5: ahci Sep 12 00:17:13.964614 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 12 00:17:13.964642 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 12 00:17:13.965521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 00:17:13.971222 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 12 00:17:13.971245 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 12 00:17:13.971260 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 12 00:17:13.971275 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 12 00:17:13.965704 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 00:17:13.997474 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 00:17:14.006522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 00:17:14.013680 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 12 00:17:14.013948 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 00:17:14.025546 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 00:17:14.028335 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 00:17:14.031485 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 00:17:14.049098 disk-uuid[632]: Primary Header is updated. Sep 12 00:17:14.049098 disk-uuid[632]: Secondary Entries is updated. Sep 12 00:17:14.049098 disk-uuid[632]: Secondary Header is updated. Sep 12 00:17:14.052463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 00:17:14.057457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 00:17:14.058828 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 00:17:14.284489 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 00:17:14.284580 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 00:17:14.285484 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 00:17:14.285580 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 00:17:14.286766 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 00:17:14.286791 kernel: ata3.00: applying bridge limits Sep 12 00:17:14.287464 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 00:17:14.288476 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 00:17:14.289457 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 00:17:14.289472 kernel: ata3.00: configured for UDMA/100 Sep 12 00:17:14.290454 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 00:17:14.291793 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 00:17:14.334980 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 00:17:14.335264 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 00:17:14.353463 
kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 00:17:14.707317 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 00:17:14.709208 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 00:17:14.710772 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 00:17:14.711991 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 00:17:14.715253 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 00:17:14.753061 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 00:17:15.059506 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 00:17:15.059580 disk-uuid[635]: The operation has completed successfully. Sep 12 00:17:15.091010 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 00:17:15.091160 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 00:17:15.130987 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 00:17:15.157487 sh[666]: Success Sep 12 00:17:15.175842 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 00:17:15.175931 kernel: device-mapper: uevent: version 1.0.3 Sep 12 00:17:15.175951 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 00:17:15.186481 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 00:17:15.219051 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 00:17:15.222916 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 00:17:15.244422 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 00:17:15.249738 kernel: BTRFS: device fsid ec8d3ca5-0acc-4472-a648-2b3bd2a05eb0 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (678) Sep 12 00:17:15.249768 kernel: BTRFS info (device dm-0): first mount of filesystem ec8d3ca5-0acc-4472-a648-2b3bd2a05eb0 Sep 12 00:17:15.249779 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 00:17:15.254983 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 00:17:15.255024 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 00:17:15.256234 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 00:17:15.257296 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 00:17:15.258155 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 00:17:15.259070 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 00:17:15.261564 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 00:17:15.288452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711) Sep 12 00:17:15.288506 kernel: BTRFS info (device vda6): first mount of filesystem dd800f66-810a-4e8b-aa6f-9840817fe6b0 Sep 12 00:17:15.290271 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 00:17:15.292896 kernel: BTRFS info (device vda6): turning on async discard Sep 12 00:17:15.292924 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 00:17:15.297466 kernel: BTRFS info (device vda6): last unmount of filesystem dd800f66-810a-4e8b-aa6f-9840817fe6b0 Sep 12 00:17:15.298196 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 00:17:15.301564 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 00:17:15.396113 ignition[753]: Ignition 2.21.0 Sep 12 00:17:15.396126 ignition[753]: Stage: fetch-offline Sep 12 00:17:15.396154 ignition[753]: no configs at "/usr/lib/ignition/base.d" Sep 12 00:17:15.396164 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 00:17:15.398831 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 00:17:15.396251 ignition[753]: parsed url from cmdline: "" Sep 12 00:17:15.401470 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 00:17:15.396257 ignition[753]: no config URL provided Sep 12 00:17:15.396264 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 00:17:15.396277 ignition[753]: no config at "/usr/lib/ignition/user.ign" Sep 12 00:17:15.396302 ignition[753]: op(1): [started] loading QEMU firmware config module Sep 12 00:17:15.396307 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 00:17:15.414971 ignition[753]: op(1): [finished] loading QEMU firmware config module Sep 12 00:17:15.444616 systemd-networkd[857]: lo: Link UP Sep 12 00:17:15.444624 systemd-networkd[857]: lo: Gained carrier Sep 12 00:17:15.446252 systemd-networkd[857]: Enumeration completed Sep 12 00:17:15.446475 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 00:17:15.447286 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 00:17:15.447290 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 00:17:15.447704 systemd-networkd[857]: eth0: Link UP Sep 12 00:17:15.448673 systemd-networkd[857]: eth0: Gained carrier Sep 12 00:17:15.448682 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 00:17:15.452664 systemd[1]: Reached target network.target - Network. Sep 12 00:17:15.464353 ignition[753]: parsing config with SHA512: 699d9cc07c4f015fe5d79711d61591bea68fb6b1176238165373d485fac67f1fe378fb3ecbe652aa127f7e7cc68b6ec57d4f9f7f9e5bd29b201dd493cb5089b9 Sep 12 00:17:15.465485 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 00:17:15.468251 unknown[753]: fetched base config from "system" Sep 12 00:17:15.468261 unknown[753]: fetched user config from "qemu" Sep 12 00:17:15.468577 ignition[753]: fetch-offline: fetch-offline passed Sep 12 00:17:15.468631 ignition[753]: Ignition finished successfully Sep 12 00:17:15.473901 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 00:17:15.474606 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 00:17:15.476248 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 00:17:15.526675 ignition[862]: Ignition 2.21.0 Sep 12 00:17:15.526689 ignition[862]: Stage: kargs Sep 12 00:17:15.527056 ignition[862]: no configs at "/usr/lib/ignition/base.d" Sep 12 00:17:15.527073 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 00:17:15.529860 ignition[862]: kargs: kargs passed Sep 12 00:17:15.529924 ignition[862]: Ignition finished successfully Sep 12 00:17:15.535757 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 00:17:15.538076 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 00:17:15.572184 ignition[870]: Ignition 2.21.0 Sep 12 00:17:15.572199 ignition[870]: Stage: disks Sep 12 00:17:15.572661 ignition[870]: no configs at "/usr/lib/ignition/base.d" Sep 12 00:17:15.572678 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 00:17:15.574383 ignition[870]: disks: disks passed Sep 12 00:17:15.574538 ignition[870]: Ignition finished successfully Sep 12 00:17:15.577883 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 00:17:15.580347 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 00:17:15.581595 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 00:17:15.583842 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 00:17:15.586045 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 00:17:15.588019 systemd[1]: Reached target basic.target - Basic System. Sep 12 00:17:15.591211 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 00:17:15.631944 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 00:17:15.639442 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 00:17:15.643589 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 00:17:15.753454 kernel: EXT4-fs (vda9): mounted filesystem 2b0516a2-9b75-4ad7-aa6a-616021c6ba5f r/w with ordered data mode. Quota mode: none. Sep 12 00:17:15.753871 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 00:17:15.755223 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 00:17:15.757561 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 00:17:15.759180 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 00:17:15.760381 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 12 00:17:15.760421 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 00:17:15.760459 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 00:17:15.779925 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 00:17:15.783828 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 00:17:15.788913 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888) Sep 12 00:17:15.788935 kernel: BTRFS info (device vda6): first mount of filesystem dd800f66-810a-4e8b-aa6f-9840817fe6b0 Sep 12 00:17:15.788946 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 00:17:15.792586 kernel: BTRFS info (device vda6): turning on async discard Sep 12 00:17:15.792619 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 00:17:15.795679 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 00:17:15.823279 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 00:17:15.829161 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory Sep 12 00:17:15.833490 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 00:17:15.838287 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 00:17:15.928144 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 00:17:15.931368 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 00:17:15.932731 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 00:17:15.954559 kernel: BTRFS info (device vda6): last unmount of filesystem dd800f66-810a-4e8b-aa6f-9840817fe6b0 Sep 12 00:17:15.965619 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 00:17:15.981456 ignition[1002]: INFO : Ignition 2.21.0
Sep 12 00:17:15.983381 ignition[1002]: INFO : Stage: mount
Sep 12 00:17:15.983381 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 00:17:15.983381 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:17:15.986182 ignition[1002]: INFO : mount: mount passed
Sep 12 00:17:15.986182 ignition[1002]: INFO : Ignition finished successfully
Sep 12 00:17:15.987992 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 00:17:15.989715 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 00:17:16.248958 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 00:17:16.250686 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 00:17:16.284452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Sep 12 00:17:16.286487 kernel: BTRFS info (device vda6): first mount of filesystem dd800f66-810a-4e8b-aa6f-9840817fe6b0
Sep 12 00:17:16.286510 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 00:17:16.289461 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 00:17:16.289490 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 00:17:16.290878 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 00:17:16.326987 ignition[1031]: INFO : Ignition 2.21.0
Sep 12 00:17:16.328527 ignition[1031]: INFO : Stage: files
Sep 12 00:17:16.329323 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 00:17:16.329323 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:17:16.331594 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 00:17:16.332698 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 00:17:16.332698 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 00:17:16.337504 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 00:17:16.338945 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 00:17:16.340602 unknown[1031]: wrote ssh authorized keys file for user: core
Sep 12 00:17:16.341719 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 00:17:16.343357 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 00:17:16.345301 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 12 00:17:16.811625 systemd-networkd[857]: eth0: Gained IPv6LL
Sep 12 00:17:16.894372 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 00:17:17.159916 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 00:17:17.159916 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 00:17:17.163554 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 00:17:17.165196 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 00:17:17.166894 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 00:17:17.168556 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 00:17:17.170272 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 00:17:17.171926 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 00:17:17.173635 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 00:17:17.178933 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 00:17:17.181253 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 00:17:17.181253 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:17:17.185668 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:17:17.185668 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:17:17.185668 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 12 00:17:17.486952 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 00:17:17.876969 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:17:17.876969 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 00:17:17.880776 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 00:17:17.886333 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 00:17:17.886333 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 00:17:17.886333 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 00:17:17.890732 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 00:17:17.890732 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 00:17:17.890732 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 00:17:17.890732 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 00:17:17.914618 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 00:17:17.919773 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 00:17:17.921615 ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 00:17:17.921615 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 00:17:17.921615 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 00:17:17.921615 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 00:17:17.921615 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 00:17:17.921615 ignition[1031]: INFO : files: files passed
Sep 12 00:17:17.921615 ignition[1031]: INFO : Ignition finished successfully
Sep 12 00:17:17.926202 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 00:17:17.928970 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 00:17:17.931617 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 00:17:17.949713 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 00:17:17.949886 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 00:17:17.953998 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 00:17:17.955738 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 00:17:17.955738 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 00:17:17.959705 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 00:17:17.961697 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 00:17:17.962898 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 00:17:17.965662 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 00:17:18.031191 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 00:17:18.031385 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 00:17:18.032928 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 00:17:18.035182 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 00:17:18.035725 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 00:17:18.038559 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 00:17:18.061771 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 00:17:18.066043 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 00:17:18.090536 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 00:17:18.092796 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 00:17:18.093158 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 00:17:18.093481 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 00:17:18.093598 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 00:17:18.098480 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 00:17:18.098961 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 00:17:18.099273 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 00:17:18.099753 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 00:17:18.100084 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 00:17:18.100400 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 00:17:18.100882 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 00:17:18.101210 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 00:17:18.101698 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 00:17:18.102033 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 00:17:18.102345 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 00:17:18.102656 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 00:17:18.102792 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 00:17:18.121234 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 00:17:18.121892 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 00:17:18.122179 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 00:17:18.126804 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 00:17:18.127444 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 00:17:18.127561 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 00:17:18.128243 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 00:17:18.128380 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 00:17:18.133293 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 00:17:18.135217 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 00:17:18.140535 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 00:17:18.140892 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 00:17:18.141222 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 00:17:18.141714 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 00:17:18.141806 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 00:17:18.146983 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 00:17:18.147082 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 00:17:18.148842 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 00:17:18.148978 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 00:17:18.150586 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 00:17:18.150700 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 00:17:18.155233 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 00:17:18.158729 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 00:17:18.159255 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 00:17:18.159526 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 00:17:18.161855 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 00:17:18.162040 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 00:17:18.171782 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 00:17:18.171931 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 00:17:18.630773 ignition[1087]: INFO : Ignition 2.21.0
Sep 12 00:17:18.630773 ignition[1087]: INFO : Stage: umount
Sep 12 00:17:18.633254 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 00:17:18.633254 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:17:18.637590 ignition[1087]: INFO : umount: umount passed
Sep 12 00:17:18.637590 ignition[1087]: INFO : Ignition finished successfully
Sep 12 00:17:18.641130 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 00:17:18.642031 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 00:17:18.642209 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 00:17:18.645009 systemd[1]: Stopped target network.target - Network.
Sep 12 00:17:18.645752 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 00:17:18.645816 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 00:17:18.647908 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 00:17:18.647963 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 00:17:18.649918 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 00:17:18.649979 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 00:17:18.652175 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 00:17:18.652252 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 00:17:18.652933 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 00:17:18.657461 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 00:17:18.669263 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 00:17:18.669486 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 00:17:18.674039 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 00:17:18.674670 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 00:17:18.674813 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 00:17:18.682217 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 00:17:18.683137 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 00:17:18.683857 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 00:17:18.683919 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 00:17:18.689407 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 00:17:18.690482 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 00:17:18.690558 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 00:17:18.693449 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 00:17:18.693536 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 00:17:19.566336 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 00:17:19.566496 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 00:17:19.567106 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 00:17:19.567161 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 00:17:19.571031 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 00:17:19.572954 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 00:17:19.573032 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 00:17:19.575947 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 00:17:19.582641 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 00:17:19.585227 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 00:17:19.585541 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 00:17:19.590087 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 00:17:19.590175 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 00:17:19.591097 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 00:17:19.591147 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 00:17:19.591451 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 00:17:19.591526 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 00:17:19.592292 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 00:17:19.592349 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 00:17:19.599338 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 00:17:19.599610 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 00:17:19.601550 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 00:17:19.601627 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 00:17:19.613613 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 00:17:19.615823 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 00:17:19.615932 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 00:17:19.619659 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 00:17:19.619754 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 00:17:19.623258 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 00:17:19.623316 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 00:17:19.626991 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 00:17:19.627056 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 00:17:19.627908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 00:17:19.627964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 00:17:19.635458 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 00:17:19.636910 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 12 00:17:19.636972 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 00:17:19.638486 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 00:17:19.641559 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 00:17:19.643284 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 00:17:19.653672 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 00:17:19.653810 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 00:17:19.654985 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 00:17:19.660574 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 00:17:19.693823 systemd[1]: Switching root.
Sep 12 00:17:19.742167 systemd-journald[220]: Journal stopped
Sep 12 00:17:21.688351 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 12 00:17:21.688423 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 00:17:21.691191 kernel: SELinux: policy capability open_perms=1
Sep 12 00:17:21.691204 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 00:17:21.691223 kernel: SELinux: policy capability always_check_network=0
Sep 12 00:17:21.691235 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 00:17:21.691247 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 00:17:21.691258 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 00:17:21.691277 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 00:17:21.691288 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 00:17:21.691305 kernel: audit: type=1403 audit(1757636240.724:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 00:17:21.691318 systemd[1]: Successfully loaded SELinux policy in 63.035ms.
Sep 12 00:17:21.691341 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.707ms.
Sep 12 00:17:21.691359 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 00:17:21.691372 systemd[1]: Detected virtualization kvm.
Sep 12 00:17:21.691384 systemd[1]: Detected architecture x86-64.
Sep 12 00:17:21.691396 systemd[1]: Detected first boot.
Sep 12 00:17:21.691408 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 00:17:21.691440 zram_generator::config[1134]: No configuration found.
Sep 12 00:17:21.691454 kernel: Guest personality initialized and is inactive
Sep 12 00:17:21.691465 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 00:17:21.691481 kernel: Initialized host personality
Sep 12 00:17:21.691494 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 00:17:21.691505 systemd[1]: Populated /etc with preset unit settings.
Sep 12 00:17:21.691518 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 00:17:21.691535 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 00:17:21.691547 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 00:17:21.691559 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 00:17:21.691571 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 00:17:21.691584 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 00:17:21.691598 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 00:17:21.691610 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 00:17:21.691626 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 00:17:21.691638 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 00:17:21.691653 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 00:17:21.691665 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 00:17:21.691677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 00:17:21.691690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 00:17:21.691703 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 00:17:21.691717 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 00:17:21.691730 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 00:17:21.691742 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 00:17:21.691754 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 00:17:21.691766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 00:17:21.691778 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 00:17:21.691790 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 00:17:21.691805 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 00:17:21.691824 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 00:17:21.691838 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 00:17:21.691850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 00:17:21.691862 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 00:17:21.691874 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 00:17:21.691886 systemd[1]: Reached target swap.target - Swaps.
Sep 12 00:17:21.691898 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 00:17:21.691911 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 00:17:21.691927 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 00:17:21.691939 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 00:17:21.691952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 00:17:21.691964 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 00:17:21.691976 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 00:17:21.691988 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 00:17:21.692005 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 00:17:21.692017 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 00:17:21.692030 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 00:17:21.692048 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 00:17:21.692060 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 00:17:21.692073 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 00:17:21.692085 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 00:17:21.692097 systemd[1]: Reached target machines.target - Containers.
Sep 12 00:17:21.692109 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 00:17:21.692122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 00:17:21.692136 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 00:17:21.692151 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 00:17:21.692166 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 00:17:21.692178 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 00:17:21.692190 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 00:17:21.692204 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 00:17:21.692217 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 00:17:21.692229 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 00:17:21.692241 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 00:17:21.692253 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 00:17:21.692268 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 00:17:21.692280 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 00:17:21.692293 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 00:17:21.692305 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 00:17:21.692317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 00:17:21.692329 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 00:17:21.692341 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 00:17:21.692378 systemd-journald[1198]: Collecting audit messages is disabled.
Sep 12 00:17:21.692406 kernel: loop: module loaded
Sep 12 00:17:21.692418 kernel: fuse: init (API version 7.41)
Sep 12 00:17:21.692444 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 00:17:21.692458 systemd-journald[1198]: Journal started
Sep 12 00:17:21.692486 systemd-journald[1198]: Runtime Journal (/run/log/journal/5f5e4f88934d4c20acd93393cfc8a15a) is 6M, max 48.4M, 42.4M free.
Sep 12 00:17:21.343557 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 00:17:21.369668 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 00:17:21.370151 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 00:17:21.695903 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 00:17:21.697951 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 00:17:21.697984 systemd[1]: Stopped verity-setup.service.
Sep 12 00:17:21.701477 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 00:17:21.706831 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 00:17:21.707716 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 00:17:21.708906 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 00:17:21.710195 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 00:17:21.711326 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 00:17:21.712570 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 00:17:21.713958 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 00:17:21.715255 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 00:17:21.716895 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 00:17:21.717126 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 00:17:21.718590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 00:17:21.718810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 00:17:21.720292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 00:17:21.720530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 00:17:21.722093 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 00:17:21.722321 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 00:17:21.723682 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 00:17:21.723904 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 00:17:21.725663 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 00:17:21.727828 kernel: ACPI: bus type drm_connector registered Sep 12 00:17:21.727849 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 00:17:21.729624 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 00:17:21.729848 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 00:17:21.731639 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 00:17:21.733288 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 00:17:21.748140 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 00:17:21.750667 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 00:17:21.753142 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 00:17:21.754467 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 00:17:21.754499 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 00:17:21.756592 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 00:17:21.760550 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 00:17:21.761743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 00:17:21.762995 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 00:17:21.765418 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 00:17:21.768004 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 00:17:21.770631 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 00:17:21.772957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 00:17:21.774559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 00:17:21.778575 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 00:17:21.780713 systemd-journald[1198]: Time spent on flushing to /var/log/journal/5f5e4f88934d4c20acd93393cfc8a15a is 32.321ms for 1072 entries. Sep 12 00:17:21.780713 systemd-journald[1198]: System Journal (/var/log/journal/5f5e4f88934d4c20acd93393cfc8a15a) is 8M, max 195.6M, 187.6M free. Sep 12 00:17:21.997290 systemd-journald[1198]: Received client request to flush runtime journal. Sep 12 00:17:21.997379 kernel: loop0: detected capacity change from 0 to 128016 Sep 12 00:17:21.997411 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 00:17:21.781758 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 00:17:21.785697 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 00:17:21.837358 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 00:17:21.863831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 00:17:21.867507 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 12 00:17:21.945193 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Sep 12 00:17:21.945213 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Sep 12 00:17:21.951125 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 00:17:21.958671 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 00:17:21.960661 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 00:17:21.963866 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 00:17:22.000704 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 00:17:22.002682 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 00:17:22.007141 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 00:17:22.008469 kernel: loop1: detected capacity change from 0 to 224512 Sep 12 00:17:22.011737 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 00:17:22.038476 kernel: loop2: detected capacity change from 0 to 111000 Sep 12 00:17:22.065280 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 00:17:22.068333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 00:17:22.080471 kernel: loop3: detected capacity change from 0 to 128016 Sep 12 00:17:22.203486 kernel: loop4: detected capacity change from 0 to 224512 Sep 12 00:17:22.215822 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Sep 12 00:17:22.215847 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Sep 12 00:17:22.219453 kernel: loop5: detected capacity change from 0 to 111000 Sep 12 00:17:22.224974 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 00:17:22.230012 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 00:17:22.230625 (sd-merge)[1276]: Merged extensions into '/usr'. Sep 12 00:17:22.236210 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 00:17:22.236226 systemd[1]: Reloading... Sep 12 00:17:22.326498 zram_generator::config[1304]: No configuration found. Sep 12 00:17:22.574456 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 00:17:22.587925 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 00:17:22.588531 systemd[1]: Reloading finished in 351 ms. Sep 12 00:17:22.617618 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 00:17:22.619146 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 00:17:22.635315 systemd[1]: Starting ensure-sysext.service... Sep 12 00:17:22.637884 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 00:17:22.686144 systemd[1]: Reload requested from client PID 1341 ('systemctl') (unit ensure-sysext.service)... Sep 12 00:17:22.686263 systemd[1]: Reloading... Sep 12 00:17:22.693863 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 00:17:22.693932 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 00:17:22.694264 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 00:17:22.694569 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 00:17:22.695513 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 12 00:17:22.696017 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Sep 12 00:17:22.696123 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Sep 12 00:17:22.700691 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 00:17:22.700705 systemd-tmpfiles[1342]: Skipping /boot Sep 12 00:17:22.712805 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 00:17:22.712910 systemd-tmpfiles[1342]: Skipping /boot Sep 12 00:17:22.771454 zram_generator::config[1375]: No configuration found. Sep 12 00:17:22.977491 systemd[1]: Reloading finished in 290 ms. Sep 12 00:17:22.991795 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 00:17:23.016682 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 00:17:23.026792 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 00:17:23.030107 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 00:17:23.032985 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 00:17:23.047725 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 00:17:23.052671 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 00:17:23.058509 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 00:17:23.064238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:17:23.064523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 00:17:23.066813 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 12 00:17:23.077814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 00:17:23.080529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 00:17:23.081962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 00:17:23.082112 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 00:17:23.085786 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 00:17:23.087012 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:17:23.088848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 00:17:23.092574 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 00:17:23.096057 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 00:17:23.099908 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 00:17:23.100204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 00:17:23.100586 systemd-udevd[1412]: Using default interface naming scheme 'v255'. Sep 12 00:17:23.114829 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 00:17:23.115874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 00:17:23.121396 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 00:17:23.121875 augenrules[1441]: No rules Sep 12 00:17:23.124152 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 00:17:23.124567 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Sep 12 00:17:23.136927 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:17:23.141672 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 00:17:23.142826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 00:17:23.145963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 00:17:23.157262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 00:17:23.160655 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 00:17:23.168784 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 00:17:23.171250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 00:17:23.171374 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 00:17:23.176844 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 00:17:23.179551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:17:23.180888 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 00:17:23.184534 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 00:17:23.187044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 00:17:23.187334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 00:17:23.198061 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 12 00:17:23.199621 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 00:17:23.202175 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 00:17:23.203698 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 00:17:23.206668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 00:17:23.207038 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 00:17:23.218641 systemd[1]: Finished ensure-sysext.service. Sep 12 00:17:23.220752 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 00:17:23.238139 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 00:17:23.239411 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 00:17:23.239505 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 00:17:23.244287 augenrules[1449]: /sbin/augenrules: No change Sep 12 00:17:23.249708 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 00:17:23.251503 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 00:17:23.252037 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 00:17:23.264625 augenrules[1511]: No rules Sep 12 00:17:23.266687 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 00:17:23.267011 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 00:17:23.286749 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Sep 12 00:17:23.344652 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 00:17:23.396498 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 00:17:23.403468 kernel: ACPI: button: Power Button [PWRF] Sep 12 00:17:23.405878 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 00:17:23.411554 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 00:17:23.425452 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 12 00:17:23.425712 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 00:17:23.425887 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 00:17:23.433347 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 00:17:23.445873 systemd-resolved[1411]: Positive Trust Anchors: Sep 12 00:17:23.446258 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 00:17:23.446295 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 00:17:23.451575 systemd-resolved[1411]: Defaulting to hostname 'linux'. Sep 12 00:17:23.455354 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 00:17:23.456965 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 12 00:17:23.463476 systemd-networkd[1499]: lo: Link UP Sep 12 00:17:23.463487 systemd-networkd[1499]: lo: Gained carrier Sep 12 00:17:23.466803 systemd-networkd[1499]: Enumeration completed Sep 12 00:17:23.466919 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 00:17:23.468164 systemd[1]: Reached target network.target - Network. Sep 12 00:17:23.469486 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 00:17:23.469497 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 00:17:23.471800 systemd-networkd[1499]: eth0: Link UP Sep 12 00:17:23.471959 systemd-networkd[1499]: eth0: Gained carrier Sep 12 00:17:23.471977 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 00:17:23.472112 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 00:17:23.475555 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 00:17:23.516513 systemd-networkd[1499]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 00:17:23.521522 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 00:17:23.523034 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 00:17:23.524647 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 00:17:24.173244 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 00:17:24.173315 systemd-timesyncd[1501]: Initial clock synchronization to Fri 2025-09-12 00:17:24.172939 UTC. Sep 12 00:17:24.173655 systemd-resolved[1411]: Clock change detected. Flushing caches. 
Sep 12 00:17:24.173690 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 00:17:24.174971 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 00:17:24.176620 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 00:17:24.177981 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 00:17:24.178017 systemd[1]: Reached target paths.target - Path Units. Sep 12 00:17:24.178918 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 00:17:24.180125 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 00:17:24.181260 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 00:17:24.182527 systemd[1]: Reached target timers.target - Timer Units. Sep 12 00:17:24.184340 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 00:17:24.194637 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 00:17:24.200826 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 00:17:24.202242 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 00:17:24.203487 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 00:17:24.213544 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 00:17:24.263345 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 00:17:24.266076 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 00:17:24.267993 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Sep 12 00:17:24.281472 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 00:17:24.282658 systemd[1]: Reached target basic.target - Basic System. Sep 12 00:17:24.283764 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 00:17:24.283904 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 00:17:24.285516 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 00:17:24.287783 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 00:17:24.309576 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 00:17:24.315888 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 00:17:24.324132 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 00:17:24.325509 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 00:17:24.347478 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 00:17:24.351547 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 00:17:24.352970 jq[1558]: false Sep 12 00:17:24.355458 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 12 00:17:24.366297 kernel: kvm_amd: TSC scaling supported Sep 12 00:17:24.366350 kernel: kvm_amd: Nested Virtualization enabled Sep 12 00:17:24.366365 kernel: kvm_amd: Nested Paging enabled Sep 12 00:17:24.367279 kernel: kvm_amd: LBR virtualization supported Sep 12 00:17:24.367297 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 00:17:24.368311 kernel: kvm_amd: Virtual GIF supported Sep 12 00:17:24.369819 extend-filesystems[1559]: Found /dev/vda6 Sep 12 00:17:24.371818 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 00:17:24.381130 extend-filesystems[1559]: Found /dev/vda9 Sep 12 00:17:24.382485 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 00:17:24.387717 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing passwd entry cache Sep 12 00:17:24.387730 oslogin_cache_refresh[1560]: Refreshing passwd entry cache Sep 12 00:17:24.388288 extend-filesystems[1559]: Checking size of /dev/vda9 Sep 12 00:17:24.426210 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 00:17:24.429268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 00:17:24.431773 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 00:17:24.432604 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 00:17:24.434640 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 00:17:24.437367 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting users, quitting Sep 12 00:17:24.437367 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 12 00:17:24.437367 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing group entry cache Sep 12 00:17:24.435359 oslogin_cache_refresh[1560]: Failure getting users, quitting Sep 12 00:17:24.435397 oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 00:17:24.435463 oslogin_cache_refresh[1560]: Refreshing group entry cache Sep 12 00:17:24.439182 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 00:17:24.452142 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting groups, quitting Sep 12 00:17:24.452142 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 00:17:24.449639 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 00:17:24.446209 oslogin_cache_refresh[1560]: Failure getting groups, quitting Sep 12 00:17:24.451927 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 00:17:24.446221 oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 00:17:24.452253 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 00:17:24.453043 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 00:17:24.454418 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 00:17:24.456502 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 00:17:24.456810 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 00:17:24.462186 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 00:17:24.462510 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 12 00:17:24.467158 update_engine[1581]: I20250912 00:17:24.467053 1581 main.cc:92] Flatcar Update Engine starting Sep 12 00:17:24.470128 kernel: EDAC MC: Ver: 3.0.0 Sep 12 00:17:24.486025 jq[1582]: true Sep 12 00:17:24.488464 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 00:17:24.509001 dbus-daemon[1555]: [system] SELinux support is enabled Sep 12 00:17:24.515747 update_engine[1581]: I20250912 00:17:24.515547 1581 update_check_scheduler.cc:74] Next update check in 11m49s Sep 12 00:17:24.515876 extend-filesystems[1559]: Resized partition /dev/vda9 Sep 12 00:17:24.524046 extend-filesystems[1603]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 00:17:24.525610 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 00:17:24.528072 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 00:17:24.531971 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 00:17:24.532287 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 00:17:24.535901 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 00:17:24.548518 tar[1585]: linux-amd64/LICENSE Sep 12 00:17:24.545900 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 00:17:24.546048 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 00:17:24.549235 tar[1585]: linux-amd64/helm Sep 12 00:17:24.556451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 00:17:24.557879 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 00:17:24.557904 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 00:17:24.560663 systemd[1]: Started update-engine.service - Update Engine. Sep 12 00:17:24.563096 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 00:17:24.563144 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 00:17:24.565842 systemd-logind[1577]: New seat seat0. Sep 12 00:17:24.566292 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 00:17:24.575179 jq[1600]: true Sep 12 00:17:24.577516 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 00:17:24.623264 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 00:17:24.704806 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 00:17:24.743664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 00:17:24.985851 extend-filesystems[1603]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 00:17:24.985851 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 00:17:24.985851 extend-filesystems[1603]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 00:17:24.990654 extend-filesystems[1559]: Resized filesystem in /dev/vda9 Sep 12 00:17:24.988250 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 00:17:24.993327 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 00:17:24.988593 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 12 00:17:25.021326 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Sep 12 00:17:25.023361 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 00:17:25.040068 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 00:17:25.050080 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 00:17:25.056179 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 00:17:25.137463 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 00:17:25.137885 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 00:17:25.139436 systemd-networkd[1499]: eth0: Gained IPv6LL Sep 12 00:17:25.142426 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 00:17:25.147286 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 00:17:25.150199 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 00:17:25.155561 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 00:17:25.162506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:25.166728 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 00:17:25.206161 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 00:17:25.211077 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 00:17:25.219930 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Sep 12 00:17:25.220161 containerd[1593]: time="2025-09-12T00:17:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 00:17:25.221746 containerd[1593]: time="2025-09-12T00:17:25.221703329Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 00:17:25.222076 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 00:17:25.293416 containerd[1593]: time="2025-09-12T00:17:25.293264417Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.948µs" Sep 12 00:17:25.293416 containerd[1593]: time="2025-09-12T00:17:25.293331773Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 00:17:25.293416 containerd[1593]: time="2025-09-12T00:17:25.293354606Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 00:17:25.293647 containerd[1593]: time="2025-09-12T00:17:25.293620304Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 00:17:25.293647 containerd[1593]: time="2025-09-12T00:17:25.293643518Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 00:17:25.293713 containerd[1593]: time="2025-09-12T00:17:25.293673384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 00:17:25.293776 containerd[1593]: time="2025-09-12T00:17:25.293749456Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 00:17:25.293776 containerd[1593]: time="2025-09-12T00:17:25.293768422Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 00:17:25.294185 containerd[1593]: time="2025-09-12T00:17:25.294154957Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 00:17:25.294185 containerd[1593]: time="2025-09-12T00:17:25.294177069Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 00:17:25.294256 containerd[1593]: time="2025-09-12T00:17:25.294220250Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 00:17:25.294256 containerd[1593]: time="2025-09-12T00:17:25.294234677Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 00:17:25.294384 containerd[1593]: time="2025-09-12T00:17:25.294358569Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 00:17:25.294688 containerd[1593]: time="2025-09-12T00:17:25.294660626Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 00:17:25.294724 containerd[1593]: time="2025-09-12T00:17:25.294701282Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 00:17:25.294724 containerd[1593]: time="2025-09-12T00:17:25.294711942Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 00:17:25.294782 containerd[1593]: time="2025-09-12T00:17:25.294772455Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 00:17:25.295340 containerd[1593]: time="2025-09-12T00:17:25.295305856Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 00:17:25.295434 containerd[1593]: time="2025-09-12T00:17:25.295398950Z" level=info msg="metadata content store policy set" policy=shared Sep 12 00:17:25.307891 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 00:17:25.309581 containerd[1593]: time="2025-09-12T00:17:25.309523534Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 00:17:25.309668 containerd[1593]: time="2025-09-12T00:17:25.309645102Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 00:17:25.310299 containerd[1593]: time="2025-09-12T00:17:25.309672714Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 00:17:25.310349 containerd[1593]: time="2025-09-12T00:17:25.310299810Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 00:17:25.310349 containerd[1593]: time="2025-09-12T00:17:25.310321992Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 00:17:25.310349 containerd[1593]: time="2025-09-12T00:17:25.310336178Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 00:17:25.310440 containerd[1593]: time="2025-09-12T00:17:25.310354252Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 00:17:25.310440 containerd[1593]: time="2025-09-12T00:17:25.310375462Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 00:17:25.310440 containerd[1593]: 
time="2025-09-12T00:17:25.310392474Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 00:17:25.310440 containerd[1593]: time="2025-09-12T00:17:25.310417501Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 00:17:25.310440 containerd[1593]: time="2025-09-12T00:17:25.310430726Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 00:17:25.310605 containerd[1593]: time="2025-09-12T00:17:25.310446706Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 00:17:25.310682 containerd[1593]: time="2025-09-12T00:17:25.310650037Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 00:17:25.310729 containerd[1593]: time="2025-09-12T00:17:25.310684502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 00:17:25.310729 containerd[1593]: time="2025-09-12T00:17:25.310703367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 00:17:25.310729 containerd[1593]: time="2025-09-12T00:17:25.310721772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 00:17:25.310816 containerd[1593]: time="2025-09-12T00:17:25.310735357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 00:17:25.310816 containerd[1593]: time="2025-09-12T00:17:25.310749634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 00:17:25.310816 containerd[1593]: time="2025-09-12T00:17:25.310764612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 00:17:25.310816 containerd[1593]: time="2025-09-12T00:17:25.310778368Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 00:17:25.310816 containerd[1593]: time="2025-09-12T00:17:25.310809717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 00:17:25.310944 containerd[1593]: time="2025-09-12T00:17:25.310827580Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 00:17:25.310944 containerd[1593]: time="2025-09-12T00:17:25.310841917Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 00:17:25.310994 containerd[1593]: time="2025-09-12T00:17:25.310970618Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 00:17:25.311019 containerd[1593]: time="2025-09-12T00:17:25.310994203Z" level=info msg="Start snapshots syncer" Sep 12 00:17:25.311905 containerd[1593]: time="2025-09-12T00:17:25.311866789Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 00:17:25.312539 containerd[1593]: time="2025-09-12T00:17:25.312474970Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 00:17:25.312722 containerd[1593]: time="2025-09-12T00:17:25.312572353Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 00:17:25.313373 containerd[1593]: time="2025-09-12T00:17:25.313338881Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 00:17:25.313605 containerd[1593]: time="2025-09-12T00:17:25.313572739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 00:17:25.313655 containerd[1593]: time="2025-09-12T00:17:25.313610540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 00:17:25.313655 containerd[1593]: time="2025-09-12T00:17:25.313625338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 00:17:25.313655 containerd[1593]: time="2025-09-12T00:17:25.313638493Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 00:17:25.313719 containerd[1593]: time="2025-09-12T00:17:25.313669681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 00:17:25.313719 containerd[1593]: time="2025-09-12T00:17:25.313691873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 00:17:25.313719 containerd[1593]: time="2025-09-12T00:17:25.313706400Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 00:17:25.313781 containerd[1593]: time="2025-09-12T00:17:25.313740574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 00:17:25.313781 containerd[1593]: time="2025-09-12T00:17:25.313755502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 00:17:25.313781 containerd[1593]: time="2025-09-12T00:17:25.313769639Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 00:17:25.313965 containerd[1593]: time="2025-09-12T00:17:25.313938836Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 00:17:25.313995 containerd[1593]: time="2025-09-12T00:17:25.313965917Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 00:17:25.314074 containerd[1593]: time="2025-09-12T00:17:25.314051688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 00:17:25.314116 containerd[1593]: time="2025-09-12T00:17:25.314078328Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 00:17:25.314116 containerd[1593]: time="2025-09-12T00:17:25.314088376Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 00:17:25.314155 containerd[1593]: time="2025-09-12T00:17:25.314117491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 00:17:25.314155 containerd[1593]: time="2025-09-12T00:17:25.314133070Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 00:17:25.314192 containerd[1593]: time="2025-09-12T00:17:25.314157687Z" level=info msg="runtime interface created" Sep 12 00:17:25.314192 containerd[1593]: time="2025-09-12T00:17:25.314164930Z" level=info msg="created NRI interface" Sep 12 00:17:25.314192 containerd[1593]: time="2025-09-12T00:17:25.314173576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 00:17:25.314192 containerd[1593]: time="2025-09-12T00:17:25.314191300Z" level=info msg="Connect containerd service" Sep 12 00:17:25.314279 containerd[1593]: time="2025-09-12T00:17:25.314222318Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 00:17:25.315754 
containerd[1593]: time="2025-09-12T00:17:25.315719917Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 00:17:25.321074 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 00:17:25.321615 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 00:17:25.364522 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 00:17:25.639292 containerd[1593]: time="2025-09-12T00:17:25.639164637Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 00:17:25.639292 containerd[1593]: time="2025-09-12T00:17:25.639254786Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 00:17:25.639292 containerd[1593]: time="2025-09-12T00:17:25.639294430Z" level=info msg="Start subscribing containerd event" Sep 12 00:17:25.639538 containerd[1593]: time="2025-09-12T00:17:25.639356777Z" level=info msg="Start recovering state" Sep 12 00:17:25.639538 containerd[1593]: time="2025-09-12T00:17:25.639500577Z" level=info msg="Start event monitor" Sep 12 00:17:25.639538 containerd[1593]: time="2025-09-12T00:17:25.639514202Z" level=info msg="Start cni network conf syncer for default" Sep 12 00:17:25.639538 containerd[1593]: time="2025-09-12T00:17:25.639524261Z" level=info msg="Start streaming server" Sep 12 00:17:25.639538 containerd[1593]: time="2025-09-12T00:17:25.639542445Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 00:17:25.639645 containerd[1593]: time="2025-09-12T00:17:25.639550180Z" level=info msg="runtime interface starting up..." Sep 12 00:17:25.639645 containerd[1593]: time="2025-09-12T00:17:25.639556341Z" level=info msg="starting plugins..." 
Sep 12 00:17:25.639645 containerd[1593]: time="2025-09-12T00:17:25.639570969Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 00:17:25.639825 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 00:17:25.642193 containerd[1593]: time="2025-09-12T00:17:25.642151479Z" level=info msg="containerd successfully booted in 0.424493s" Sep 12 00:17:25.777021 tar[1585]: linux-amd64/README.md Sep 12 00:17:25.805305 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 00:17:26.547332 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 00:17:26.549770 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:46220.service - OpenSSH per-connection server daemon (10.0.0.1:46220). Sep 12 00:17:26.653927 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 46220 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA Sep 12 00:17:26.656704 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:17:26.664709 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 00:17:26.667499 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 00:17:26.678500 systemd-logind[1577]: New session 1 of user core. Sep 12 00:17:26.712947 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 00:17:26.717952 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 00:17:26.834856 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 00:17:26.837527 systemd-logind[1577]: New session c1 of user core. Sep 12 00:17:27.039068 systemd[1701]: Queued start job for default target default.target. Sep 12 00:17:27.055430 systemd[1701]: Created slice app.slice - User Application Slice. Sep 12 00:17:27.055456 systemd[1701]: Reached target paths.target - Paths. 
Sep 12 00:17:27.055496 systemd[1701]: Reached target timers.target - Timers. Sep 12 00:17:27.057449 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 00:17:27.069516 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 00:17:27.069677 systemd[1701]: Reached target sockets.target - Sockets. Sep 12 00:17:27.069736 systemd[1701]: Reached target basic.target - Basic System. Sep 12 00:17:27.069794 systemd[1701]: Reached target default.target - Main User Target. Sep 12 00:17:27.069839 systemd[1701]: Startup finished in 220ms. Sep 12 00:17:27.071897 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 00:17:27.074570 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 00:17:27.146700 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:46228.service - OpenSSH per-connection server daemon (10.0.0.1:46228). Sep 12 00:17:27.214580 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 46228 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA Sep 12 00:17:27.216420 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:17:27.221293 systemd-logind[1577]: New session 2 of user core. Sep 12 00:17:27.222226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:27.248277 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 00:17:27.249842 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 00:17:27.249933 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 00:17:27.251627 systemd[1]: Startup finished in 3.897s (kernel) + 8.074s (initrd) + 5.940s (userspace) = 17.912s. 
Sep 12 00:17:27.307820 sshd[1721]: Connection closed by 10.0.0.1 port 46228 Sep 12 00:17:27.308187 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Sep 12 00:17:27.318738 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:46228.service: Deactivated successfully. Sep 12 00:17:27.320485 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 00:17:27.321185 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. Sep 12 00:17:27.323721 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:46234.service - OpenSSH per-connection server daemon (10.0.0.1:46234). Sep 12 00:17:27.324578 systemd-logind[1577]: Removed session 2. Sep 12 00:17:27.392576 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 46234 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA Sep 12 00:17:27.394498 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:17:27.400298 systemd-logind[1577]: New session 3 of user core. Sep 12 00:17:27.453393 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 00:17:27.503444 sshd[1739]: Connection closed by 10.0.0.1 port 46234 Sep 12 00:17:27.503763 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Sep 12 00:17:27.514152 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:46234.service: Deactivated successfully. Sep 12 00:17:27.516482 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 00:17:27.517396 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Sep 12 00:17:27.520553 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:46238.service - OpenSSH per-connection server daemon (10.0.0.1:46238). Sep 12 00:17:27.521834 systemd-logind[1577]: Removed session 3. 
Sep 12 00:17:27.635490 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 46238 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA Sep 12 00:17:27.637383 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:17:27.642802 systemd-logind[1577]: New session 4 of user core. Sep 12 00:17:27.654247 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 00:17:27.712191 sshd[1748]: Connection closed by 10.0.0.1 port 46238 Sep 12 00:17:27.712570 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Sep 12 00:17:27.726746 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:46238.service: Deactivated successfully. Sep 12 00:17:27.728978 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 00:17:27.730916 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Sep 12 00:17:27.733680 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:46242.service - OpenSSH per-connection server daemon (10.0.0.1:46242). Sep 12 00:17:27.734578 systemd-logind[1577]: Removed session 4. Sep 12 00:17:27.863297 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 46242 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA Sep 12 00:17:27.865246 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:17:27.870243 systemd-logind[1577]: New session 5 of user core. Sep 12 00:17:27.884442 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 00:17:27.970541 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 00:17:27.970963 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:17:27.978987 kubelet[1719]: E0912 00:17:27.978813 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 00:17:27.983838 sudo[1759]: pam_unix(sudo:session): session closed for user root Sep 12 00:17:27.985606 sshd[1758]: Connection closed by 10.0.0.1 port 46242 Sep 12 00:17:27.985986 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Sep 12 00:17:27.986275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 00:17:27.986528 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 00:17:27.986993 systemd[1]: kubelet.service: Consumed 2.453s CPU time, 266M memory peak. Sep 12 00:17:27.995925 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:46242.service: Deactivated successfully. Sep 12 00:17:27.997877 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 00:17:27.998692 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Sep 12 00:17:28.001895 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:46248.service - OpenSSH per-connection server daemon (10.0.0.1:46248). Sep 12 00:17:28.002624 systemd-logind[1577]: Removed session 5. Sep 12 00:17:28.059611 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 46248 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA Sep 12 00:17:28.061007 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:17:28.065547 systemd-logind[1577]: New session 6 of user core. 
Sep 12 00:17:28.075221 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 00:17:28.128376 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 00:17:28.128847 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:17:28.329846 sudo[1771]: pam_unix(sudo:session): session closed for user root Sep 12 00:17:28.336897 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 00:17:28.337232 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:17:28.347544 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 00:17:28.397170 augenrules[1793]: No rules Sep 12 00:17:28.398677 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 00:17:28.398954 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 00:17:28.400028 sudo[1770]: pam_unix(sudo:session): session closed for user root Sep 12 00:17:28.401496 sshd[1769]: Connection closed by 10.0.0.1 port 46248 Sep 12 00:17:28.401831 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Sep 12 00:17:28.419383 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:46248.service: Deactivated successfully. Sep 12 00:17:28.421507 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 00:17:28.422290 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Sep 12 00:17:28.425539 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:46260.service - OpenSSH per-connection server daemon (10.0.0.1:46260). Sep 12 00:17:28.426248 systemd-logind[1577]: Removed session 6. 
Sep 12 00:17:28.488957 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 46260 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA Sep 12 00:17:28.490772 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:17:28.495432 systemd-logind[1577]: New session 7 of user core. Sep 12 00:17:28.509288 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 00:17:28.563462 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 00:17:28.563781 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:17:29.080866 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 00:17:29.108500 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 00:17:29.708053 dockerd[1826]: time="2025-09-12T00:17:29.707960144Z" level=info msg="Starting up" Sep 12 00:17:29.709356 dockerd[1826]: time="2025-09-12T00:17:29.709316037Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 00:17:29.741892 dockerd[1826]: time="2025-09-12T00:17:29.741820526Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 00:17:30.356156 dockerd[1826]: time="2025-09-12T00:17:30.356052199Z" level=info msg="Loading containers: start." Sep 12 00:17:30.368135 kernel: Initializing XFRM netlink socket Sep 12 00:17:30.690924 systemd-networkd[1499]: docker0: Link UP Sep 12 00:17:30.697059 dockerd[1826]: time="2025-09-12T00:17:30.697009558Z" level=info msg="Loading containers: done." Sep 12 00:17:30.713919 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2459622714-merged.mount: Deactivated successfully. 
Sep 12 00:17:30.716540 dockerd[1826]: time="2025-09-12T00:17:30.716480139Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 00:17:30.716838 dockerd[1826]: time="2025-09-12T00:17:30.716638376Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 00:17:30.716838 dockerd[1826]: time="2025-09-12T00:17:30.716784951Z" level=info msg="Initializing buildkit" Sep 12 00:17:30.823875 dockerd[1826]: time="2025-09-12T00:17:30.823819237Z" level=info msg="Completed buildkit initialization" Sep 12 00:17:30.829930 dockerd[1826]: time="2025-09-12T00:17:30.829880055Z" level=info msg="Daemon has completed initialization" Sep 12 00:17:30.830043 dockerd[1826]: time="2025-09-12T00:17:30.829948544Z" level=info msg="API listen on /run/docker.sock" Sep 12 00:17:30.830177 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 00:17:31.929313 containerd[1593]: time="2025-09-12T00:17:31.929216586Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 00:17:32.496706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084779770.mount: Deactivated successfully. 
Sep 12 00:17:34.156957 containerd[1593]: time="2025-09-12T00:17:34.156892540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:34.157720 containerd[1593]: time="2025-09-12T00:17:34.157662674Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 12 00:17:34.159088 containerd[1593]: time="2025-09-12T00:17:34.159030890Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:34.161801 containerd[1593]: time="2025-09-12T00:17:34.161741304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:34.162851 containerd[1593]: time="2025-09-12T00:17:34.162814067Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.233521007s" Sep 12 00:17:34.162887 containerd[1593]: time="2025-09-12T00:17:34.162858520Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 00:17:34.164139 containerd[1593]: time="2025-09-12T00:17:34.164089148Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 00:17:35.989134 containerd[1593]: time="2025-09-12T00:17:35.989031499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:35.989832 containerd[1593]: time="2025-09-12T00:17:35.989792416Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 12 00:17:35.991280 containerd[1593]: time="2025-09-12T00:17:35.991223801Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:35.995349 containerd[1593]: time="2025-09-12T00:17:35.995287453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:35.996808 containerd[1593]: time="2025-09-12T00:17:35.996755336Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.832616214s" Sep 12 00:17:35.996875 containerd[1593]: time="2025-09-12T00:17:35.996806142Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 12 00:17:35.997644 containerd[1593]: time="2025-09-12T00:17:35.997595783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 00:17:37.799275 containerd[1593]: time="2025-09-12T00:17:37.799200518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:37.799940 containerd[1593]: time="2025-09-12T00:17:37.799901383Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 12 00:17:37.801162 containerd[1593]: time="2025-09-12T00:17:37.801132041Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:37.803916 containerd[1593]: time="2025-09-12T00:17:37.803862062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:37.804704 containerd[1593]: time="2025-09-12T00:17:37.804671240Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.807042435s" Sep 12 00:17:37.804704 containerd[1593]: time="2025-09-12T00:17:37.804702328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 12 00:17:37.805237 containerd[1593]: time="2025-09-12T00:17:37.805205251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 00:17:38.086824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 00:17:38.089639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:38.385445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 00:17:38.397423 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 00:17:38.571126 kubelet[2122]: E0912 00:17:38.571061 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 00:17:38.577497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 00:17:38.577688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 00:17:38.578072 systemd[1]: kubelet.service: Consumed 366ms CPU time, 111.2M memory peak. Sep 12 00:17:39.422789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount50790384.mount: Deactivated successfully. Sep 12 00:17:40.296513 containerd[1593]: time="2025-09-12T00:17:40.296431019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:40.297444 containerd[1593]: time="2025-09-12T00:17:40.297406138Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 12 00:17:40.298500 containerd[1593]: time="2025-09-12T00:17:40.298456919Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:40.300585 containerd[1593]: time="2025-09-12T00:17:40.300546378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:40.301274 containerd[1593]: time="2025-09-12T00:17:40.301236793Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.495998951s" Sep 12 00:17:40.301313 containerd[1593]: time="2025-09-12T00:17:40.301275446Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 12 00:17:40.301913 containerd[1593]: time="2025-09-12T00:17:40.301840285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 00:17:40.913522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864131962.mount: Deactivated successfully. Sep 12 00:17:42.310978 containerd[1593]: time="2025-09-12T00:17:42.310874178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:42.312282 containerd[1593]: time="2025-09-12T00:17:42.312240551Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 00:17:42.313656 containerd[1593]: time="2025-09-12T00:17:42.313607044Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:42.316392 containerd[1593]: time="2025-09-12T00:17:42.316349048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:42.317234 containerd[1593]: time="2025-09-12T00:17:42.317175889Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.015292002s" Sep 12 00:17:42.317234 containerd[1593]: time="2025-09-12T00:17:42.317213860Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 00:17:42.317766 containerd[1593]: time="2025-09-12T00:17:42.317707696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 00:17:42.815997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656815991.mount: Deactivated successfully. Sep 12 00:17:42.822740 containerd[1593]: time="2025-09-12T00:17:42.822686552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 00:17:42.823524 containerd[1593]: time="2025-09-12T00:17:42.823492845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 00:17:42.824624 containerd[1593]: time="2025-09-12T00:17:42.824597236Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 00:17:42.826528 containerd[1593]: time="2025-09-12T00:17:42.826486460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 00:17:42.827115 containerd[1593]: time="2025-09-12T00:17:42.827053393Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 509.316482ms" Sep 12 00:17:42.827115 containerd[1593]: time="2025-09-12T00:17:42.827081987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 00:17:42.827535 containerd[1593]: time="2025-09-12T00:17:42.827489100Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 00:17:44.108939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122945399.mount: Deactivated successfully. Sep 12 00:17:45.682873 containerd[1593]: time="2025-09-12T00:17:45.682772816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:45.683498 containerd[1593]: time="2025-09-12T00:17:45.683439717Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 12 00:17:45.684667 containerd[1593]: time="2025-09-12T00:17:45.684626774Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:45.687484 containerd[1593]: time="2025-09-12T00:17:45.687412038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:45.688588 containerd[1593]: time="2025-09-12T00:17:45.688527721Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.861010438s" Sep 12 00:17:45.688588 containerd[1593]: time="2025-09-12T00:17:45.688565111Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 12 00:17:48.235700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:48.235925 systemd[1]: kubelet.service: Consumed 366ms CPU time, 111.2M memory peak. Sep 12 00:17:48.238290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:48.262974 systemd[1]: Reload requested from client PID 2279 ('systemctl') (unit session-7.scope)... Sep 12 00:17:48.262991 systemd[1]: Reloading... Sep 12 00:17:48.337124 zram_generator::config[2322]: No configuration found. Sep 12 00:17:48.732388 systemd[1]: Reloading finished in 468 ms. Sep 12 00:17:48.818317 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 00:17:48.818462 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 00:17:48.818936 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:48.818996 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.2M memory peak. Sep 12 00:17:48.821342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:49.004504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:49.024502 (kubelet)[2370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 00:17:49.068726 kubelet[2370]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 00:17:49.068726 kubelet[2370]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 00:17:49.068726 kubelet[2370]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 00:17:49.069215 kubelet[2370]: I0912 00:17:49.068804 2370 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 00:17:49.387039 kubelet[2370]: I0912 00:17:49.386973 2370 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 00:17:49.387039 kubelet[2370]: I0912 00:17:49.387014 2370 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 00:17:49.387336 kubelet[2370]: I0912 00:17:49.387311 2370 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 00:17:49.412079 kubelet[2370]: E0912 00:17:49.412039 2370 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:49.413331 kubelet[2370]: I0912 00:17:49.413298 2370 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 00:17:49.421208 kubelet[2370]: I0912 00:17:49.421168 2370 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 00:17:49.428075 kubelet[2370]: I0912 00:17:49.428028 
2370 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 00:17:49.428454 kubelet[2370]: I0912 00:17:49.428411 2370 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 00:17:49.428654 kubelet[2370]: I0912 00:17:49.428445 2370 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 00:17:49.429211 kubelet[2370]: I0912 
00:17:49.429190 2370 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 00:17:49.429211 kubelet[2370]: I0912 00:17:49.429208 2370 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 00:17:49.429421 kubelet[2370]: I0912 00:17:49.429395 2370 state_mem.go:36] "Initialized new in-memory state store" Sep 12 00:17:49.432001 kubelet[2370]: I0912 00:17:49.431960 2370 kubelet.go:446] "Attempting to sync node with API server" Sep 12 00:17:49.432001 kubelet[2370]: I0912 00:17:49.431995 2370 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 00:17:49.432081 kubelet[2370]: I0912 00:17:49.432030 2370 kubelet.go:352] "Adding apiserver pod source" Sep 12 00:17:49.432081 kubelet[2370]: I0912 00:17:49.432048 2370 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 00:17:49.433482 kubelet[2370]: W0912 00:17:49.433423 2370 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 12 00:17:49.433482 kubelet[2370]: E0912 00:17:49.433481 2370 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:49.434091 kubelet[2370]: W0912 00:17:49.434041 2370 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 12 00:17:49.434091 kubelet[2370]: E0912 00:17:49.434084 2370 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:49.435410 kubelet[2370]: I0912 00:17:49.435389 2370 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 00:17:49.435821 kubelet[2370]: I0912 00:17:49.435801 2370 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 00:17:49.435889 kubelet[2370]: W0912 00:17:49.435875 2370 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 00:17:49.437636 kubelet[2370]: I0912 00:17:49.437607 2370 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 00:17:49.437636 kubelet[2370]: I0912 00:17:49.437640 2370 server.go:1287] "Started kubelet" Sep 12 00:17:49.437911 kubelet[2370]: I0912 00:17:49.437848 2370 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 00:17:49.439217 kubelet[2370]: I0912 00:17:49.439045 2370 server.go:479] "Adding debug handlers to kubelet server" Sep 12 00:17:49.442082 kubelet[2370]: I0912 00:17:49.441517 2370 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 00:17:49.443250 kubelet[2370]: I0912 00:17:49.443226 2370 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 00:17:49.443696 kubelet[2370]: I0912 00:17:49.443664 2370 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 00:17:49.445318 kubelet[2370]: E0912 00:17:49.445290 2370 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 00:17:49.445388 kubelet[2370]: I0912 00:17:49.445370 2370 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 00:17:49.445507 kubelet[2370]: I0912 00:17:49.445487 2370 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 00:17:49.445566 kubelet[2370]: I0912 00:17:49.445549 2370 reconciler.go:26] "Reconciler: start to sync state" Sep 12 00:17:49.446059 kubelet[2370]: W0912 00:17:49.446022 2370 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 12 00:17:49.446113 kubelet[2370]: E0912 00:17:49.446071 2370 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:49.446439 kubelet[2370]: E0912 00:17:49.446412 2370 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 00:17:49.447351 kubelet[2370]: E0912 00:17:49.447317 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="200ms" Sep 12 00:17:49.447480 kubelet[2370]: I0912 00:17:49.447458 2370 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 00:17:49.448324 kubelet[2370]: E0912 00:17:49.443446 2370 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://10.0.0.88:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.88:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186460dfec2c17cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 00:17:49.437622221 +0000 UTC m=+0.409035435,LastTimestamp:2025-09-12 00:17:49.437622221 +0000 UTC m=+0.409035435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 00:17:49.448463 kubelet[2370]: I0912 00:17:49.448446 2370 factory.go:221] Registration of the containerd container factory successfully Sep 12 00:17:49.448463 kubelet[2370]: I0912 00:17:49.448461 2370 factory.go:221] Registration of the systemd container factory successfully Sep 12 00:17:49.448596 kubelet[2370]: I0912 00:17:49.448575 2370 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 00:17:49.462477 kubelet[2370]: I0912 00:17:49.462429 2370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 00:17:49.464354 kubelet[2370]: I0912 00:17:49.464332 2370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 00:17:49.464435 kubelet[2370]: I0912 00:17:49.464364 2370 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 00:17:49.464435 kubelet[2370]: I0912 00:17:49.464394 2370 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 00:17:49.464435 kubelet[2370]: I0912 00:17:49.464402 2370 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 00:17:49.464501 kubelet[2370]: E0912 00:17:49.464459 2370 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 00:17:49.464619 kubelet[2370]: I0912 00:17:49.464591 2370 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 00:17:49.464619 kubelet[2370]: I0912 00:17:49.464613 2370 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 00:17:49.464676 kubelet[2370]: I0912 00:17:49.464635 2370 state_mem.go:36] "Initialized new in-memory state store" Sep 12 00:17:49.465919 kubelet[2370]: W0912 00:17:49.465877 2370 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 12 00:17:49.466192 kubelet[2370]: E0912 00:17:49.466164 2370 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:49.547168 kubelet[2370]: E0912 00:17:49.547122 2370 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 00:17:49.565464 kubelet[2370]: E0912 00:17:49.565433 2370 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 00:17:49.648175 kubelet[2370]: E0912 00:17:49.647994 2370 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 00:17:49.649334 kubelet[2370]: E0912 00:17:49.649219 2370 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="400ms" Sep 12 00:17:49.748154 kubelet[2370]: E0912 00:17:49.748075 2370 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 00:17:49.766447 kubelet[2370]: E0912 00:17:49.766400 2370 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 00:17:49.849044 kubelet[2370]: E0912 00:17:49.848973 2370 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 00:17:49.922674 kubelet[2370]: I0912 00:17:49.922523 2370 policy_none.go:49] "None policy: Start" Sep 12 00:17:49.922674 kubelet[2370]: I0912 00:17:49.922572 2370 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 00:17:49.922674 kubelet[2370]: I0912 00:17:49.922592 2370 state_mem.go:35] "Initializing new in-memory state store" Sep 12 00:17:49.931983 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 00:17:49.949465 kubelet[2370]: E0912 00:17:49.949430 2370 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 00:17:49.952299 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 00:17:49.955835 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 00:17:49.975585 kubelet[2370]: I0912 00:17:49.975558 2370 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 00:17:49.975977 kubelet[2370]: I0912 00:17:49.975943 2370 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 00:17:49.976112 kubelet[2370]: I0912 00:17:49.975964 2370 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 00:17:49.976266 kubelet[2370]: I0912 00:17:49.976247 2370 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 00:17:49.977456 kubelet[2370]: E0912 00:17:49.977402 2370 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 00:17:49.977568 kubelet[2370]: E0912 00:17:49.977548 2370 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 00:17:50.049891 kubelet[2370]: E0912 00:17:50.049843 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="800ms" Sep 12 00:17:50.078379 kubelet[2370]: I0912 00:17:50.078316 2370 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 00:17:50.078922 kubelet[2370]: E0912 00:17:50.078890 2370 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Sep 12 00:17:50.177299 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
Sep 12 00:17:50.191090 kubelet[2370]: E0912 00:17:50.191042 2370 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:50.194554 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 12 00:17:50.209427 kubelet[2370]: E0912 00:17:50.209396 2370 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:50.212136 systemd[1]: Created slice kubepods-burstable-pod180f4175a7fd877fda925dfe851833aa.slice - libcontainer container kubepods-burstable-pod180f4175a7fd877fda925dfe851833aa.slice. Sep 12 00:17:50.214255 kubelet[2370]: E0912 00:17:50.214221 2370 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:50.236852 kubelet[2370]: W0912 00:17:50.236795 2370 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 12 00:17:50.236922 kubelet[2370]: E0912 00:17:50.236851 2370 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:50.249706 kubelet[2370]: I0912 00:17:50.249653 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/180f4175a7fd877fda925dfe851833aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"180f4175a7fd877fda925dfe851833aa\") " pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:50.249706 kubelet[2370]: I0912 00:17:50.249701 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:50.249815 kubelet[2370]: I0912 00:17:50.249736 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:50.249815 kubelet[2370]: I0912 00:17:50.249764 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:50.249815 kubelet[2370]: I0912 00:17:50.249784 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:50.249985 kubelet[2370]: I0912 00:17:50.249805 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/180f4175a7fd877fda925dfe851833aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"180f4175a7fd877fda925dfe851833aa\") " pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:50.249985 kubelet[2370]: I0912 00:17:50.249842 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/180f4175a7fd877fda925dfe851833aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"180f4175a7fd877fda925dfe851833aa\") " pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:50.249985 kubelet[2370]: I0912 00:17:50.249862 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:50.249985 kubelet[2370]: I0912 00:17:50.249917 2370 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:50.281176 kubelet[2370]: I0912 00:17:50.281092 2370 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 00:17:50.281606 kubelet[2370]: E0912 00:17:50.281562 2370 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Sep 12 00:17:50.292223 kubelet[2370]: W0912 00:17:50.292190 2370 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 12 00:17:50.292304 kubelet[2370]: E0912 00:17:50.292224 2370 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:50.458261 kubelet[2370]: W0912 00:17:50.458088 2370 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 12 00:17:50.458261 kubelet[2370]: E0912 00:17:50.458171 2370 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:50.491819 kubelet[2370]: E0912 00:17:50.491786 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:50.492480 containerd[1593]: time="2025-09-12T00:17:50.492444746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 12 00:17:50.510694 kubelet[2370]: E0912 00:17:50.510643 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:50.511147 containerd[1593]: 
time="2025-09-12T00:17:50.511091683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 12 00:17:50.515899 kubelet[2370]: E0912 00:17:50.515529 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:50.516184 containerd[1593]: time="2025-09-12T00:17:50.516139290Z" level=info msg="connecting to shim e60db2f4cf8ba95b1748948fd16f9df3186441003e5201a3fb02a85f69feec70" address="unix:///run/containerd/s/3e79cbd518fd1a2a81e57d7d3a3f9a3608ce0498e8971ad20cc84114a08c0ede" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:17:50.516249 containerd[1593]: time="2025-09-12T00:17:50.516214652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:180f4175a7fd877fda925dfe851833aa,Namespace:kube-system,Attempt:0,}" Sep 12 00:17:50.543029 containerd[1593]: time="2025-09-12T00:17:50.542978353Z" level=info msg="connecting to shim 51d81de63394d36d403ae6b8f22650f40dbae1ab2ff312cc7b532158abd07e9b" address="unix:///run/containerd/s/6027cc73f9f331c008c5f4e4a4bdd724914003a4aba97a1ab5e41126cd4cf486" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:17:50.549269 systemd[1]: Started cri-containerd-e60db2f4cf8ba95b1748948fd16f9df3186441003e5201a3fb02a85f69feec70.scope - libcontainer container e60db2f4cf8ba95b1748948fd16f9df3186441003e5201a3fb02a85f69feec70. 
Sep 12 00:17:50.554454 containerd[1593]: time="2025-09-12T00:17:50.554405567Z" level=info msg="connecting to shim a749b3167aeadbd21cbef4c351dbacfccb3e8dda921dbdbdedff50987d8b2fed" address="unix:///run/containerd/s/f0e9e936ce4528bffdf8b44c134259378df1992a16b1199478ea46bb24efe37d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:17:50.570951 kubelet[2370]: W0912 00:17:50.570862 2370 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.88:6443: connect: connection refused Sep 12 00:17:50.570951 kubelet[2370]: E0912 00:17:50.570948 2370 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.88:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:50.580265 systemd[1]: Started cri-containerd-51d81de63394d36d403ae6b8f22650f40dbae1ab2ff312cc7b532158abd07e9b.scope - libcontainer container 51d81de63394d36d403ae6b8f22650f40dbae1ab2ff312cc7b532158abd07e9b. Sep 12 00:17:50.584397 systemd[1]: Started cri-containerd-a749b3167aeadbd21cbef4c351dbacfccb3e8dda921dbdbdedff50987d8b2fed.scope - libcontainer container a749b3167aeadbd21cbef4c351dbacfccb3e8dda921dbdbdedff50987d8b2fed. 
Sep 12 00:17:50.608373 containerd[1593]: time="2025-09-12T00:17:50.608331577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e60db2f4cf8ba95b1748948fd16f9df3186441003e5201a3fb02a85f69feec70\"" Sep 12 00:17:50.609749 kubelet[2370]: E0912 00:17:50.609723 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:50.612721 containerd[1593]: time="2025-09-12T00:17:50.612668421Z" level=info msg="CreateContainer within sandbox \"e60db2f4cf8ba95b1748948fd16f9df3186441003e5201a3fb02a85f69feec70\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 00:17:50.638988 containerd[1593]: time="2025-09-12T00:17:50.638932295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"51d81de63394d36d403ae6b8f22650f40dbae1ab2ff312cc7b532158abd07e9b\"" Sep 12 00:17:50.641225 kubelet[2370]: E0912 00:17:50.640823 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:50.643557 containerd[1593]: time="2025-09-12T00:17:50.643512466Z" level=info msg="CreateContainer within sandbox \"51d81de63394d36d403ae6b8f22650f40dbae1ab2ff312cc7b532158abd07e9b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 00:17:50.643969 containerd[1593]: time="2025-09-12T00:17:50.643927985Z" level=info msg="Container 40f63c9a259313720d11defac0da275002f80e0ae4503859aeb2de72c847685c: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:17:50.683028 kubelet[2370]: I0912 00:17:50.682944 2370 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Sep 12 00:17:50.683495 kubelet[2370]: E0912 00:17:50.683454 2370 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.88:6443/api/v1/nodes\": dial tcp 10.0.0.88:6443: connect: connection refused" node="localhost" Sep 12 00:17:50.850965 kubelet[2370]: E0912 00:17:50.850922 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.88:6443: connect: connection refused" interval="1.6s" Sep 12 00:17:51.085976 containerd[1593]: time="2025-09-12T00:17:51.085904227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:180f4175a7fd877fda925dfe851833aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"a749b3167aeadbd21cbef4c351dbacfccb3e8dda921dbdbdedff50987d8b2fed\"" Sep 12 00:17:51.086812 kubelet[2370]: E0912 00:17:51.086758 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:51.088610 containerd[1593]: time="2025-09-12T00:17:51.088582019Z" level=info msg="CreateContainer within sandbox \"a749b3167aeadbd21cbef4c351dbacfccb3e8dda921dbdbdedff50987d8b2fed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 00:17:51.098945 containerd[1593]: time="2025-09-12T00:17:51.098907577Z" level=info msg="CreateContainer within sandbox \"e60db2f4cf8ba95b1748948fd16f9df3186441003e5201a3fb02a85f69feec70\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40f63c9a259313720d11defac0da275002f80e0ae4503859aeb2de72c847685c\"" Sep 12 00:17:51.099403 containerd[1593]: time="2025-09-12T00:17:51.099375745Z" level=info msg="StartContainer for \"40f63c9a259313720d11defac0da275002f80e0ae4503859aeb2de72c847685c\"" Sep 12 00:17:51.100608 containerd[1593]: 
time="2025-09-12T00:17:51.100568362Z" level=info msg="connecting to shim 40f63c9a259313720d11defac0da275002f80e0ae4503859aeb2de72c847685c" address="unix:///run/containerd/s/3e79cbd518fd1a2a81e57d7d3a3f9a3608ce0498e8971ad20cc84114a08c0ede" protocol=ttrpc version=3 Sep 12 00:17:51.101831 containerd[1593]: time="2025-09-12T00:17:51.101740511Z" level=info msg="Container ebf6305ffe9e5ed877727eb856dabc00656cdc0f9800a91e09f9087a9bfb104c: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:17:51.107012 containerd[1593]: time="2025-09-12T00:17:51.106969329Z" level=info msg="Container a81a1e585363388f6ecfcb4de89155abaf5109067901e9fc928613b3032ceacd: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:17:51.112482 containerd[1593]: time="2025-09-12T00:17:51.112431514Z" level=info msg="CreateContainer within sandbox \"51d81de63394d36d403ae6b8f22650f40dbae1ab2ff312cc7b532158abd07e9b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ebf6305ffe9e5ed877727eb856dabc00656cdc0f9800a91e09f9087a9bfb104c\"" Sep 12 00:17:51.113381 containerd[1593]: time="2025-09-12T00:17:51.113356339Z" level=info msg="StartContainer for \"ebf6305ffe9e5ed877727eb856dabc00656cdc0f9800a91e09f9087a9bfb104c\"" Sep 12 00:17:51.113892 containerd[1593]: time="2025-09-12T00:17:51.113851918Z" level=info msg="CreateContainer within sandbox \"a749b3167aeadbd21cbef4c351dbacfccb3e8dda921dbdbdedff50987d8b2fed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a81a1e585363388f6ecfcb4de89155abaf5109067901e9fc928613b3032ceacd\"" Sep 12 00:17:51.114227 containerd[1593]: time="2025-09-12T00:17:51.114197417Z" level=info msg="StartContainer for \"a81a1e585363388f6ecfcb4de89155abaf5109067901e9fc928613b3032ceacd\"" Sep 12 00:17:51.115689 containerd[1593]: time="2025-09-12T00:17:51.115614134Z" level=info msg="connecting to shim a81a1e585363388f6ecfcb4de89155abaf5109067901e9fc928613b3032ceacd" 
address="unix:///run/containerd/s/f0e9e936ce4528bffdf8b44c134259378df1992a16b1199478ea46bb24efe37d" protocol=ttrpc version=3 Sep 12 00:17:51.116406 containerd[1593]: time="2025-09-12T00:17:51.116291785Z" level=info msg="connecting to shim ebf6305ffe9e5ed877727eb856dabc00656cdc0f9800a91e09f9087a9bfb104c" address="unix:///run/containerd/s/6027cc73f9f331c008c5f4e4a4bdd724914003a4aba97a1ab5e41126cd4cf486" protocol=ttrpc version=3 Sep 12 00:17:51.131342 systemd[1]: Started cri-containerd-40f63c9a259313720d11defac0da275002f80e0ae4503859aeb2de72c847685c.scope - libcontainer container 40f63c9a259313720d11defac0da275002f80e0ae4503859aeb2de72c847685c. Sep 12 00:17:51.141405 systemd[1]: Started cri-containerd-a81a1e585363388f6ecfcb4de89155abaf5109067901e9fc928613b3032ceacd.scope - libcontainer container a81a1e585363388f6ecfcb4de89155abaf5109067901e9fc928613b3032ceacd. Sep 12 00:17:51.146750 systemd[1]: Started cri-containerd-ebf6305ffe9e5ed877727eb856dabc00656cdc0f9800a91e09f9087a9bfb104c.scope - libcontainer container ebf6305ffe9e5ed877727eb856dabc00656cdc0f9800a91e09f9087a9bfb104c. 
Sep 12 00:17:51.203043 containerd[1593]: time="2025-09-12T00:17:51.202964128Z" level=info msg="StartContainer for \"40f63c9a259313720d11defac0da275002f80e0ae4503859aeb2de72c847685c\" returns successfully" Sep 12 00:17:51.210627 containerd[1593]: time="2025-09-12T00:17:51.210587837Z" level=info msg="StartContainer for \"ebf6305ffe9e5ed877727eb856dabc00656cdc0f9800a91e09f9087a9bfb104c\" returns successfully" Sep 12 00:17:51.219361 containerd[1593]: time="2025-09-12T00:17:51.219307914Z" level=info msg="StartContainer for \"a81a1e585363388f6ecfcb4de89155abaf5109067901e9fc928613b3032ceacd\" returns successfully" Sep 12 00:17:51.475577 kubelet[2370]: E0912 00:17:51.475206 2370 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:51.475577 kubelet[2370]: E0912 00:17:51.475358 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:51.479515 kubelet[2370]: E0912 00:17:51.479347 2370 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:51.480716 kubelet[2370]: E0912 00:17:51.480607 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:51.481890 kubelet[2370]: E0912 00:17:51.481850 2370 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:51.482202 kubelet[2370]: E0912 00:17:51.482188 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:51.484975 
kubelet[2370]: I0912 00:17:51.484924 2370 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 00:17:52.308837 kubelet[2370]: I0912 00:17:52.308737 2370 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 00:17:52.349194 kubelet[2370]: I0912 00:17:52.349133 2370 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:52.353497 kubelet[2370]: E0912 00:17:52.353448 2370 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:52.353497 kubelet[2370]: I0912 00:17:52.353475 2370 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:52.354740 kubelet[2370]: E0912 00:17:52.354711 2370 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:52.354740 kubelet[2370]: I0912 00:17:52.354734 2370 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:52.355860 kubelet[2370]: E0912 00:17:52.355832 2370 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:52.436409 kubelet[2370]: I0912 00:17:52.436358 2370 apiserver.go:52] "Watching apiserver" Sep 12 00:17:52.445848 kubelet[2370]: I0912 00:17:52.445821 2370 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 00:17:52.482795 kubelet[2370]: I0912 00:17:52.482745 2370 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:52.482954 kubelet[2370]: I0912 00:17:52.482810 2370 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:52.484724 kubelet[2370]: E0912 00:17:52.484699 2370 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:52.484802 kubelet[2370]: E0912 00:17:52.484746 2370 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:52.484888 kubelet[2370]: E0912 00:17:52.484863 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:52.485009 kubelet[2370]: E0912 00:17:52.484980 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:54.146935 kubelet[2370]: I0912 00:17:54.146883 2370 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:54.151436 kubelet[2370]: I0912 00:17:54.151380 2370 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:54.173721 kubelet[2370]: E0912 00:17:54.173669 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:54.174118 kubelet[2370]: E0912 00:17:54.174074 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:54.475622 systemd[1]: Reload requested from client PID 2654 ('systemctl') (unit session-7.scope)... Sep 12 00:17:54.475639 systemd[1]: Reloading... Sep 12 00:17:54.486122 kubelet[2370]: E0912 00:17:54.486063 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:54.486324 kubelet[2370]: E0912 00:17:54.486305 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:54.558136 zram_generator::config[2700]: No configuration found. Sep 12 00:17:54.789487 systemd[1]: Reloading finished in 313 ms. Sep 12 00:17:54.826770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:54.853756 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 00:17:54.854125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:54.854192 systemd[1]: kubelet.service: Consumed 910ms CPU time, 131.4M memory peak. Sep 12 00:17:54.856250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:55.073163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:55.089624 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 00:17:55.134907 kubelet[2742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 00:17:55.134907 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 12 00:17:55.134907 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 00:17:55.135341 kubelet[2742]: I0912 00:17:55.134957 2742 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 00:17:55.143225 kubelet[2742]: I0912 00:17:55.143187 2742 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 00:17:55.143225 kubelet[2742]: I0912 00:17:55.143217 2742 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 00:17:55.143533 kubelet[2742]: I0912 00:17:55.143509 2742 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 00:17:55.145001 kubelet[2742]: I0912 00:17:55.144977 2742 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 00:17:55.149164 kubelet[2742]: I0912 00:17:55.149129 2742 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 00:17:55.154733 kubelet[2742]: I0912 00:17:55.154705 2742 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 00:17:55.161561 kubelet[2742]: I0912 00:17:55.161542 2742 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 00:17:55.161908 kubelet[2742]: I0912 00:17:55.161865 2742 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 00:17:55.162118 kubelet[2742]: I0912 00:17:55.161903 2742 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 00:17:55.162202 kubelet[2742]: I0912 00:17:55.162136 2742 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 12 00:17:55.162202 kubelet[2742]: I0912 00:17:55.162150 2742 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 00:17:55.162247 kubelet[2742]: I0912 00:17:55.162204 2742 state_mem.go:36] "Initialized new in-memory state store" Sep 12 00:17:55.162415 kubelet[2742]: I0912 00:17:55.162385 2742 kubelet.go:446] "Attempting to sync node with API server" Sep 12 00:17:55.162415 kubelet[2742]: I0912 00:17:55.162414 2742 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 00:17:55.162622 kubelet[2742]: I0912 00:17:55.162439 2742 kubelet.go:352] "Adding apiserver pod source" Sep 12 00:17:55.162622 kubelet[2742]: I0912 00:17:55.162453 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 00:17:55.163170 kubelet[2742]: I0912 00:17:55.163142 2742 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 00:17:55.163492 kubelet[2742]: I0912 00:17:55.163466 2742 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 00:17:55.164205 kubelet[2742]: I0912 00:17:55.163849 2742 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 00:17:55.164205 kubelet[2742]: I0912 00:17:55.163888 2742 server.go:1287] "Started kubelet" Sep 12 00:17:55.166171 kubelet[2742]: I0912 00:17:55.165281 2742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 00:17:55.166171 kubelet[2742]: I0912 00:17:55.165770 2742 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 00:17:55.166171 kubelet[2742]: I0912 00:17:55.165791 2742 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 00:17:55.167524 kubelet[2742]: I0912 00:17:55.167498 2742 server.go:479] "Adding debug handlers to kubelet server" Sep 12 00:17:55.169827 kubelet[2742]: I0912 00:17:55.169795 2742 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 00:17:55.171548 kubelet[2742]: I0912 00:17:55.170371 2742 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 00:17:55.173050 kubelet[2742]: I0912 00:17:55.173022 2742 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 00:17:55.173222 kubelet[2742]: E0912 00:17:55.173189 2742 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 00:17:55.173545 kubelet[2742]: E0912 00:17:55.173517 2742 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 00:17:55.175061 kubelet[2742]: I0912 00:17:55.175032 2742 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 00:17:55.175227 kubelet[2742]: I0912 00:17:55.175197 2742 reconciler.go:26] "Reconciler: start to sync state" Sep 12 00:17:55.177452 kubelet[2742]: I0912 00:17:55.177426 2742 factory.go:221] Registration of the systemd container factory successfully Sep 12 00:17:55.177555 kubelet[2742]: I0912 00:17:55.177529 2742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 00:17:55.180230 kubelet[2742]: I0912 00:17:55.180204 2742 factory.go:221] Registration of the containerd container factory successfully Sep 12 00:17:55.195415 kubelet[2742]: I0912 00:17:55.195361 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 00:17:55.196990 kubelet[2742]: I0912 00:17:55.196961 2742 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Sep 12 00:17:55.196990 kubelet[2742]: I0912 00:17:55.196985 2742 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 00:17:55.197140 kubelet[2742]: I0912 00:17:55.197004 2742 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 00:17:55.197140 kubelet[2742]: I0912 00:17:55.197013 2742 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 00:17:55.197140 kubelet[2742]: E0912 00:17:55.197061 2742 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 00:17:55.221044 kubelet[2742]: I0912 00:17:55.221006 2742 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 00:17:55.221044 kubelet[2742]: I0912 00:17:55.221023 2742 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 00:17:55.221044 kubelet[2742]: I0912 00:17:55.221043 2742 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 00:17:55.221285 kubelet[2742]: I0912 00:17:55.221246 2742 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 00:17:55.221285 kubelet[2742]: I0912 00:17:55.221257 2742 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 00:17:55.221285 kubelet[2742]: I0912 00:17:55.221274 2742 policy_none.go:49] "None policy: Start"
Sep 12 00:17:55.221285 kubelet[2742]: I0912 00:17:55.221285 2742 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 00:17:55.221383 kubelet[2742]: I0912 00:17:55.221294 2742 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 00:17:55.221409 kubelet[2742]: I0912 00:17:55.221384 2742 state_mem.go:75] "Updated machine memory state"
Sep 12 00:17:55.225931 kubelet[2742]: I0912 00:17:55.225885 2742 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 00:17:55.226126 kubelet[2742]: I0912 00:17:55.226070 2742 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 00:17:55.226126 kubelet[2742]: I0912 00:17:55.226083 2742 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 00:17:55.226338 kubelet[2742]: I0912 00:17:55.226318 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 00:17:55.228046 kubelet[2742]: E0912 00:17:55.228028 2742 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 00:17:55.298292 kubelet[2742]: I0912 00:17:55.298243 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 00:17:55.298457 kubelet[2742]: I0912 00:17:55.298241 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 00:17:55.298526 kubelet[2742]: I0912 00:17:55.298245 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:17:55.331902 kubelet[2742]: I0912 00:17:55.331775 2742 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 00:17:55.476720 kubelet[2742]: I0912 00:17:55.476664 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/180f4175a7fd877fda925dfe851833aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"180f4175a7fd877fda925dfe851833aa\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 00:17:55.476720 kubelet[2742]: I0912 00:17:55.476722 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:17:55.476942 kubelet[2742]: I0912 00:17:55.476751 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:17:55.476942 kubelet[2742]: I0912 00:17:55.476774 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:17:55.476942 kubelet[2742]: I0912 00:17:55.476811 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/180f4175a7fd877fda925dfe851833aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"180f4175a7fd877fda925dfe851833aa\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 00:17:55.476942 kubelet[2742]: I0912 00:17:55.476878 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/180f4175a7fd877fda925dfe851833aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"180f4175a7fd877fda925dfe851833aa\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 00:17:55.476942 kubelet[2742]: I0912 00:17:55.476918 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 00:17:55.477126 kubelet[2742]: I0912 00:17:55.476946 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:17:55.477126 kubelet[2742]: I0912 00:17:55.476967 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:17:55.596469 kubelet[2742]: E0912 00:17:55.596405 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 12 00:17:55.596810 kubelet[2742]: E0912 00:17:55.596789 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:55.598679 kubelet[2742]: E0912 00:17:55.598624 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:17:55.599937 kubelet[2742]: E0912 00:17:55.599807 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:55.601715 kubelet[2742]: I0912 00:17:55.601695 2742 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 12 00:17:55.601824 kubelet[2742]: I0912 00:17:55.601758 2742 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 00:17:55.872382 kubelet[2742]: E0912 00:17:55.872263 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:56.163468 kubelet[2742]: I0912 00:17:56.163254 2742 apiserver.go:52] "Watching apiserver"
Sep 12 00:17:56.176320 kubelet[2742]: I0912 00:17:56.176212 2742 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 00:17:56.210193 kubelet[2742]: E0912 00:17:56.210135 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:56.210814 kubelet[2742]: I0912 00:17:56.210787 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:17:56.214131 kubelet[2742]: I0912 00:17:56.211975 2742 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 00:17:56.219529 kubelet[2742]: E0912 00:17:56.219115 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 12 00:17:56.219740 kubelet[2742]: E0912 00:17:56.219724 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:56.220798 kubelet[2742]: E0912 00:17:56.220774 2742 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 12 00:17:56.221043 kubelet[2742]: E0912 00:17:56.220920 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:56.595441 kubelet[2742]: I0912 00:17:56.595340 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.5953005989999998 podStartE2EDuration="2.595300599s" podCreationTimestamp="2025-09-12 00:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:17:56.591576805 +0000 UTC m=+1.497139727" watchObservedRunningTime="2025-09-12 00:17:56.595300599 +0000 UTC m=+1.500863491"
Sep 12 00:17:56.601469 kubelet[2742]: I0912 00:17:56.601281 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.601258334 podStartE2EDuration="1.601258334s" podCreationTimestamp="2025-09-12 00:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:17:56.600875316 +0000 UTC m=+1.506438218" watchObservedRunningTime="2025-09-12 00:17:56.601258334 +0000 UTC m=+1.506821226"
Sep 12 00:17:56.609490 kubelet[2742]: I0912 00:17:56.609397 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.609377524 podStartE2EDuration="2.609377524s" podCreationTimestamp="2025-09-12 00:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:17:56.609197746 +0000 UTC m=+1.514760639" watchObservedRunningTime="2025-09-12 00:17:56.609377524 +0000 UTC m=+1.514940416"
Sep 12 00:17:57.211873 kubelet[2742]: E0912 00:17:57.211839 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:57.212364 kubelet[2742]: E0912 00:17:57.211911 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:57.212364 kubelet[2742]: E0912 00:17:57.211967 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:58.212536 kubelet[2742]: E0912 00:17:58.212491 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:00.221117 kubelet[2742]: I0912 00:18:00.221069 2742 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 00:18:00.221650 containerd[1593]: time="2025-09-12T00:18:00.221536818Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 00:18:00.221994 kubelet[2742]: I0912 00:18:00.221783 2742 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 00:18:00.695388 kubelet[2742]: E0912 00:18:00.695347 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:01.014112 kubelet[2742]: E0912 00:18:01.013918 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:01.179753 systemd[1]: Created slice kubepods-besteffort-podd5e3566b_c418_4c72_95cb_9f716fe39fe3.slice - libcontainer container kubepods-besteffort-podd5e3566b_c418_4c72_95cb_9f716fe39fe3.slice.
Sep 12 00:18:01.217346 kubelet[2742]: E0912 00:18:01.217312 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:01.270423 kubelet[2742]: I0912 00:18:01.270258 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5e3566b-c418-4c72-95cb-9f716fe39fe3-kube-proxy\") pod \"kube-proxy-t6r57\" (UID: \"d5e3566b-c418-4c72-95cb-9f716fe39fe3\") " pod="kube-system/kube-proxy-t6r57"
Sep 12 00:18:01.270423 kubelet[2742]: I0912 00:18:01.270339 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5e3566b-c418-4c72-95cb-9f716fe39fe3-xtables-lock\") pod \"kube-proxy-t6r57\" (UID: \"d5e3566b-c418-4c72-95cb-9f716fe39fe3\") " pod="kube-system/kube-proxy-t6r57"
Sep 12 00:18:01.270423 kubelet[2742]: I0912 00:18:01.270417 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5e3566b-c418-4c72-95cb-9f716fe39fe3-lib-modules\") pod \"kube-proxy-t6r57\" (UID: \"d5e3566b-c418-4c72-95cb-9f716fe39fe3\") " pod="kube-system/kube-proxy-t6r57"
Sep 12 00:18:01.271000 kubelet[2742]: I0912 00:18:01.270437 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-999qb\" (UniqueName: \"kubernetes.io/projected/d5e3566b-c418-4c72-95cb-9f716fe39fe3-kube-api-access-999qb\") pod \"kube-proxy-t6r57\" (UID: \"d5e3566b-c418-4c72-95cb-9f716fe39fe3\") " pod="kube-system/kube-proxy-t6r57"
Sep 12 00:18:01.354683 systemd[1]: Created slice kubepods-besteffort-poda86e724d_83cd_48ce_a90b_90f1c1a2a74d.slice - libcontainer container kubepods-besteffort-poda86e724d_83cd_48ce_a90b_90f1c1a2a74d.slice.
Sep 12 00:18:01.471435 kubelet[2742]: I0912 00:18:01.471371 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wjxp\" (UniqueName: \"kubernetes.io/projected/a86e724d-83cd-48ce-a90b-90f1c1a2a74d-kube-api-access-6wjxp\") pod \"tigera-operator-755d956888-sqff6\" (UID: \"a86e724d-83cd-48ce-a90b-90f1c1a2a74d\") " pod="tigera-operator/tigera-operator-755d956888-sqff6"
Sep 12 00:18:01.471435 kubelet[2742]: I0912 00:18:01.471414 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a86e724d-83cd-48ce-a90b-90f1c1a2a74d-var-lib-calico\") pod \"tigera-operator-755d956888-sqff6\" (UID: \"a86e724d-83cd-48ce-a90b-90f1c1a2a74d\") " pod="tigera-operator/tigera-operator-755d956888-sqff6"
Sep 12 00:18:01.494733 kubelet[2742]: E0912 00:18:01.494670 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:01.495428 containerd[1593]: time="2025-09-12T00:18:01.495383270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t6r57,Uid:d5e3566b-c418-4c72-95cb-9f716fe39fe3,Namespace:kube-system,Attempt:0,}"
Sep 12 00:18:01.517819 containerd[1593]: time="2025-09-12T00:18:01.517766766Z" level=info msg="connecting to shim caccbd44780080088d9fde77500b57fb1d3683279112eaeb566b296da02cdc59" address="unix:///run/containerd/s/3e669e3d7cb50463a7a001227bed24d1ad0ad186c8a8dedfbe19eb9a5d71ac52" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:18:01.549281 systemd[1]: Started cri-containerd-caccbd44780080088d9fde77500b57fb1d3683279112eaeb566b296da02cdc59.scope - libcontainer container caccbd44780080088d9fde77500b57fb1d3683279112eaeb566b296da02cdc59.
Sep 12 00:18:01.590721 containerd[1593]: time="2025-09-12T00:18:01.590677140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t6r57,Uid:d5e3566b-c418-4c72-95cb-9f716fe39fe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"caccbd44780080088d9fde77500b57fb1d3683279112eaeb566b296da02cdc59\""
Sep 12 00:18:01.591734 kubelet[2742]: E0912 00:18:01.591706 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:01.594789 containerd[1593]: time="2025-09-12T00:18:01.594211102Z" level=info msg="CreateContainer within sandbox \"caccbd44780080088d9fde77500b57fb1d3683279112eaeb566b296da02cdc59\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 00:18:01.619905 containerd[1593]: time="2025-09-12T00:18:01.619851750Z" level=info msg="Container f6c913c2c414f68eb78628acb00f79214b195fce38bb5fac7b8b705c40f5319a: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:18:01.659708 containerd[1593]: time="2025-09-12T00:18:01.659652914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-sqff6,Uid:a86e724d-83cd-48ce-a90b-90f1c1a2a74d,Namespace:tigera-operator,Attempt:0,}"
Sep 12 00:18:02.167969 containerd[1593]: time="2025-09-12T00:18:02.167901886Z" level=info msg="CreateContainer within sandbox \"caccbd44780080088d9fde77500b57fb1d3683279112eaeb566b296da02cdc59\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f6c913c2c414f68eb78628acb00f79214b195fce38bb5fac7b8b705c40f5319a\""
Sep 12 00:18:02.168482 containerd[1593]: time="2025-09-12T00:18:02.168453432Z" level=info msg="StartContainer for \"f6c913c2c414f68eb78628acb00f79214b195fce38bb5fac7b8b705c40f5319a\""
Sep 12 00:18:02.169921 containerd[1593]: time="2025-09-12T00:18:02.169891203Z" level=info msg="connecting to shim f6c913c2c414f68eb78628acb00f79214b195fce38bb5fac7b8b705c40f5319a" address="unix:///run/containerd/s/3e669e3d7cb50463a7a001227bed24d1ad0ad186c8a8dedfbe19eb9a5d71ac52" protocol=ttrpc version=3
Sep 12 00:18:02.191290 systemd[1]: Started cri-containerd-f6c913c2c414f68eb78628acb00f79214b195fce38bb5fac7b8b705c40f5319a.scope - libcontainer container f6c913c2c414f68eb78628acb00f79214b195fce38bb5fac7b8b705c40f5319a.
Sep 12 00:18:02.193000 containerd[1593]: time="2025-09-12T00:18:02.192835235Z" level=info msg="connecting to shim d9cf2a13bc538c05da163e3ea49ec219d8178ee352f645587a6c13f6015a515e" address="unix:///run/containerd/s/c3f9ee59956afc256e9b9d17b80331cad84e86fc007980394cd99257d812b157" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:18:02.231279 systemd[1]: Started cri-containerd-d9cf2a13bc538c05da163e3ea49ec219d8178ee352f645587a6c13f6015a515e.scope - libcontainer container d9cf2a13bc538c05da163e3ea49ec219d8178ee352f645587a6c13f6015a515e.
Sep 12 00:18:02.260376 containerd[1593]: time="2025-09-12T00:18:02.260288189Z" level=info msg="StartContainer for \"f6c913c2c414f68eb78628acb00f79214b195fce38bb5fac7b8b705c40f5319a\" returns successfully"
Sep 12 00:18:02.290252 containerd[1593]: time="2025-09-12T00:18:02.290167193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-sqff6,Uid:a86e724d-83cd-48ce-a90b-90f1c1a2a74d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d9cf2a13bc538c05da163e3ea49ec219d8178ee352f645587a6c13f6015a515e\""
Sep 12 00:18:02.293125 containerd[1593]: time="2025-09-12T00:18:02.292388585Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 00:18:02.383389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87107113.mount: Deactivated successfully.
Sep 12 00:18:03.226480 kubelet[2742]: E0912 00:18:03.226421 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:03.235423 kubelet[2742]: I0912 00:18:03.235338 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t6r57" podStartSLOduration=2.23531558 podStartE2EDuration="2.23531558s" podCreationTimestamp="2025-09-12 00:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:18:03.234944369 +0000 UTC m=+8.140507281" watchObservedRunningTime="2025-09-12 00:18:03.23531558 +0000 UTC m=+8.140878462"
Sep 12 00:18:03.933200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564156328.mount: Deactivated successfully.
Sep 12 00:18:04.230274 kubelet[2742]: E0912 00:18:04.229632 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:04.290330 containerd[1593]: time="2025-09-12T00:18:04.290253775Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:18:04.290965 containerd[1593]: time="2025-09-12T00:18:04.290915099Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 00:18:04.292192 containerd[1593]: time="2025-09-12T00:18:04.292154685Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:18:04.295960 containerd[1593]: time="2025-09-12T00:18:04.294624500Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:18:04.295960 containerd[1593]: time="2025-09-12T00:18:04.295710975Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.003284127s"
Sep 12 00:18:04.295960 containerd[1593]: time="2025-09-12T00:18:04.295749118Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 00:18:04.298907 containerd[1593]: time="2025-09-12T00:18:04.298865769Z" level=info msg="CreateContainer within sandbox \"d9cf2a13bc538c05da163e3ea49ec219d8178ee352f645587a6c13f6015a515e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 00:18:04.308168 containerd[1593]: time="2025-09-12T00:18:04.308118283Z" level=info msg="Container 264fb1e36d882448b09a6defa333bdda84a8d1ece57b8b39883602787d13b53d: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:18:04.315632 containerd[1593]: time="2025-09-12T00:18:04.315577555Z" level=info msg="CreateContainer within sandbox \"d9cf2a13bc538c05da163e3ea49ec219d8178ee352f645587a6c13f6015a515e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"264fb1e36d882448b09a6defa333bdda84a8d1ece57b8b39883602787d13b53d\""
Sep 12 00:18:04.317719 containerd[1593]: time="2025-09-12T00:18:04.316315403Z" level=info msg="StartContainer for \"264fb1e36d882448b09a6defa333bdda84a8d1ece57b8b39883602787d13b53d\""
Sep 12 00:18:04.317719 containerd[1593]: time="2025-09-12T00:18:04.317122724Z" level=info msg="connecting to shim 264fb1e36d882448b09a6defa333bdda84a8d1ece57b8b39883602787d13b53d" address="unix:///run/containerd/s/c3f9ee59956afc256e9b9d17b80331cad84e86fc007980394cd99257d812b157" protocol=ttrpc version=3
Sep 12 00:18:04.376246 systemd[1]: Started cri-containerd-264fb1e36d882448b09a6defa333bdda84a8d1ece57b8b39883602787d13b53d.scope - libcontainer container 264fb1e36d882448b09a6defa333bdda84a8d1ece57b8b39883602787d13b53d.
Sep 12 00:18:04.409825 containerd[1593]: time="2025-09-12T00:18:04.409760702Z" level=info msg="StartContainer for \"264fb1e36d882448b09a6defa333bdda84a8d1ece57b8b39883602787d13b53d\" returns successfully"
Sep 12 00:18:06.637210 kubelet[2742]: E0912 00:18:06.637156 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:06.668761 kubelet[2742]: I0912 00:18:06.668645 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-sqff6" podStartSLOduration=3.663541753 podStartE2EDuration="5.668618837s" podCreationTimestamp="2025-09-12 00:18:01 +0000 UTC" firstStartedPulling="2025-09-12 00:18:02.291671642 +0000 UTC m=+7.197234524" lastFinishedPulling="2025-09-12 00:18:04.296748716 +0000 UTC m=+9.202311608" observedRunningTime="2025-09-12 00:18:05.241908499 +0000 UTC m=+10.147471401" watchObservedRunningTime="2025-09-12 00:18:06.668618837 +0000 UTC m=+11.574181719"
Sep 12 00:18:07.236193 kubelet[2742]: E0912 00:18:07.236141 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:09.344309 update_engine[1581]: I20250912 00:18:09.344190 1581 update_attempter.cc:509] Updating boot flags...
Sep 12 00:18:10.705421 kubelet[2742]: E0912 00:18:10.705369 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:10.861350 sudo[1806]: pam_unix(sudo:session): session closed for user root
Sep 12 00:18:10.864679 sshd[1805]: Connection closed by 10.0.0.1 port 46260
Sep 12 00:18:10.868266 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Sep 12 00:18:10.882976 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:46260.service: Deactivated successfully.
Sep 12 00:18:10.885959 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 00:18:10.887022 systemd[1]: session-7.scope: Consumed 5.712s CPU time, 227.5M memory peak.
Sep 12 00:18:10.890272 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit.
Sep 12 00:18:10.892242 systemd-logind[1577]: Removed session 7.
Sep 12 00:18:11.244895 kubelet[2742]: E0912 00:18:11.244843 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:13.528945 systemd[1]: Created slice kubepods-besteffort-podb354f1ed_6feb_47b1_8592_fd95f15932f2.slice - libcontainer container kubepods-besteffort-podb354f1ed_6feb_47b1_8592_fd95f15932f2.slice.
Sep 12 00:18:13.545887 kubelet[2742]: I0912 00:18:13.545769 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b354f1ed-6feb-47b1-8592-fd95f15932f2-typha-certs\") pod \"calico-typha-74d596f9d9-2d7zt\" (UID: \"b354f1ed-6feb-47b1-8592-fd95f15932f2\") " pod="calico-system/calico-typha-74d596f9d9-2d7zt"
Sep 12 00:18:13.545887 kubelet[2742]: I0912 00:18:13.545823 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg89f\" (UniqueName: \"kubernetes.io/projected/b354f1ed-6feb-47b1-8592-fd95f15932f2-kube-api-access-zg89f\") pod \"calico-typha-74d596f9d9-2d7zt\" (UID: \"b354f1ed-6feb-47b1-8592-fd95f15932f2\") " pod="calico-system/calico-typha-74d596f9d9-2d7zt"
Sep 12 00:18:13.545887 kubelet[2742]: I0912 00:18:13.545847 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b354f1ed-6feb-47b1-8592-fd95f15932f2-tigera-ca-bundle\") pod \"calico-typha-74d596f9d9-2d7zt\" (UID: \"b354f1ed-6feb-47b1-8592-fd95f15932f2\") " pod="calico-system/calico-typha-74d596f9d9-2d7zt"
Sep 12 00:18:13.821423 systemd[1]: Created slice kubepods-besteffort-poddba5a3e7_0c27_44db_9a59_3f6b884a54fe.slice - libcontainer container kubepods-besteffort-poddba5a3e7_0c27_44db_9a59_3f6b884a54fe.slice.
Sep 12 00:18:13.833429 kubelet[2742]: E0912 00:18:13.833382 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:13.834145 containerd[1593]: time="2025-09-12T00:18:13.834053374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74d596f9d9-2d7zt,Uid:b354f1ed-6feb-47b1-8592-fd95f15932f2,Namespace:calico-system,Attempt:0,}"
Sep 12 00:18:13.949060 kubelet[2742]: I0912 00:18:13.948987 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-lib-modules\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949060 kubelet[2742]: I0912 00:18:13.949050 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-var-run-calico\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949865 kubelet[2742]: I0912 00:18:13.949078 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-cni-bin-dir\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949865 kubelet[2742]: I0912 00:18:13.949228 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-cni-log-dir\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949865 kubelet[2742]: I0912 00:18:13.949298 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-cni-net-dir\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949865 kubelet[2742]: I0912 00:18:13.949348 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-xtables-lock\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949865 kubelet[2742]: I0912 00:18:13.949372 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqhhn\" (UniqueName: \"kubernetes.io/projected/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-kube-api-access-tqhhn\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949993 kubelet[2742]: I0912 00:18:13.949428 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-node-certs\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949993 kubelet[2742]: I0912 00:18:13.949455 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-policysync\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949993 kubelet[2742]: I0912 00:18:13.949481 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-var-lib-calico\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949993 kubelet[2742]: I0912 00:18:13.949519 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-flexvol-driver-host\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.949993 kubelet[2742]: I0912 00:18:13.949546 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dba5a3e7-0c27-44db-9a59-3f6b884a54fe-tigera-ca-bundle\") pod \"calico-node-7w2gt\" (UID: \"dba5a3e7-0c27-44db-9a59-3f6b884a54fe\") " pod="calico-system/calico-node-7w2gt"
Sep 12 00:18:13.988473 containerd[1593]: time="2025-09-12T00:18:13.988404216Z" level=info msg="connecting to shim 5bb058121cbdf47c17a49bc1da503a1f2a8ae3f35de23363c2347f96c97b53ca" address="unix:///run/containerd/s/356726164993a9654320d4ce9ea91e1eb7b7cf1a7eb67298c647b450c61dd1e5" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:18:14.014279 systemd[1]: Started cri-containerd-5bb058121cbdf47c17a49bc1da503a1f2a8ae3f35de23363c2347f96c97b53ca.scope - libcontainer container 5bb058121cbdf47c17a49bc1da503a1f2a8ae3f35de23363c2347f96c97b53ca.
Sep 12 00:18:14.053473 kubelet[2742]: E0912 00:18:14.053356 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:18:14.053473 kubelet[2742]: W0912 00:18:14.053415 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:18:14.053773 kubelet[2742]: E0912 00:18:14.053711 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:18:14.062368 kubelet[2742]: E0912 00:18:14.062282 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:18:14.062368 kubelet[2742]: W0912 00:18:14.062304 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:18:14.062368 kubelet[2742]: E0912 00:18:14.062329 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:18:14.064722 kubelet[2742]: E0912 00:18:14.064673 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:18:14.064722 kubelet[2742]: W0912 00:18:14.064687 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:18:14.064722 kubelet[2742]: E0912 00:18:14.064699 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:18:14.083206 kubelet[2742]: E0912 00:18:14.081581 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2x6v" podUID="5c37c94c-e43d-4388-9658-2398a6df2ea4"
Sep 12 00:18:14.126356 containerd[1593]: time="2025-09-12T00:18:14.126309175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7w2gt,Uid:dba5a3e7-0c27-44db-9a59-3f6b884a54fe,Namespace:calico-system,Attempt:0,}"
Sep 12 00:18:14.150045 kubelet[2742]: E0912 00:18:14.149990 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:18:14.150045 kubelet[2742]: W0912 00:18:14.150019 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:18:14.150045 kubelet[2742]: E0912 00:18:14.150043 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:18:14.150345 kubelet[2742]: E0912 00:18:14.150276 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:18:14.150345 kubelet[2742]: W0912 00:18:14.150285 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:18:14.150345 kubelet[2742]: E0912 00:18:14.150293 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:18:14.150477 kubelet[2742]: E0912 00:18:14.150459 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:18:14.150477 kubelet[2742]: W0912 00:18:14.150470 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:18:14.150477 kubelet[2742]: E0912 00:18:14.150478 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 00:18:14.150723 kubelet[2742]: E0912 00:18:14.150695 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 00:18:14.150723 kubelet[2742]: W0912 00:18:14.150708 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 00:18:14.150723 kubelet[2742]: E0912 00:18:14.150716 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 12 00:18:14.151016 kubelet[2742]: E0912 00:18:14.150960 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.151016 kubelet[2742]: W0912 00:18:14.150996 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.151016 kubelet[2742]: E0912 00:18:14.151029 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.151331 kubelet[2742]: E0912 00:18:14.151308 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.151331 kubelet[2742]: W0912 00:18:14.151320 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.151331 kubelet[2742]: E0912 00:18:14.151329 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.151547 kubelet[2742]: E0912 00:18:14.151523 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.151547 kubelet[2742]: W0912 00:18:14.151537 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.151547 kubelet[2742]: E0912 00:18:14.151547 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.151777 kubelet[2742]: E0912 00:18:14.151754 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.151777 kubelet[2742]: W0912 00:18:14.151767 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.151777 kubelet[2742]: E0912 00:18:14.151776 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.152033 kubelet[2742]: E0912 00:18:14.151997 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.152033 kubelet[2742]: W0912 00:18:14.152021 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.152033 kubelet[2742]: E0912 00:18:14.152031 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.152270 kubelet[2742]: E0912 00:18:14.152249 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.152270 kubelet[2742]: W0912 00:18:14.152262 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.152349 kubelet[2742]: E0912 00:18:14.152273 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.152446 kubelet[2742]: E0912 00:18:14.152428 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.152446 kubelet[2742]: W0912 00:18:14.152438 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.152446 kubelet[2742]: E0912 00:18:14.152445 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.152610 kubelet[2742]: E0912 00:18:14.152592 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.152610 kubelet[2742]: W0912 00:18:14.152601 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.152610 kubelet[2742]: E0912 00:18:14.152608 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.152801 kubelet[2742]: E0912 00:18:14.152782 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.152801 kubelet[2742]: W0912 00:18:14.152791 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.152801 kubelet[2742]: E0912 00:18:14.152799 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.152975 kubelet[2742]: E0912 00:18:14.152956 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.152975 kubelet[2742]: W0912 00:18:14.152969 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.153047 kubelet[2742]: E0912 00:18:14.152979 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.153177 kubelet[2742]: E0912 00:18:14.153160 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.153177 kubelet[2742]: W0912 00:18:14.153172 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.153270 kubelet[2742]: E0912 00:18:14.153180 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.153341 kubelet[2742]: E0912 00:18:14.153324 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.153341 kubelet[2742]: W0912 00:18:14.153333 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.153341 kubelet[2742]: E0912 00:18:14.153341 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.153583 kubelet[2742]: E0912 00:18:14.153564 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.153583 kubelet[2742]: W0912 00:18:14.153574 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.153583 kubelet[2742]: E0912 00:18:14.153583 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.153751 kubelet[2742]: E0912 00:18:14.153733 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.153751 kubelet[2742]: W0912 00:18:14.153743 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.153751 kubelet[2742]: E0912 00:18:14.153750 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.153914 kubelet[2742]: E0912 00:18:14.153899 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.153914 kubelet[2742]: W0912 00:18:14.153907 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.153914 kubelet[2742]: E0912 00:18:14.153915 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.154082 kubelet[2742]: E0912 00:18:14.154063 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.154082 kubelet[2742]: W0912 00:18:14.154075 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.154165 kubelet[2742]: E0912 00:18:14.154085 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.154352 kubelet[2742]: E0912 00:18:14.154334 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.154352 kubelet[2742]: W0912 00:18:14.154344 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.154352 kubelet[2742]: E0912 00:18:14.154352 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.154450 kubelet[2742]: I0912 00:18:14.154374 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bzbl\" (UniqueName: \"kubernetes.io/projected/5c37c94c-e43d-4388-9658-2398a6df2ea4-kube-api-access-4bzbl\") pod \"csi-node-driver-d2x6v\" (UID: \"5c37c94c-e43d-4388-9658-2398a6df2ea4\") " pod="calico-system/csi-node-driver-d2x6v" Sep 12 00:18:14.154607 kubelet[2742]: E0912 00:18:14.154579 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.154607 kubelet[2742]: W0912 00:18:14.154598 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.154689 kubelet[2742]: E0912 00:18:14.154619 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.154689 kubelet[2742]: I0912 00:18:14.154657 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5c37c94c-e43d-4388-9658-2398a6df2ea4-registration-dir\") pod \"csi-node-driver-d2x6v\" (UID: \"5c37c94c-e43d-4388-9658-2398a6df2ea4\") " pod="calico-system/csi-node-driver-d2x6v" Sep 12 00:18:14.154879 kubelet[2742]: E0912 00:18:14.154856 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.154879 kubelet[2742]: W0912 00:18:14.154868 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.154951 kubelet[2742]: E0912 00:18:14.154883 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.154951 kubelet[2742]: I0912 00:18:14.154897 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5c37c94c-e43d-4388-9658-2398a6df2ea4-varrun\") pod \"csi-node-driver-d2x6v\" (UID: \"5c37c94c-e43d-4388-9658-2398a6df2ea4\") " pod="calico-system/csi-node-driver-d2x6v" Sep 12 00:18:14.155168 kubelet[2742]: E0912 00:18:14.155136 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.155168 kubelet[2742]: W0912 00:18:14.155157 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.155398 kubelet[2742]: E0912 00:18:14.155180 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.155398 kubelet[2742]: I0912 00:18:14.155206 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c37c94c-e43d-4388-9658-2398a6df2ea4-kubelet-dir\") pod \"csi-node-driver-d2x6v\" (UID: \"5c37c94c-e43d-4388-9658-2398a6df2ea4\") " pod="calico-system/csi-node-driver-d2x6v" Sep 12 00:18:14.155459 kubelet[2742]: E0912 00:18:14.155418 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.155459 kubelet[2742]: W0912 00:18:14.155432 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.155459 kubelet[2742]: E0912 00:18:14.155447 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.155533 kubelet[2742]: I0912 00:18:14.155465 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5c37c94c-e43d-4388-9658-2398a6df2ea4-socket-dir\") pod \"csi-node-driver-d2x6v\" (UID: \"5c37c94c-e43d-4388-9658-2398a6df2ea4\") " pod="calico-system/csi-node-driver-d2x6v" Sep 12 00:18:14.155696 kubelet[2742]: E0912 00:18:14.155676 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.155696 kubelet[2742]: W0912 00:18:14.155692 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.155758 kubelet[2742]: E0912 00:18:14.155734 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.155888 kubelet[2742]: E0912 00:18:14.155870 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.155888 kubelet[2742]: W0912 00:18:14.155884 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.155951 kubelet[2742]: E0912 00:18:14.155908 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.156094 kubelet[2742]: E0912 00:18:14.156071 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.156094 kubelet[2742]: W0912 00:18:14.156085 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.156230 kubelet[2742]: E0912 00:18:14.156143 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.156327 kubelet[2742]: E0912 00:18:14.156310 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.156327 kubelet[2742]: W0912 00:18:14.156322 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.156418 kubelet[2742]: E0912 00:18:14.156356 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.156528 kubelet[2742]: E0912 00:18:14.156507 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.156528 kubelet[2742]: W0912 00:18:14.156520 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.156610 kubelet[2742]: E0912 00:18:14.156547 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.156708 kubelet[2742]: E0912 00:18:14.156691 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.156708 kubelet[2742]: W0912 00:18:14.156704 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.156763 kubelet[2742]: E0912 00:18:14.156715 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.156917 kubelet[2742]: E0912 00:18:14.156899 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.156917 kubelet[2742]: W0912 00:18:14.156913 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.156975 kubelet[2742]: E0912 00:18:14.156924 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.157148 kubelet[2742]: E0912 00:18:14.157091 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.157148 kubelet[2742]: W0912 00:18:14.157129 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.157148 kubelet[2742]: E0912 00:18:14.157139 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.157345 kubelet[2742]: E0912 00:18:14.157312 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.157345 kubelet[2742]: W0912 00:18:14.157333 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.157345 kubelet[2742]: E0912 00:18:14.157344 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.157534 kubelet[2742]: E0912 00:18:14.157514 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.157534 kubelet[2742]: W0912 00:18:14.157527 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.157596 kubelet[2742]: E0912 00:18:14.157537 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.208496 containerd[1593]: time="2025-09-12T00:18:14.208351013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74d596f9d9-2d7zt,Uid:b354f1ed-6feb-47b1-8592-fd95f15932f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"5bb058121cbdf47c17a49bc1da503a1f2a8ae3f35de23363c2347f96c97b53ca\"" Sep 12 00:18:14.209929 kubelet[2742]: E0912 00:18:14.209895 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:14.212759 containerd[1593]: time="2025-09-12T00:18:14.212722730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 00:18:14.242144 containerd[1593]: time="2025-09-12T00:18:14.241335062Z" level=info msg="connecting to shim 7f5de40c0da02039a87bb1305078ad8efa4f96d94924c0e5507147903f653c7d" address="unix:///run/containerd/s/7ac9a141ff47592a60ac452d0052c0fae911cd18d704f95262c615264f6eeaef" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:14.256295 kubelet[2742]: E0912 00:18:14.256255 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.256678 kubelet[2742]: W0912 00:18:14.256567 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.256946 kubelet[2742]: E0912 00:18:14.256896 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.259667 kubelet[2742]: E0912 00:18:14.259644 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.259667 kubelet[2742]: W0912 00:18:14.259664 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.260043 kubelet[2742]: E0912 00:18:14.260019 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.261504 kubelet[2742]: E0912 00:18:14.261458 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.261504 kubelet[2742]: W0912 00:18:14.261484 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.261504 kubelet[2742]: E0912 00:18:14.261503 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.261837 kubelet[2742]: E0912 00:18:14.261798 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.261837 kubelet[2742]: W0912 00:18:14.261815 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.261947 kubelet[2742]: E0912 00:18:14.261874 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.262266 kubelet[2742]: E0912 00:18:14.262244 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.262266 kubelet[2742]: W0912 00:18:14.262261 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.262391 kubelet[2742]: E0912 00:18:14.262329 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.263572 kubelet[2742]: E0912 00:18:14.263527 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.263572 kubelet[2742]: W0912 00:18:14.263556 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.263994 kubelet[2742]: E0912 00:18:14.263940 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.264919 kubelet[2742]: E0912 00:18:14.264892 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.264919 kubelet[2742]: W0912 00:18:14.264916 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.265838 kubelet[2742]: E0912 00:18:14.265814 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.267145 kubelet[2742]: E0912 00:18:14.267120 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.267145 kubelet[2742]: W0912 00:18:14.267139 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.267340 kubelet[2742]: E0912 00:18:14.267170 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.268083 kubelet[2742]: E0912 00:18:14.268061 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.268083 kubelet[2742]: W0912 00:18:14.268081 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.270129 kubelet[2742]: E0912 00:18:14.269167 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.270647 kubelet[2742]: E0912 00:18:14.270591 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.270647 kubelet[2742]: W0912 00:18:14.270640 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.270798 kubelet[2742]: E0912 00:18:14.270770 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.271197 kubelet[2742]: E0912 00:18:14.271170 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.271197 kubelet[2742]: W0912 00:18:14.271187 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.271695 kubelet[2742]: E0912 00:18:14.271657 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.273278 kubelet[2742]: E0912 00:18:14.273250 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.273278 kubelet[2742]: W0912 00:18:14.273267 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.273377 kubelet[2742]: E0912 00:18:14.273333 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.275076 kubelet[2742]: E0912 00:18:14.275026 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.275076 kubelet[2742]: W0912 00:18:14.275047 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.276415 kubelet[2742]: E0912 00:18:14.275176 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.276415 kubelet[2742]: E0912 00:18:14.276383 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.276415 kubelet[2742]: W0912 00:18:14.276414 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.276534 kubelet[2742]: E0912 00:18:14.276490 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.278235 kubelet[2742]: E0912 00:18:14.278212 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.278235 kubelet[2742]: W0912 00:18:14.278226 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.278457 kubelet[2742]: E0912 00:18:14.278424 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.279934 kubelet[2742]: E0912 00:18:14.279900 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.279934 kubelet[2742]: W0912 00:18:14.279917 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.280219 kubelet[2742]: E0912 00:18:14.280193 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.280651 kubelet[2742]: E0912 00:18:14.280614 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.280702 kubelet[2742]: W0912 00:18:14.280648 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.280792 kubelet[2742]: E0912 00:18:14.280754 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.281403 kubelet[2742]: E0912 00:18:14.281308 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.281403 kubelet[2742]: W0912 00:18:14.281351 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.281403 kubelet[2742]: E0912 00:18:14.281372 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.282007 kubelet[2742]: E0912 00:18:14.281972 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.282007 kubelet[2742]: W0912 00:18:14.281986 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.282330 kubelet[2742]: E0912 00:18:14.282297 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.282695 kubelet[2742]: E0912 00:18:14.282664 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.282793 kubelet[2742]: W0912 00:18:14.282713 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.282836 kubelet[2742]: E0912 00:18:14.282825 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.283359 kubelet[2742]: E0912 00:18:14.283337 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.283359 kubelet[2742]: W0912 00:18:14.283353 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.283459 kubelet[2742]: E0912 00:18:14.283442 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.283738 kubelet[2742]: E0912 00:18:14.283711 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.283738 kubelet[2742]: W0912 00:18:14.283731 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.283843 kubelet[2742]: E0912 00:18:14.283772 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.284017 kubelet[2742]: E0912 00:18:14.283997 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.284017 kubelet[2742]: W0912 00:18:14.284009 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.284110 kubelet[2742]: E0912 00:18:14.284053 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.284413 kubelet[2742]: E0912 00:18:14.284378 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.284413 kubelet[2742]: W0912 00:18:14.284389 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.284413 kubelet[2742]: E0912 00:18:14.284399 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.284654 kubelet[2742]: E0912 00:18:14.284590 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.284654 kubelet[2742]: W0912 00:18:14.284639 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.284654 kubelet[2742]: E0912 00:18:14.284650 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:14.285186 kubelet[2742]: E0912 00:18:14.285132 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:14.285186 kubelet[2742]: W0912 00:18:14.285151 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:14.285186 kubelet[2742]: E0912 00:18:14.285161 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:14.285460 systemd[1]: Started cri-containerd-7f5de40c0da02039a87bb1305078ad8efa4f96d94924c0e5507147903f653c7d.scope - libcontainer container 7f5de40c0da02039a87bb1305078ad8efa4f96d94924c0e5507147903f653c7d. Sep 12 00:18:14.314706 containerd[1593]: time="2025-09-12T00:18:14.314655408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7w2gt,Uid:dba5a3e7-0c27-44db-9a59-3f6b884a54fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f5de40c0da02039a87bb1305078ad8efa4f96d94924c0e5507147903f653c7d\"" Sep 12 00:18:15.794954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609656574.mount: Deactivated successfully. 
Sep 12 00:18:16.197843 kubelet[2742]: E0912 00:18:16.197794 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2x6v" podUID="5c37c94c-e43d-4388-9658-2398a6df2ea4" Sep 12 00:18:16.223030 containerd[1593]: time="2025-09-12T00:18:16.222984598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:16.223841 containerd[1593]: time="2025-09-12T00:18:16.223811693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 00:18:16.225463 containerd[1593]: time="2025-09-12T00:18:16.225425705Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:16.227658 containerd[1593]: time="2025-09-12T00:18:16.227628320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:16.228143 containerd[1593]: time="2025-09-12T00:18:16.228122065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.015358006s" Sep 12 00:18:16.228186 containerd[1593]: time="2025-09-12T00:18:16.228147773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 00:18:16.228942 containerd[1593]: time="2025-09-12T00:18:16.228912720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 00:18:16.236983 containerd[1593]: time="2025-09-12T00:18:16.236936073Z" level=info msg="CreateContainer within sandbox \"5bb058121cbdf47c17a49bc1da503a1f2a8ae3f35de23363c2347f96c97b53ca\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 00:18:16.247041 containerd[1593]: time="2025-09-12T00:18:16.246251850Z" level=info msg="Container 3bdab858139984e04940b1f3192fc531857a9ba589eef10ac4cdac109e555641: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:16.255719 containerd[1593]: time="2025-09-12T00:18:16.255668198Z" level=info msg="CreateContainer within sandbox \"5bb058121cbdf47c17a49bc1da503a1f2a8ae3f35de23363c2347f96c97b53ca\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3bdab858139984e04940b1f3192fc531857a9ba589eef10ac4cdac109e555641\"" Sep 12 00:18:16.256410 containerd[1593]: time="2025-09-12T00:18:16.256379624Z" level=info msg="StartContainer for \"3bdab858139984e04940b1f3192fc531857a9ba589eef10ac4cdac109e555641\"" Sep 12 00:18:16.257717 containerd[1593]: time="2025-09-12T00:18:16.257689821Z" level=info msg="connecting to shim 3bdab858139984e04940b1f3192fc531857a9ba589eef10ac4cdac109e555641" address="unix:///run/containerd/s/356726164993a9654320d4ce9ea91e1eb7b7cf1a7eb67298c647b450c61dd1e5" protocol=ttrpc version=3 Sep 12 00:18:16.280515 systemd[1]: Started cri-containerd-3bdab858139984e04940b1f3192fc531857a9ba589eef10ac4cdac109e555641.scope - libcontainer container 3bdab858139984e04940b1f3192fc531857a9ba589eef10ac4cdac109e555641. 
Sep 12 00:18:16.340163 containerd[1593]: time="2025-09-12T00:18:16.340121904Z" level=info msg="StartContainer for \"3bdab858139984e04940b1f3192fc531857a9ba589eef10ac4cdac109e555641\" returns successfully" Sep 12 00:18:17.274383 kubelet[2742]: E0912 00:18:17.274344 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:17.276718 kubelet[2742]: E0912 00:18:17.276690 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.276718 kubelet[2742]: W0912 00:18:17.276716 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.276797 kubelet[2742]: E0912 00:18:17.276745 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.276988 kubelet[2742]: E0912 00:18:17.276962 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.276988 kubelet[2742]: W0912 00:18:17.276978 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.277049 kubelet[2742]: E0912 00:18:17.276989 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.277215 kubelet[2742]: E0912 00:18:17.277197 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.277215 kubelet[2742]: W0912 00:18:17.277212 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.277268 kubelet[2742]: E0912 00:18:17.277223 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.277504 kubelet[2742]: E0912 00:18:17.277478 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.277504 kubelet[2742]: W0912 00:18:17.277494 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.277558 kubelet[2742]: E0912 00:18:17.277505 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.277747 kubelet[2742]: E0912 00:18:17.277725 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.277747 kubelet[2742]: W0912 00:18:17.277744 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.277798 kubelet[2742]: E0912 00:18:17.277756 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.277961 kubelet[2742]: E0912 00:18:17.277944 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.277961 kubelet[2742]: W0912 00:18:17.277957 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.278011 kubelet[2742]: E0912 00:18:17.277968 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.278201 kubelet[2742]: E0912 00:18:17.278173 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.278201 kubelet[2742]: W0912 00:18:17.278187 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.278201 kubelet[2742]: E0912 00:18:17.278198 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.278414 kubelet[2742]: E0912 00:18:17.278397 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.278414 kubelet[2742]: W0912 00:18:17.278411 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.278468 kubelet[2742]: E0912 00:18:17.278422 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.278651 kubelet[2742]: E0912 00:18:17.278633 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.278651 kubelet[2742]: W0912 00:18:17.278647 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.278701 kubelet[2742]: E0912 00:18:17.278660 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.278871 kubelet[2742]: E0912 00:18:17.278854 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.278871 kubelet[2742]: W0912 00:18:17.278867 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.278920 kubelet[2742]: E0912 00:18:17.278878 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.279088 kubelet[2742]: E0912 00:18:17.279071 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.279088 kubelet[2742]: W0912 00:18:17.279085 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.279152 kubelet[2742]: E0912 00:18:17.279116 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.279323 kubelet[2742]: E0912 00:18:17.279306 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.279323 kubelet[2742]: W0912 00:18:17.279319 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.279375 kubelet[2742]: E0912 00:18:17.279331 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.279583 kubelet[2742]: E0912 00:18:17.279563 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.279583 kubelet[2742]: W0912 00:18:17.279579 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.279646 kubelet[2742]: E0912 00:18:17.279602 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.279809 kubelet[2742]: E0912 00:18:17.279793 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.279809 kubelet[2742]: W0912 00:18:17.279805 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.279864 kubelet[2742]: E0912 00:18:17.279814 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.280024 kubelet[2742]: E0912 00:18:17.280007 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.280024 kubelet[2742]: W0912 00:18:17.280020 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.280073 kubelet[2742]: E0912 00:18:17.280030 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.295174 kubelet[2742]: E0912 00:18:17.295140 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.295174 kubelet[2742]: W0912 00:18:17.295164 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.295337 kubelet[2742]: E0912 00:18:17.295186 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.295441 kubelet[2742]: E0912 00:18:17.295421 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.295441 kubelet[2742]: W0912 00:18:17.295435 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.295506 kubelet[2742]: E0912 00:18:17.295452 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.295738 kubelet[2742]: E0912 00:18:17.295704 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.295738 kubelet[2742]: W0912 00:18:17.295725 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.295824 kubelet[2742]: E0912 00:18:17.295750 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.295980 kubelet[2742]: E0912 00:18:17.295960 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.295980 kubelet[2742]: W0912 00:18:17.295976 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.296051 kubelet[2742]: E0912 00:18:17.295993 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.296224 kubelet[2742]: E0912 00:18:17.296205 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.296224 kubelet[2742]: W0912 00:18:17.296219 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.296315 kubelet[2742]: E0912 00:18:17.296236 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.296506 kubelet[2742]: E0912 00:18:17.296485 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.296506 kubelet[2742]: W0912 00:18:17.296499 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.296586 kubelet[2742]: E0912 00:18:17.296517 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.296870 kubelet[2742]: E0912 00:18:17.296844 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.296918 kubelet[2742]: W0912 00:18:17.296870 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.296918 kubelet[2742]: E0912 00:18:17.296895 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.297226 kubelet[2742]: E0912 00:18:17.297206 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.297226 kubelet[2742]: W0912 00:18:17.297218 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.297316 kubelet[2742]: E0912 00:18:17.297257 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.297458 kubelet[2742]: E0912 00:18:17.297437 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.297458 kubelet[2742]: W0912 00:18:17.297451 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.297545 kubelet[2742]: E0912 00:18:17.297488 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.297757 kubelet[2742]: E0912 00:18:17.297710 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.297757 kubelet[2742]: W0912 00:18:17.297727 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.297757 kubelet[2742]: E0912 00:18:17.297747 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.297988 kubelet[2742]: E0912 00:18:17.297947 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.297988 kubelet[2742]: W0912 00:18:17.297958 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.297988 kubelet[2742]: E0912 00:18:17.297974 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.298245 kubelet[2742]: E0912 00:18:17.298223 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.298245 kubelet[2742]: W0912 00:18:17.298240 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.298334 kubelet[2742]: E0912 00:18:17.298261 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.298503 kubelet[2742]: E0912 00:18:17.298485 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.298503 kubelet[2742]: W0912 00:18:17.298499 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.298576 kubelet[2742]: E0912 00:18:17.298517 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.298845 kubelet[2742]: E0912 00:18:17.298829 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.298881 kubelet[2742]: W0912 00:18:17.298844 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.298881 kubelet[2742]: E0912 00:18:17.298863 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.299063 kubelet[2742]: E0912 00:18:17.299050 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.299110 kubelet[2742]: W0912 00:18:17.299062 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.299110 kubelet[2742]: E0912 00:18:17.299078 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.299452 kubelet[2742]: E0912 00:18:17.299438 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.299452 kubelet[2742]: W0912 00:18:17.299451 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.299505 kubelet[2742]: E0912 00:18:17.299469 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.299784 kubelet[2742]: E0912 00:18:17.299765 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.299784 kubelet[2742]: W0912 00:18:17.299781 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.299851 kubelet[2742]: E0912 00:18:17.299799 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:18:17.300039 kubelet[2742]: E0912 00:18:17.300022 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:18:17.300039 kubelet[2742]: W0912 00:18:17.300036 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:18:17.300082 kubelet[2742]: E0912 00:18:17.300047 2742 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:18:17.904193 containerd[1593]: time="2025-09-12T00:18:17.904114215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:17.905031 containerd[1593]: time="2025-09-12T00:18:17.904993227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 00:18:17.906313 containerd[1593]: time="2025-09-12T00:18:17.906260962Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:17.908227 containerd[1593]: time="2025-09-12T00:18:17.908188746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:17.908790 containerd[1593]: time="2025-09-12T00:18:17.908751831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.679808503s" Sep 12 00:18:17.908790 containerd[1593]: time="2025-09-12T00:18:17.908787699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 00:18:17.910931 containerd[1593]: time="2025-09-12T00:18:17.910771728Z" level=info msg="CreateContainer within sandbox \"7f5de40c0da02039a87bb1305078ad8efa4f96d94924c0e5507147903f653c7d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 00:18:17.921725 containerd[1593]: time="2025-09-12T00:18:17.921653888Z" level=info msg="Container aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:17.935279 containerd[1593]: time="2025-09-12T00:18:17.935222145Z" level=info msg="CreateContainer within sandbox \"7f5de40c0da02039a87bb1305078ad8efa4f96d94924c0e5507147903f653c7d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55\"" Sep 12 00:18:17.935965 containerd[1593]: time="2025-09-12T00:18:17.935916627Z" level=info msg="StartContainer for \"aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55\"" Sep 12 00:18:17.937869 containerd[1593]: time="2025-09-12T00:18:17.937842668Z" level=info msg="connecting to shim aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55" address="unix:///run/containerd/s/7ac9a141ff47592a60ac452d0052c0fae911cd18d704f95262c615264f6eeaef" protocol=ttrpc version=3 Sep 12 00:18:17.968302 systemd[1]: Started cri-containerd-aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55.scope - libcontainer container 
aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55. Sep 12 00:18:18.031707 systemd[1]: cri-containerd-aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55.scope: Deactivated successfully. Sep 12 00:18:18.034727 containerd[1593]: time="2025-09-12T00:18:18.034685440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55\" id:\"aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55\" pid:3439 exited_at:{seconds:1757636298 nanos:34069475}" Sep 12 00:18:18.198111 kubelet[2742]: E0912 00:18:18.197942 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2x6v" podUID="5c37c94c-e43d-4388-9658-2398a6df2ea4" Sep 12 00:18:18.460799 containerd[1593]: time="2025-09-12T00:18:18.460653347Z" level=info msg="received exit event container_id:\"aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55\" id:\"aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55\" pid:3439 exited_at:{seconds:1757636298 nanos:34069475}" Sep 12 00:18:18.463985 containerd[1593]: time="2025-09-12T00:18:18.463551392Z" level=info msg="StartContainer for \"aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55\" returns successfully" Sep 12 00:18:18.465286 kubelet[2742]: I0912 00:18:18.465257 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:18:18.465648 kubelet[2742]: E0912 00:18:18.465571 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:18.495334 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-aa12fe28c9d3b0de2fd7d18bc0c6f58a067851430033cb321d2c1df8fc045d55-rootfs.mount: Deactivated successfully. Sep 12 00:18:19.471132 containerd[1593]: time="2025-09-12T00:18:19.470471008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 00:18:19.484136 kubelet[2742]: I0912 00:18:19.483808 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74d596f9d9-2d7zt" podStartSLOduration=4.467215295 podStartE2EDuration="6.483743203s" podCreationTimestamp="2025-09-12 00:18:13 +0000 UTC" firstStartedPulling="2025-09-12 00:18:14.212282187 +0000 UTC m=+19.117845079" lastFinishedPulling="2025-09-12 00:18:16.228810095 +0000 UTC m=+21.134372987" observedRunningTime="2025-09-12 00:18:17.579053596 +0000 UTC m=+22.484616488" watchObservedRunningTime="2025-09-12 00:18:19.483743203 +0000 UTC m=+24.389306095" Sep 12 00:18:20.197705 kubelet[2742]: E0912 00:18:20.197626 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2x6v" podUID="5c37c94c-e43d-4388-9658-2398a6df2ea4" Sep 12 00:18:20.767136 kubelet[2742]: I0912 00:18:20.766183 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:18:20.767136 kubelet[2742]: E0912 00:18:20.766600 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:21.472958 kubelet[2742]: E0912 00:18:21.472921 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:22.198025 kubelet[2742]: E0912 00:18:22.197966 2742 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2x6v" podUID="5c37c94c-e43d-4388-9658-2398a6df2ea4" Sep 12 00:18:24.197336 kubelet[2742]: E0912 00:18:24.197264 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2x6v" podUID="5c37c94c-e43d-4388-9658-2398a6df2ea4" Sep 12 00:18:25.571702 containerd[1593]: time="2025-09-12T00:18:25.571617019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:25.594965 containerd[1593]: time="2025-09-12T00:18:25.594870457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 00:18:25.682508 containerd[1593]: time="2025-09-12T00:18:25.682421324Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:25.879682 containerd[1593]: time="2025-09-12T00:18:25.879577326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:25.880616 containerd[1593]: time="2025-09-12T00:18:25.880568773Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 6.410040887s" Sep 12 00:18:25.880699 containerd[1593]: time="2025-09-12T00:18:25.880620822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 00:18:25.883924 containerd[1593]: time="2025-09-12T00:18:25.883871457Z" level=info msg="CreateContainer within sandbox \"7f5de40c0da02039a87bb1305078ad8efa4f96d94924c0e5507147903f653c7d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 00:18:25.923844 containerd[1593]: time="2025-09-12T00:18:25.923765852Z" level=info msg="Container 27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:25.936793 containerd[1593]: time="2025-09-12T00:18:25.936719262Z" level=info msg="CreateContainer within sandbox \"7f5de40c0da02039a87bb1305078ad8efa4f96d94924c0e5507147903f653c7d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de\"" Sep 12 00:18:25.937347 containerd[1593]: time="2025-09-12T00:18:25.937303112Z" level=info msg="StartContainer for \"27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de\"" Sep 12 00:18:25.939078 containerd[1593]: time="2025-09-12T00:18:25.939044203Z" level=info msg="connecting to shim 27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de" address="unix:///run/containerd/s/7ac9a141ff47592a60ac452d0052c0fae911cd18d704f95262c615264f6eeaef" protocol=ttrpc version=3 Sep 12 00:18:25.969447 systemd[1]: Started cri-containerd-27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de.scope - libcontainer container 27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de. 
Sep 12 00:18:26.197745 kubelet[2742]: E0912 00:18:26.197542 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2x6v" podUID="5c37c94c-e43d-4388-9658-2398a6df2ea4" Sep 12 00:18:26.773348 containerd[1593]: time="2025-09-12T00:18:26.773294259Z" level=info msg="StartContainer for \"27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de\" returns successfully" Sep 12 00:18:28.198319 kubelet[2742]: E0912 00:18:28.198242 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2x6v" podUID="5c37c94c-e43d-4388-9658-2398a6df2ea4" Sep 12 00:18:29.395751 systemd[1]: cri-containerd-27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de.scope: Deactivated successfully. Sep 12 00:18:29.396247 systemd[1]: cri-containerd-27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de.scope: Consumed 664ms CPU time, 180.8M memory peak, 276K read from disk, 171.3M written to disk. 
Sep 12 00:18:29.396938 containerd[1593]: time="2025-09-12T00:18:29.396901522Z" level=info msg="received exit event container_id:\"27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de\" id:\"27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de\" pid:3501 exited_at:{seconds:1757636309 nanos:396440553}" Sep 12 00:18:29.397488 containerd[1593]: time="2025-09-12T00:18:29.397054570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de\" id:\"27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de\" pid:3501 exited_at:{seconds:1757636309 nanos:396440553}" Sep 12 00:18:29.424831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27b19a5f82688e1145cbb89b675ece68dd3a986ac1151414640b7fe5d95753de-rootfs.mount: Deactivated successfully. Sep 12 00:18:29.465757 kubelet[2742]: I0912 00:18:29.465709 2742 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 00:18:29.930200 systemd[1]: Created slice kubepods-besteffort-pode916b2ca_39a3_4524_af0b_67438570f595.slice - libcontainer container kubepods-besteffort-pode916b2ca_39a3_4524_af0b_67438570f595.slice. Sep 12 00:18:29.950333 systemd[1]: Created slice kubepods-burstable-pod98a4da3c_812d_46bd_ae9f_8908fc1692b4.slice - libcontainer container kubepods-burstable-pod98a4da3c_812d_46bd_ae9f_8908fc1692b4.slice. Sep 12 00:18:29.960287 systemd[1]: Created slice kubepods-burstable-pod85071725_7a62_4ac1_91ba_a54cc8e19425.slice - libcontainer container kubepods-burstable-pod85071725_7a62_4ac1_91ba_a54cc8e19425.slice. Sep 12 00:18:29.972781 systemd[1]: Created slice kubepods-besteffort-pod1c15d419_f4da_4b74_81c5_c34a123d9cc5.slice - libcontainer container kubepods-besteffort-pod1c15d419_f4da_4b74_81c5_c34a123d9cc5.slice. 
Sep 12 00:18:29.979717 kubelet[2742]: I0912 00:18:29.979670 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mznd\" (UniqueName: \"kubernetes.io/projected/54ccc286-153d-466d-bd25-f10ab1e1e2cc-kube-api-access-5mznd\") pod \"whisker-54bb4bb869-khzms\" (UID: \"54ccc286-153d-466d-bd25-f10ab1e1e2cc\") " pod="calico-system/whisker-54bb4bb869-khzms" Sep 12 00:18:29.979717 kubelet[2742]: I0912 00:18:29.979717 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94zhp\" (UniqueName: \"kubernetes.io/projected/4f52ff49-8439-42a5-9dd9-7564715fa3b0-kube-api-access-94zhp\") pod \"calico-apiserver-7966cf8c7-k2jlv\" (UID: \"4f52ff49-8439-42a5-9dd9-7564715fa3b0\") " pod="calico-apiserver/calico-apiserver-7966cf8c7-k2jlv" Sep 12 00:18:29.979887 kubelet[2742]: I0912 00:18:29.979741 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxlvn\" (UniqueName: \"kubernetes.io/projected/1c15d419-f4da-4b74-81c5-c34a123d9cc5-kube-api-access-sxlvn\") pod \"calico-apiserver-7966cf8c7-vgq56\" (UID: \"1c15d419-f4da-4b74-81c5-c34a123d9cc5\") " pod="calico-apiserver/calico-apiserver-7966cf8c7-vgq56" Sep 12 00:18:29.979887 kubelet[2742]: I0912 00:18:29.979762 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e62d2f6-3470-41e9-9111-e097821131f8-config\") pod \"goldmane-54d579b49d-ppnzq\" (UID: \"5e62d2f6-3470-41e9-9111-e097821131f8\") " pod="calico-system/goldmane-54d579b49d-ppnzq" Sep 12 00:18:29.979887 kubelet[2742]: I0912 00:18:29.979782 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5e62d2f6-3470-41e9-9111-e097821131f8-goldmane-key-pair\") pod \"goldmane-54d579b49d-ppnzq\" (UID: 
\"5e62d2f6-3470-41e9-9111-e097821131f8\") " pod="calico-system/goldmane-54d579b49d-ppnzq" Sep 12 00:18:29.979887 kubelet[2742]: I0912 00:18:29.979796 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54ccc286-153d-466d-bd25-f10ab1e1e2cc-whisker-backend-key-pair\") pod \"whisker-54bb4bb869-khzms\" (UID: \"54ccc286-153d-466d-bd25-f10ab1e1e2cc\") " pod="calico-system/whisker-54bb4bb869-khzms" Sep 12 00:18:29.979887 kubelet[2742]: I0912 00:18:29.979813 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz9hn\" (UniqueName: \"kubernetes.io/projected/85071725-7a62-4ac1-91ba-a54cc8e19425-kube-api-access-fz9hn\") pod \"coredns-668d6bf9bc-4lpnl\" (UID: \"85071725-7a62-4ac1-91ba-a54cc8e19425\") " pod="kube-system/coredns-668d6bf9bc-4lpnl" Sep 12 00:18:29.980023 kubelet[2742]: I0912 00:18:29.979828 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crrm6\" (UniqueName: \"kubernetes.io/projected/98a4da3c-812d-46bd-ae9f-8908fc1692b4-kube-api-access-crrm6\") pod \"coredns-668d6bf9bc-n4r8r\" (UID: \"98a4da3c-812d-46bd-ae9f-8908fc1692b4\") " pod="kube-system/coredns-668d6bf9bc-n4r8r" Sep 12 00:18:29.980023 kubelet[2742]: I0912 00:18:29.979843 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e916b2ca-39a3-4524-af0b-67438570f595-tigera-ca-bundle\") pod \"calico-kube-controllers-d9cb6fbf4-9jgs9\" (UID: \"e916b2ca-39a3-4524-af0b-67438570f595\") " pod="calico-system/calico-kube-controllers-d9cb6fbf4-9jgs9" Sep 12 00:18:29.980023 kubelet[2742]: I0912 00:18:29.979859 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5e62d2f6-3470-41e9-9111-e097821131f8-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-ppnzq\" (UID: \"5e62d2f6-3470-41e9-9111-e097821131f8\") " pod="calico-system/goldmane-54d579b49d-ppnzq" Sep 12 00:18:29.980023 kubelet[2742]: I0912 00:18:29.979872 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpmsk\" (UniqueName: \"kubernetes.io/projected/5e62d2f6-3470-41e9-9111-e097821131f8-kube-api-access-rpmsk\") pod \"goldmane-54d579b49d-ppnzq\" (UID: \"5e62d2f6-3470-41e9-9111-e097821131f8\") " pod="calico-system/goldmane-54d579b49d-ppnzq" Sep 12 00:18:29.980023 kubelet[2742]: I0912 00:18:29.979886 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54ccc286-153d-466d-bd25-f10ab1e1e2cc-whisker-ca-bundle\") pod \"whisker-54bb4bb869-khzms\" (UID: \"54ccc286-153d-466d-bd25-f10ab1e1e2cc\") " pod="calico-system/whisker-54bb4bb869-khzms" Sep 12 00:18:29.980172 kubelet[2742]: I0912 00:18:29.979903 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d2xp\" (UniqueName: \"kubernetes.io/projected/e916b2ca-39a3-4524-af0b-67438570f595-kube-api-access-2d2xp\") pod \"calico-kube-controllers-d9cb6fbf4-9jgs9\" (UID: \"e916b2ca-39a3-4524-af0b-67438570f595\") " pod="calico-system/calico-kube-controllers-d9cb6fbf4-9jgs9" Sep 12 00:18:29.980172 kubelet[2742]: I0912 00:18:29.979952 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85071725-7a62-4ac1-91ba-a54cc8e19425-config-volume\") pod \"coredns-668d6bf9bc-4lpnl\" (UID: \"85071725-7a62-4ac1-91ba-a54cc8e19425\") " pod="kube-system/coredns-668d6bf9bc-4lpnl" Sep 12 00:18:29.980172 kubelet[2742]: I0912 00:18:29.979969 2742 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98a4da3c-812d-46bd-ae9f-8908fc1692b4-config-volume\") pod \"coredns-668d6bf9bc-n4r8r\" (UID: \"98a4da3c-812d-46bd-ae9f-8908fc1692b4\") " pod="kube-system/coredns-668d6bf9bc-n4r8r" Sep 12 00:18:29.980172 kubelet[2742]: I0912 00:18:29.979988 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c15d419-f4da-4b74-81c5-c34a123d9cc5-calico-apiserver-certs\") pod \"calico-apiserver-7966cf8c7-vgq56\" (UID: \"1c15d419-f4da-4b74-81c5-c34a123d9cc5\") " pod="calico-apiserver/calico-apiserver-7966cf8c7-vgq56" Sep 12 00:18:29.980172 kubelet[2742]: I0912 00:18:29.980005 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4f52ff49-8439-42a5-9dd9-7564715fa3b0-calico-apiserver-certs\") pod \"calico-apiserver-7966cf8c7-k2jlv\" (UID: \"4f52ff49-8439-42a5-9dd9-7564715fa3b0\") " pod="calico-apiserver/calico-apiserver-7966cf8c7-k2jlv" Sep 12 00:18:29.987343 systemd[1]: Created slice kubepods-besteffort-pod5e62d2f6_3470_41e9_9111_e097821131f8.slice - libcontainer container kubepods-besteffort-pod5e62d2f6_3470_41e9_9111_e097821131f8.slice. Sep 12 00:18:30.001452 systemd[1]: Created slice kubepods-besteffort-pod4f52ff49_8439_42a5_9dd9_7564715fa3b0.slice - libcontainer container kubepods-besteffort-pod4f52ff49_8439_42a5_9dd9_7564715fa3b0.slice. Sep 12 00:18:30.011309 systemd[1]: Created slice kubepods-besteffort-pod54ccc286_153d_466d_bd25_f10ab1e1e2cc.slice - libcontainer container kubepods-besteffort-pod54ccc286_153d_466d_bd25_f10ab1e1e2cc.slice. Sep 12 00:18:30.204191 systemd[1]: Created slice kubepods-besteffort-pod5c37c94c_e43d_4388_9658_2398a6df2ea4.slice - libcontainer container kubepods-besteffort-pod5c37c94c_e43d_4388_9658_2398a6df2ea4.slice. 
Sep 12 00:18:30.207261 containerd[1593]: time="2025-09-12T00:18:30.207216433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2x6v,Uid:5c37c94c-e43d-4388-9658-2398a6df2ea4,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:30.248718 containerd[1593]: time="2025-09-12T00:18:30.248385131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9cb6fbf4-9jgs9,Uid:e916b2ca-39a3-4524-af0b-67438570f595,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:30.258338 kubelet[2742]: E0912 00:18:30.258258 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:30.259517 containerd[1593]: time="2025-09-12T00:18:30.259460405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4r8r,Uid:98a4da3c-812d-46bd-ae9f-8908fc1692b4,Namespace:kube-system,Attempt:0,}" Sep 12 00:18:30.268777 kubelet[2742]: E0912 00:18:30.268408 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:30.269688 containerd[1593]: time="2025-09-12T00:18:30.269633050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lpnl,Uid:85071725-7a62-4ac1-91ba-a54cc8e19425,Namespace:kube-system,Attempt:0,}" Sep 12 00:18:30.284413 containerd[1593]: time="2025-09-12T00:18:30.284330309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-vgq56,Uid:1c15d419-f4da-4b74-81c5-c34a123d9cc5,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:18:30.300628 containerd[1593]: time="2025-09-12T00:18:30.300554102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ppnzq,Uid:5e62d2f6-3470-41e9-9111-e097821131f8,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:30.310133 containerd[1593]: 
time="2025-09-12T00:18:30.309729891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-k2jlv,Uid:4f52ff49-8439-42a5-9dd9-7564715fa3b0,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:18:30.318975 containerd[1593]: time="2025-09-12T00:18:30.318912132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54bb4bb869-khzms,Uid:54ccc286-153d-466d-bd25-f10ab1e1e2cc,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:30.376045 containerd[1593]: time="2025-09-12T00:18:30.375924046Z" level=error msg="Failed to destroy network for sandbox \"6b87204a3cdd4bcce10db56eb4d9368ff073317eb30711974fe490b801bc86b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.386722 containerd[1593]: time="2025-09-12T00:18:30.386558791Z" level=error msg="Failed to destroy network for sandbox \"f93592ba31a0f5318c3b9f8cf042edca6c246bcc5a4f62a537890a3ea5621271\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.391347 containerd[1593]: time="2025-09-12T00:18:30.391277871Z" level=error msg="Failed to destroy network for sandbox \"652cb1357dcb95cbfec03eda32e6eab57a4f4da33bfb00770faa8dfcd74b736b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.397591 containerd[1593]: time="2025-09-12T00:18:30.397507926Z" level=error msg="Failed to destroy network for sandbox \"4915a1d96249a02c47732020c4c70a6c260bf9a19ba671f14ee87eae0c949c6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 00:18:30.411137 containerd[1593]: time="2025-09-12T00:18:30.410910630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lpnl,Uid:85071725-7a62-4ac1-91ba-a54cc8e19425,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b87204a3cdd4bcce10db56eb4d9368ff073317eb30711974fe490b801bc86b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.412280 containerd[1593]: time="2025-09-12T00:18:30.412205327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2x6v,Uid:5c37c94c-e43d-4388-9658-2398a6df2ea4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"652cb1357dcb95cbfec03eda32e6eab57a4f4da33bfb00770faa8dfcd74b736b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.412658 containerd[1593]: time="2025-09-12T00:18:30.412564012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9cb6fbf4-9jgs9,Uid:e916b2ca-39a3-4524-af0b-67438570f595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93592ba31a0f5318c3b9f8cf042edca6c246bcc5a4f62a537890a3ea5621271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.413390 containerd[1593]: time="2025-09-12T00:18:30.412635867Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-vgq56,Uid:1c15d419-f4da-4b74-81c5-c34a123d9cc5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4915a1d96249a02c47732020c4c70a6c260bf9a19ba671f14ee87eae0c949c6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.418411 kubelet[2742]: E0912 00:18:30.418332 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4915a1d96249a02c47732020c4c70a6c260bf9a19ba671f14ee87eae0c949c6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.418529 kubelet[2742]: E0912 00:18:30.418451 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4915a1d96249a02c47732020c4c70a6c260bf9a19ba671f14ee87eae0c949c6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7966cf8c7-vgq56" Sep 12 00:18:30.418529 kubelet[2742]: E0912 00:18:30.418484 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4915a1d96249a02c47732020c4c70a6c260bf9a19ba671f14ee87eae0c949c6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7966cf8c7-vgq56" Sep 12 00:18:30.418615 kubelet[2742]: E0912 00:18:30.418531 2742 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7966cf8c7-vgq56_calico-apiserver(1c15d419-f4da-4b74-81c5-c34a123d9cc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7966cf8c7-vgq56_calico-apiserver(1c15d419-f4da-4b74-81c5-c34a123d9cc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4915a1d96249a02c47732020c4c70a6c260bf9a19ba671f14ee87eae0c949c6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7966cf8c7-vgq56" podUID="1c15d419-f4da-4b74-81c5-c34a123d9cc5" Sep 12 00:18:30.418831 kubelet[2742]: E0912 00:18:30.418804 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b87204a3cdd4bcce10db56eb4d9368ff073317eb30711974fe490b801bc86b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.418890 kubelet[2742]: E0912 00:18:30.418832 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b87204a3cdd4bcce10db56eb4d9368ff073317eb30711974fe490b801bc86b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4lpnl" Sep 12 00:18:30.418890 kubelet[2742]: E0912 00:18:30.418847 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b87204a3cdd4bcce10db56eb4d9368ff073317eb30711974fe490b801bc86b5\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4lpnl" Sep 12 00:18:30.418890 kubelet[2742]: E0912 00:18:30.418871 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4lpnl_kube-system(85071725-7a62-4ac1-91ba-a54cc8e19425)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4lpnl_kube-system(85071725-7a62-4ac1-91ba-a54cc8e19425)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b87204a3cdd4bcce10db56eb4d9368ff073317eb30711974fe490b801bc86b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4lpnl" podUID="85071725-7a62-4ac1-91ba-a54cc8e19425" Sep 12 00:18:30.419032 kubelet[2742]: E0912 00:18:30.418901 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"652cb1357dcb95cbfec03eda32e6eab57a4f4da33bfb00770faa8dfcd74b736b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.419032 kubelet[2742]: E0912 00:18:30.418915 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"652cb1357dcb95cbfec03eda32e6eab57a4f4da33bfb00770faa8dfcd74b736b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d2x6v" Sep 12 00:18:30.419032 kubelet[2742]: E0912 00:18:30.418931 2742 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"652cb1357dcb95cbfec03eda32e6eab57a4f4da33bfb00770faa8dfcd74b736b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d2x6v" Sep 12 00:18:30.419614 kubelet[2742]: E0912 00:18:30.418952 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d2x6v_calico-system(5c37c94c-e43d-4388-9658-2398a6df2ea4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d2x6v_calico-system(5c37c94c-e43d-4388-9658-2398a6df2ea4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"652cb1357dcb95cbfec03eda32e6eab57a4f4da33bfb00770faa8dfcd74b736b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d2x6v" podUID="5c37c94c-e43d-4388-9658-2398a6df2ea4" Sep 12 00:18:30.419614 kubelet[2742]: E0912 00:18:30.418973 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93592ba31a0f5318c3b9f8cf042edca6c246bcc5a4f62a537890a3ea5621271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.419614 kubelet[2742]: E0912 00:18:30.418988 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93592ba31a0f5318c3b9f8cf042edca6c246bcc5a4f62a537890a3ea5621271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d9cb6fbf4-9jgs9" Sep 12 00:18:30.419761 kubelet[2742]: E0912 00:18:30.419000 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f93592ba31a0f5318c3b9f8cf042edca6c246bcc5a4f62a537890a3ea5621271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d9cb6fbf4-9jgs9" Sep 12 00:18:30.419761 kubelet[2742]: E0912 00:18:30.419020 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d9cb6fbf4-9jgs9_calico-system(e916b2ca-39a3-4524-af0b-67438570f595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d9cb6fbf4-9jgs9_calico-system(e916b2ca-39a3-4524-af0b-67438570f595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f93592ba31a0f5318c3b9f8cf042edca6c246bcc5a4f62a537890a3ea5621271\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d9cb6fbf4-9jgs9" podUID="e916b2ca-39a3-4524-af0b-67438570f595" Sep 12 00:18:30.421942 containerd[1593]: time="2025-09-12T00:18:30.421880094Z" level=error msg="Failed to destroy network for sandbox \"722f3a0c038df6effe0c096bd3be1e4630fa16cd366dd8a07881377263e354a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.425047 containerd[1593]: time="2025-09-12T00:18:30.425009183Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4r8r,Uid:98a4da3c-812d-46bd-ae9f-8908fc1692b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"722f3a0c038df6effe0c096bd3be1e4630fa16cd366dd8a07881377263e354a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.425828 kubelet[2742]: E0912 00:18:30.425706 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"722f3a0c038df6effe0c096bd3be1e4630fa16cd366dd8a07881377263e354a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.425990 kubelet[2742]: E0912 00:18:30.425797 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"722f3a0c038df6effe0c096bd3be1e4630fa16cd366dd8a07881377263e354a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n4r8r" Sep 12 00:18:30.425990 kubelet[2742]: E0912 00:18:30.425934 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"722f3a0c038df6effe0c096bd3be1e4630fa16cd366dd8a07881377263e354a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n4r8r" Sep 12 00:18:30.426268 kubelet[2742]: E0912 00:18:30.426203 2742 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n4r8r_kube-system(98a4da3c-812d-46bd-ae9f-8908fc1692b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n4r8r_kube-system(98a4da3c-812d-46bd-ae9f-8908fc1692b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"722f3a0c038df6effe0c096bd3be1e4630fa16cd366dd8a07881377263e354a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n4r8r" podUID="98a4da3c-812d-46bd-ae9f-8908fc1692b4" Sep 12 00:18:30.442444 containerd[1593]: time="2025-09-12T00:18:30.442355829Z" level=error msg="Failed to destroy network for sandbox \"c175dc5c03c2abfe347f3bd58970f4c189c8adbed08b3d084fc6472f0999ac39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.446373 systemd[1]: run-netns-cni\x2de359c496\x2daa33\x2dda76\x2d563d\x2d635f09075b61.mount: Deactivated successfully. 
Sep 12 00:18:30.447504 containerd[1593]: time="2025-09-12T00:18:30.446383659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ppnzq,Uid:5e62d2f6-3470-41e9-9111-e097821131f8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c175dc5c03c2abfe347f3bd58970f4c189c8adbed08b3d084fc6472f0999ac39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.447597 kubelet[2742]: E0912 00:18:30.446727 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c175dc5c03c2abfe347f3bd58970f4c189c8adbed08b3d084fc6472f0999ac39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.447597 kubelet[2742]: E0912 00:18:30.446798 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c175dc5c03c2abfe347f3bd58970f4c189c8adbed08b3d084fc6472f0999ac39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ppnzq" Sep 12 00:18:30.447597 kubelet[2742]: E0912 00:18:30.446818 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c175dc5c03c2abfe347f3bd58970f4c189c8adbed08b3d084fc6472f0999ac39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-54d579b49d-ppnzq" Sep 12 00:18:30.448910 kubelet[2742]: E0912 00:18:30.448618 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-ppnzq_calico-system(5e62d2f6-3470-41e9-9111-e097821131f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-ppnzq_calico-system(5e62d2f6-3470-41e9-9111-e097821131f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c175dc5c03c2abfe347f3bd58970f4c189c8adbed08b3d084fc6472f0999ac39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-ppnzq" podUID="5e62d2f6-3470-41e9-9111-e097821131f8" Sep 12 00:18:30.450520 containerd[1593]: time="2025-09-12T00:18:30.450452677Z" level=error msg="Failed to destroy network for sandbox \"d74e9af05a718e81fcc1c6a027dc3249723b810addb6a6716bb427c77c857376\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.454529 systemd[1]: run-netns-cni\x2d2f4347a3\x2d38d3\x2dedc8\x2d3c41\x2d1f9d89b5dba6.mount: Deactivated successfully. 
Sep 12 00:18:30.456090 containerd[1593]: time="2025-09-12T00:18:30.455688190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54bb4bb869-khzms,Uid:54ccc286-153d-466d-bd25-f10ab1e1e2cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d74e9af05a718e81fcc1c6a027dc3249723b810addb6a6716bb427c77c857376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.456227 kubelet[2742]: E0912 00:18:30.455990 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d74e9af05a718e81fcc1c6a027dc3249723b810addb6a6716bb427c77c857376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.456227 kubelet[2742]: E0912 00:18:30.456060 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d74e9af05a718e81fcc1c6a027dc3249723b810addb6a6716bb427c77c857376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54bb4bb869-khzms" Sep 12 00:18:30.456768 kubelet[2742]: E0912 00:18:30.456088 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d74e9af05a718e81fcc1c6a027dc3249723b810addb6a6716bb427c77c857376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-54bb4bb869-khzms" Sep 12 00:18:30.457179 kubelet[2742]: E0912 00:18:30.457134 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54bb4bb869-khzms_calico-system(54ccc286-153d-466d-bd25-f10ab1e1e2cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54bb4bb869-khzms_calico-system(54ccc286-153d-466d-bd25-f10ab1e1e2cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d74e9af05a718e81fcc1c6a027dc3249723b810addb6a6716bb427c77c857376\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54bb4bb869-khzms" podUID="54ccc286-153d-466d-bd25-f10ab1e1e2cc" Sep 12 00:18:30.469408 containerd[1593]: time="2025-09-12T00:18:30.469327439Z" level=error msg="Failed to destroy network for sandbox \"e5fedc6e91ec8bc584ce659e271f9527ff115e1c4a916dfaae1a9f77f4124755\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.471077 containerd[1593]: time="2025-09-12T00:18:30.471024492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-k2jlv,Uid:4f52ff49-8439-42a5-9dd9-7564715fa3b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5fedc6e91ec8bc584ce659e271f9527ff115e1c4a916dfaae1a9f77f4124755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.471347 kubelet[2742]: E0912 00:18:30.471301 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"e5fedc6e91ec8bc584ce659e271f9527ff115e1c4a916dfaae1a9f77f4124755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:30.471684 kubelet[2742]: E0912 00:18:30.471378 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5fedc6e91ec8bc584ce659e271f9527ff115e1c4a916dfaae1a9f77f4124755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7966cf8c7-k2jlv" Sep 12 00:18:30.471684 kubelet[2742]: E0912 00:18:30.471402 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5fedc6e91ec8bc584ce659e271f9527ff115e1c4a916dfaae1a9f77f4124755\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7966cf8c7-k2jlv" Sep 12 00:18:30.471684 kubelet[2742]: E0912 00:18:30.471447 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7966cf8c7-k2jlv_calico-apiserver(4f52ff49-8439-42a5-9dd9-7564715fa3b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7966cf8c7-k2jlv_calico-apiserver(4f52ff49-8439-42a5-9dd9-7564715fa3b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5fedc6e91ec8bc584ce659e271f9527ff115e1c4a916dfaae1a9f77f4124755\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7966cf8c7-k2jlv" podUID="4f52ff49-8439-42a5-9dd9-7564715fa3b0" Sep 12 00:18:30.472471 systemd[1]: run-netns-cni\x2d13386fba\x2dbcc3\x2d0841\x2da773\x2d3640da2ce8eb.mount: Deactivated successfully. Sep 12 00:18:30.786550 containerd[1593]: time="2025-09-12T00:18:30.786396925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 00:18:41.035877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852293612.mount: Deactivated successfully. Sep 12 00:18:41.198665 kubelet[2742]: E0912 00:18:41.198597 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:41.199251 containerd[1593]: time="2025-09-12T00:18:41.199167651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lpnl,Uid:85071725-7a62-4ac1-91ba-a54cc8e19425,Namespace:kube-system,Attempt:0,}" Sep 12 00:18:42.198045 containerd[1593]: time="2025-09-12T00:18:42.197975105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-k2jlv,Uid:4f52ff49-8439-42a5-9dd9-7564715fa3b0,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:18:42.198248 containerd[1593]: time="2025-09-12T00:18:42.197984643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9cb6fbf4-9jgs9,Uid:e916b2ca-39a3-4524-af0b-67438570f595,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:42.901045 containerd[1593]: time="2025-09-12T00:18:42.900975611Z" level=error msg="Failed to destroy network for sandbox \"33375d4dcb349e7aeed43037836f94d87fb22bd207578ba9d0fe1f5ef40e5d04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:42.903393 systemd[1]: 
run-netns-cni\x2d5b5bc1d3\x2d6ab3\x2dcaab\x2d9fec\x2d0b954a749ce0.mount: Deactivated successfully. Sep 12 00:18:42.914409 containerd[1593]: time="2025-09-12T00:18:42.914339196Z" level=error msg="Failed to destroy network for sandbox \"4bc2065a7ed0a5f45307a8f2ac135d638f3a017e5ede59488339fe93fd8e88df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:42.917142 systemd[1]: run-netns-cni\x2d9ac7f284\x2d5065\x2d9c96\x2d92c7\x2d3ddb9ed3f8f5.mount: Deactivated successfully. Sep 12 00:18:42.992572 containerd[1593]: time="2025-09-12T00:18:42.992498586Z" level=error msg="Failed to destroy network for sandbox \"4ab568eedc75018b8004af9f790ff81f21ef4aad40315447696746a908beb06e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:43.198085 kubelet[2742]: E0912 00:18:43.197903 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:43.199368 containerd[1593]: time="2025-09-12T00:18:43.198710740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54bb4bb869-khzms,Uid:54ccc286-153d-466d-bd25-f10ab1e1e2cc,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:43.199368 containerd[1593]: time="2025-09-12T00:18:43.198717944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4r8r,Uid:98a4da3c-812d-46bd-ae9f-8908fc1692b4,Namespace:kube-system,Attempt:0,}" Sep 12 00:18:43.472787 containerd[1593]: time="2025-09-12T00:18:43.472515931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lpnl,Uid:85071725-7a62-4ac1-91ba-a54cc8e19425,Namespace:kube-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"33375d4dcb349e7aeed43037836f94d87fb22bd207578ba9d0fe1f5ef40e5d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:43.472960 kubelet[2742]: E0912 00:18:43.472823 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33375d4dcb349e7aeed43037836f94d87fb22bd207578ba9d0fe1f5ef40e5d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:43.472960 kubelet[2742]: E0912 00:18:43.472892 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33375d4dcb349e7aeed43037836f94d87fb22bd207578ba9d0fe1f5ef40e5d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4lpnl" Sep 12 00:18:43.472960 kubelet[2742]: E0912 00:18:43.472912 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33375d4dcb349e7aeed43037836f94d87fb22bd207578ba9d0fe1f5ef40e5d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4lpnl" Sep 12 00:18:43.473071 kubelet[2742]: E0912 00:18:43.472965 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4lpnl_kube-system(85071725-7a62-4ac1-91ba-a54cc8e19425)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4lpnl_kube-system(85071725-7a62-4ac1-91ba-a54cc8e19425)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33375d4dcb349e7aeed43037836f94d87fb22bd207578ba9d0fe1f5ef40e5d04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4lpnl" podUID="85071725-7a62-4ac1-91ba-a54cc8e19425" Sep 12 00:18:43.538593 containerd[1593]: time="2025-09-12T00:18:43.538503193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-k2jlv,Uid:4f52ff49-8439-42a5-9dd9-7564715fa3b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc2065a7ed0a5f45307a8f2ac135d638f3a017e5ede59488339fe93fd8e88df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:43.538903 kubelet[2742]: E0912 00:18:43.538817 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc2065a7ed0a5f45307a8f2ac135d638f3a017e5ede59488339fe93fd8e88df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:43.539122 kubelet[2742]: E0912 00:18:43.538915 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc2065a7ed0a5f45307a8f2ac135d638f3a017e5ede59488339fe93fd8e88df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7966cf8c7-k2jlv" Sep 12 00:18:43.539122 kubelet[2742]: E0912 00:18:43.538946 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bc2065a7ed0a5f45307a8f2ac135d638f3a017e5ede59488339fe93fd8e88df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7966cf8c7-k2jlv" Sep 12 00:18:43.539122 kubelet[2742]: E0912 00:18:43.538994 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7966cf8c7-k2jlv_calico-apiserver(4f52ff49-8439-42a5-9dd9-7564715fa3b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7966cf8c7-k2jlv_calico-apiserver(4f52ff49-8439-42a5-9dd9-7564715fa3b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bc2065a7ed0a5f45307a8f2ac135d638f3a017e5ede59488339fe93fd8e88df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7966cf8c7-k2jlv" podUID="4f52ff49-8439-42a5-9dd9-7564715fa3b0" Sep 12 00:18:43.543360 systemd[1]: run-netns-cni\x2d3ab33b04\x2d4228\x2d1555\x2d6ee5\x2d0dd2f4355194.mount: Deactivated successfully. 
Sep 12 00:18:43.705872 containerd[1593]: time="2025-09-12T00:18:43.705785339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9cb6fbf4-9jgs9,Uid:e916b2ca-39a3-4524-af0b-67438570f595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab568eedc75018b8004af9f790ff81f21ef4aad40315447696746a908beb06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:43.706208 kubelet[2742]: E0912 00:18:43.706042 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab568eedc75018b8004af9f790ff81f21ef4aad40315447696746a908beb06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:43.706208 kubelet[2742]: E0912 00:18:43.706139 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab568eedc75018b8004af9f790ff81f21ef4aad40315447696746a908beb06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d9cb6fbf4-9jgs9" Sep 12 00:18:43.706208 kubelet[2742]: E0912 00:18:43.706160 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ab568eedc75018b8004af9f790ff81f21ef4aad40315447696746a908beb06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-d9cb6fbf4-9jgs9" Sep 12 00:18:43.706352 kubelet[2742]: E0912 00:18:43.706211 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d9cb6fbf4-9jgs9_calico-system(e916b2ca-39a3-4524-af0b-67438570f595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d9cb6fbf4-9jgs9_calico-system(e916b2ca-39a3-4524-af0b-67438570f595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ab568eedc75018b8004af9f790ff81f21ef4aad40315447696746a908beb06e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d9cb6fbf4-9jgs9" podUID="e916b2ca-39a3-4524-af0b-67438570f595" Sep 12 00:18:43.888566 containerd[1593]: time="2025-09-12T00:18:43.888493000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:44.000088 containerd[1593]: time="2025-09-12T00:18:43.999977304Z" level=error msg="Failed to destroy network for sandbox \"3a1e0cfd35f9b6466a483f500657bf2ff2c4c3df80f72e472d24960f58d05007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.002572 systemd[1]: run-netns-cni\x2da6b4c184\x2d512e\x2dcd60\x2d230f\x2d6010ae3149c9.mount: Deactivated successfully. 
Sep 12 00:18:44.113782 containerd[1593]: time="2025-09-12T00:18:44.113700104Z" level=error msg="Failed to destroy network for sandbox \"dee837074d4d5cd5d14580fadd55add63fb5dc2ea3a2ed3c1920db86e5a71058\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.116280 systemd[1]: run-netns-cni\x2d81889e11\x2dab36\x2d1474\x2dd456\x2dac127b28e420.mount: Deactivated successfully. Sep 12 00:18:44.129491 containerd[1593]: time="2025-09-12T00:18:44.129425921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 00:18:44.199045 containerd[1593]: time="2025-09-12T00:18:44.198589127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ppnzq,Uid:5e62d2f6-3470-41e9-9111-e097821131f8,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:44.199045 containerd[1593]: time="2025-09-12T00:18:44.198641275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-vgq56,Uid:1c15d419-f4da-4b74-81c5-c34a123d9cc5,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:18:44.199045 containerd[1593]: time="2025-09-12T00:18:44.198949153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2x6v,Uid:5c37c94c-e43d-4388-9658-2398a6df2ea4,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:44.237958 containerd[1593]: time="2025-09-12T00:18:44.237884964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54bb4bb869-khzms,Uid:54ccc286-153d-466d-bd25-f10ab1e1e2cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1e0cfd35f9b6466a483f500657bf2ff2c4c3df80f72e472d24960f58d05007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 12 00:18:44.238255 kubelet[2742]: E0912 00:18:44.238209 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1e0cfd35f9b6466a483f500657bf2ff2c4c3df80f72e472d24960f58d05007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.238691 kubelet[2742]: E0912 00:18:44.238280 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1e0cfd35f9b6466a483f500657bf2ff2c4c3df80f72e472d24960f58d05007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54bb4bb869-khzms" Sep 12 00:18:44.238691 kubelet[2742]: E0912 00:18:44.238306 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a1e0cfd35f9b6466a483f500657bf2ff2c4c3df80f72e472d24960f58d05007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54bb4bb869-khzms" Sep 12 00:18:44.238691 kubelet[2742]: E0912 00:18:44.238358 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54bb4bb869-khzms_calico-system(54ccc286-153d-466d-bd25-f10ab1e1e2cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54bb4bb869-khzms_calico-system(54ccc286-153d-466d-bd25-f10ab1e1e2cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a1e0cfd35f9b6466a483f500657bf2ff2c4c3df80f72e472d24960f58d05007\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54bb4bb869-khzms" podUID="54ccc286-153d-466d-bd25-f10ab1e1e2cc" Sep 12 00:18:44.394876 containerd[1593]: time="2025-09-12T00:18:44.394769813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4r8r,Uid:98a4da3c-812d-46bd-ae9f-8908fc1692b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dee837074d4d5cd5d14580fadd55add63fb5dc2ea3a2ed3c1920db86e5a71058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.395224 kubelet[2742]: E0912 00:18:44.395159 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dee837074d4d5cd5d14580fadd55add63fb5dc2ea3a2ed3c1920db86e5a71058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.395297 kubelet[2742]: E0912 00:18:44.395238 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dee837074d4d5cd5d14580fadd55add63fb5dc2ea3a2ed3c1920db86e5a71058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n4r8r" Sep 12 00:18:44.395297 kubelet[2742]: E0912 00:18:44.395271 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dee837074d4d5cd5d14580fadd55add63fb5dc2ea3a2ed3c1920db86e5a71058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n4r8r" Sep 12 00:18:44.395386 kubelet[2742]: E0912 00:18:44.395329 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n4r8r_kube-system(98a4da3c-812d-46bd-ae9f-8908fc1692b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n4r8r_kube-system(98a4da3c-812d-46bd-ae9f-8908fc1692b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dee837074d4d5cd5d14580fadd55add63fb5dc2ea3a2ed3c1920db86e5a71058\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n4r8r" podUID="98a4da3c-812d-46bd-ae9f-8908fc1692b4" Sep 12 00:18:44.468935 containerd[1593]: time="2025-09-12T00:18:44.468789210Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:44.630269 containerd[1593]: time="2025-09-12T00:18:44.630214427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:44.630871 containerd[1593]: time="2025-09-12T00:18:44.630820345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 13.844382684s" Sep 12 00:18:44.630924 containerd[1593]: time="2025-09-12T00:18:44.630870590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 00:18:44.660574 containerd[1593]: time="2025-09-12T00:18:44.660513797Z" level=info msg="CreateContainer within sandbox \"7f5de40c0da02039a87bb1305078ad8efa4f96d94924c0e5507147903f653c7d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 00:18:44.708022 containerd[1593]: time="2025-09-12T00:18:44.692266436Z" level=info msg="Container 4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:44.719867 containerd[1593]: time="2025-09-12T00:18:44.719590017Z" level=error msg="Failed to destroy network for sandbox \"d09ce2c7103b6b0f379cee5168425f1061caa18f0749f57ed2b3f0954c0ffe5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.725399 containerd[1593]: time="2025-09-12T00:18:44.725341029Z" level=error msg="Failed to destroy network for sandbox \"a8cb7005ac0851b5c97a024f0832fe7eeea0ccc5e1147bf3f8b015b692a4b948\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.730906 containerd[1593]: time="2025-09-12T00:18:44.730846359Z" level=error msg="Failed to destroy network for sandbox \"663ba5cab641ceb09a1d77a803b4af3e50a2aaa8cf11523432377e124f3b317f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 12 00:18:44.898567 containerd[1593]: time="2025-09-12T00:18:44.898420816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ppnzq,Uid:5e62d2f6-3470-41e9-9111-e097821131f8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09ce2c7103b6b0f379cee5168425f1061caa18f0749f57ed2b3f0954c0ffe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.898924 kubelet[2742]: E0912 00:18:44.898849 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09ce2c7103b6b0f379cee5168425f1061caa18f0749f57ed2b3f0954c0ffe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.898924 kubelet[2742]: E0912 00:18:44.898925 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09ce2c7103b6b0f379cee5168425f1061caa18f0749f57ed2b3f0954c0ffe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-ppnzq" Sep 12 00:18:44.899066 kubelet[2742]: E0912 00:18:44.898948 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09ce2c7103b6b0f379cee5168425f1061caa18f0749f57ed2b3f0954c0ffe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-54d579b49d-ppnzq" Sep 12 00:18:44.899066 kubelet[2742]: E0912 00:18:44.899000 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-ppnzq_calico-system(5e62d2f6-3470-41e9-9111-e097821131f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-ppnzq_calico-system(5e62d2f6-3470-41e9-9111-e097821131f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d09ce2c7103b6b0f379cee5168425f1061caa18f0749f57ed2b3f0954c0ffe5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-ppnzq" podUID="5e62d2f6-3470-41e9-9111-e097821131f8" Sep 12 00:18:44.904864 containerd[1593]: time="2025-09-12T00:18:44.904715869Z" level=info msg="CreateContainer within sandbox \"7f5de40c0da02039a87bb1305078ad8efa4f96d94924c0e5507147903f653c7d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\"" Sep 12 00:18:44.914661 containerd[1593]: time="2025-09-12T00:18:44.914587401Z" level=info msg="StartContainer for \"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\"" Sep 12 00:18:44.916567 containerd[1593]: time="2025-09-12T00:18:44.916531913Z" level=info msg="connecting to shim 4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3" address="unix:///run/containerd/s/7ac9a141ff47592a60ac452d0052c0fae911cd18d704f95262c615264f6eeaef" protocol=ttrpc version=3 Sep 12 00:18:44.942430 systemd[1]: Started cri-containerd-4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3.scope - libcontainer container 4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3. 
Sep 12 00:18:44.967635 containerd[1593]: time="2025-09-12T00:18:44.967561349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-vgq56,Uid:1c15d419-f4da-4b74-81c5-c34a123d9cc5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8cb7005ac0851b5c97a024f0832fe7eeea0ccc5e1147bf3f8b015b692a4b948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.967875 kubelet[2742]: E0912 00:18:44.967842 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8cb7005ac0851b5c97a024f0832fe7eeea0ccc5e1147bf3f8b015b692a4b948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:44.967960 kubelet[2742]: E0912 00:18:44.967911 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8cb7005ac0851b5c97a024f0832fe7eeea0ccc5e1147bf3f8b015b692a4b948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7966cf8c7-vgq56" Sep 12 00:18:44.967960 kubelet[2742]: E0912 00:18:44.967944 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8cb7005ac0851b5c97a024f0832fe7eeea0ccc5e1147bf3f8b015b692a4b948\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7966cf8c7-vgq56" Sep 12 00:18:44.968137 kubelet[2742]: E0912 00:18:44.968074 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7966cf8c7-vgq56_calico-apiserver(1c15d419-f4da-4b74-81c5-c34a123d9cc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7966cf8c7-vgq56_calico-apiserver(1c15d419-f4da-4b74-81c5-c34a123d9cc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8cb7005ac0851b5c97a024f0832fe7eeea0ccc5e1147bf3f8b015b692a4b948\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7966cf8c7-vgq56" podUID="1c15d419-f4da-4b74-81c5-c34a123d9cc5" Sep 12 00:18:44.999987 containerd[1593]: time="2025-09-12T00:18:44.999830839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2x6v,Uid:5c37c94c-e43d-4388-9658-2398a6df2ea4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"663ba5cab641ceb09a1d77a803b4af3e50a2aaa8cf11523432377e124f3b317f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:45.000303 kubelet[2742]: E0912 00:18:45.000168 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663ba5cab641ceb09a1d77a803b4af3e50a2aaa8cf11523432377e124f3b317f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:18:45.000303 kubelet[2742]: E0912 00:18:45.000251 2742 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663ba5cab641ceb09a1d77a803b4af3e50a2aaa8cf11523432377e124f3b317f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d2x6v" Sep 12 00:18:45.000303 kubelet[2742]: E0912 00:18:45.000280 2742 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663ba5cab641ceb09a1d77a803b4af3e50a2aaa8cf11523432377e124f3b317f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d2x6v" Sep 12 00:18:45.000558 kubelet[2742]: E0912 00:18:45.000326 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d2x6v_calico-system(5c37c94c-e43d-4388-9658-2398a6df2ea4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d2x6v_calico-system(5c37c94c-e43d-4388-9658-2398a6df2ea4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"663ba5cab641ceb09a1d77a803b4af3e50a2aaa8cf11523432377e124f3b317f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d2x6v" podUID="5c37c94c-e43d-4388-9658-2398a6df2ea4" Sep 12 00:18:45.073240 containerd[1593]: time="2025-09-12T00:18:45.073198540Z" level=info msg="StartContainer for \"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\" returns successfully" Sep 12 00:18:45.078920 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Sep 12 00:18:45.078988 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 00:18:45.565942 systemd[1]: run-netns-cni\x2d5adf5213\x2d3578\x2d3f07\x2de857\x2d5ad14b20773a.mount: Deactivated successfully. Sep 12 00:18:45.566083 systemd[1]: run-netns-cni\x2d2d65b2bd\x2db438\x2d6d60\x2d2911\x2d1591fc742bbe.mount: Deactivated successfully. Sep 12 00:18:45.566205 systemd[1]: run-netns-cni\x2da32b7ab2\x2d2702\x2dd253\x2df4d8\x2d838b3319671b.mount: Deactivated successfully. Sep 12 00:18:45.787512 kubelet[2742]: I0912 00:18:45.787460 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mznd\" (UniqueName: \"kubernetes.io/projected/54ccc286-153d-466d-bd25-f10ab1e1e2cc-kube-api-access-5mznd\") pod \"54ccc286-153d-466d-bd25-f10ab1e1e2cc\" (UID: \"54ccc286-153d-466d-bd25-f10ab1e1e2cc\") " Sep 12 00:18:45.787512 kubelet[2742]: I0912 00:18:45.787521 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54ccc286-153d-466d-bd25-f10ab1e1e2cc-whisker-backend-key-pair\") pod \"54ccc286-153d-466d-bd25-f10ab1e1e2cc\" (UID: \"54ccc286-153d-466d-bd25-f10ab1e1e2cc\") " Sep 12 00:18:45.788048 kubelet[2742]: I0912 00:18:45.787549 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54ccc286-153d-466d-bd25-f10ab1e1e2cc-whisker-ca-bundle\") pod \"54ccc286-153d-466d-bd25-f10ab1e1e2cc\" (UID: \"54ccc286-153d-466d-bd25-f10ab1e1e2cc\") " Sep 12 00:18:45.788075 kubelet[2742]: I0912 00:18:45.788042 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54ccc286-153d-466d-bd25-f10ab1e1e2cc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "54ccc286-153d-466d-bd25-f10ab1e1e2cc" (UID: "54ccc286-153d-466d-bd25-f10ab1e1e2cc"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 00:18:45.793434 systemd[1]: var-lib-kubelet-pods-54ccc286\x2d153d\x2d466d\x2dbd25\x2df10ab1e1e2cc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5mznd.mount: Deactivated successfully. Sep 12 00:18:45.793597 systemd[1]: var-lib-kubelet-pods-54ccc286\x2d153d\x2d466d\x2dbd25\x2df10ab1e1e2cc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 00:18:45.794979 kubelet[2742]: I0912 00:18:45.794920 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54ccc286-153d-466d-bd25-f10ab1e1e2cc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "54ccc286-153d-466d-bd25-f10ab1e1e2cc" (UID: "54ccc286-153d-466d-bd25-f10ab1e1e2cc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 00:18:45.795068 kubelet[2742]: I0912 00:18:45.794929 2742 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54ccc286-153d-466d-bd25-f10ab1e1e2cc-kube-api-access-5mznd" (OuterVolumeSpecName: "kube-api-access-5mznd") pod "54ccc286-153d-466d-bd25-f10ab1e1e2cc" (UID: "54ccc286-153d-466d-bd25-f10ab1e1e2cc"). InnerVolumeSpecName "kube-api-access-5mznd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 00:18:45.833964 systemd[1]: Removed slice kubepods-besteffort-pod54ccc286_153d_466d_bd25_f10ab1e1e2cc.slice - libcontainer container kubepods-besteffort-pod54ccc286_153d_466d_bd25_f10ab1e1e2cc.slice. 
Sep 12 00:18:45.888229 kubelet[2742]: I0912 00:18:45.888177 2742 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54ccc286-153d-466d-bd25-f10ab1e1e2cc-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 00:18:45.888229 kubelet[2742]: I0912 00:18:45.888219 2742 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54ccc286-153d-466d-bd25-f10ab1e1e2cc-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 00:18:45.888229 kubelet[2742]: I0912 00:18:45.888233 2742 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5mznd\" (UniqueName: \"kubernetes.io/projected/54ccc286-153d-466d-bd25-f10ab1e1e2cc-kube-api-access-5mznd\") on node \"localhost\" DevicePath \"\"" Sep 12 00:18:45.961849 containerd[1593]: time="2025-09-12T00:18:45.961799314Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\" id:\"66c9b507fe02e20ad08e50545ed03fd810509e63fd83efd269254b6a793bb7f6\" pid:4135 exit_status:1 exited_at:{seconds:1757636325 nanos:961438416}" Sep 12 00:18:46.255127 kubelet[2742]: I0912 00:18:46.255035 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7w2gt" podStartSLOduration=2.934594855 podStartE2EDuration="33.255003449s" podCreationTimestamp="2025-09-12 00:18:13 +0000 UTC" firstStartedPulling="2025-09-12 00:18:14.316165127 +0000 UTC m=+19.221728019" lastFinishedPulling="2025-09-12 00:18:44.636573721 +0000 UTC m=+49.542136613" observedRunningTime="2025-09-12 00:18:46.116305343 +0000 UTC m=+51.021868265" watchObservedRunningTime="2025-09-12 00:18:46.255003449 +0000 UTC m=+51.160566341" Sep 12 00:18:46.927278 systemd[1]: Created slice kubepods-besteffort-podc4a13b87_c48f_43bd_962a_1371f5f70ab7.slice - libcontainer container 
kubepods-besteffort-podc4a13b87_c48f_43bd_962a_1371f5f70ab7.slice. Sep 12 00:18:46.928064 containerd[1593]: time="2025-09-12T00:18:46.927957077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\" id:\"5bf8a07209bc2aefcac0a202d2c59feb03ba02a34a495482f272ec234060a26f\" pid:4171 exit_status:1 exited_at:{seconds:1757636326 nanos:927637538}" Sep 12 00:18:46.994831 kubelet[2742]: I0912 00:18:46.994755 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7wh\" (UniqueName: \"kubernetes.io/projected/c4a13b87-c48f-43bd-962a-1371f5f70ab7-kube-api-access-wl7wh\") pod \"whisker-7f59985d44-qqvqd\" (UID: \"c4a13b87-c48f-43bd-962a-1371f5f70ab7\") " pod="calico-system/whisker-7f59985d44-qqvqd" Sep 12 00:18:46.994831 kubelet[2742]: I0912 00:18:46.994838 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4a13b87-c48f-43bd-962a-1371f5f70ab7-whisker-backend-key-pair\") pod \"whisker-7f59985d44-qqvqd\" (UID: \"c4a13b87-c48f-43bd-962a-1371f5f70ab7\") " pod="calico-system/whisker-7f59985d44-qqvqd" Sep 12 00:18:46.995390 kubelet[2742]: I0912 00:18:46.994867 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a13b87-c48f-43bd-962a-1371f5f70ab7-whisker-ca-bundle\") pod \"whisker-7f59985d44-qqvqd\" (UID: \"c4a13b87-c48f-43bd-962a-1371f5f70ab7\") " pod="calico-system/whisker-7f59985d44-qqvqd" Sep 12 00:18:47.200554 kubelet[2742]: I0912 00:18:47.200437 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54ccc286-153d-466d-bd25-f10ab1e1e2cc" path="/var/lib/kubelet/pods/54ccc286-153d-466d-bd25-f10ab1e1e2cc/volumes" Sep 12 00:18:47.531787 containerd[1593]: time="2025-09-12T00:18:47.531632692Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f59985d44-qqvqd,Uid:c4a13b87-c48f-43bd-962a-1371f5f70ab7,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:47.882754 systemd-networkd[1499]: cali0ef9af593e2: Link UP Sep 12 00:18:47.884247 systemd-networkd[1499]: cali0ef9af593e2: Gained carrier Sep 12 00:18:47.971336 containerd[1593]: 2025-09-12 00:18:47.557 [INFO][4186] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 00:18:47.971336 containerd[1593]: 2025-09-12 00:18:47.580 [INFO][4186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7f59985d44--qqvqd-eth0 whisker-7f59985d44- calico-system c4a13b87-c48f-43bd-962a-1371f5f70ab7 931 0 2025-09-12 00:18:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f59985d44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7f59985d44-qqvqd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0ef9af593e2 [] [] }} ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" Namespace="calico-system" Pod="whisker-7f59985d44-qqvqd" WorkloadEndpoint="localhost-k8s-whisker--7f59985d44--qqvqd-" Sep 12 00:18:47.971336 containerd[1593]: 2025-09-12 00:18:47.580 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" Namespace="calico-system" Pod="whisker-7f59985d44-qqvqd" WorkloadEndpoint="localhost-k8s-whisker--7f59985d44--qqvqd-eth0" Sep 12 00:18:47.971336 containerd[1593]: 2025-09-12 00:18:47.678 [INFO][4209] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" HandleID="k8s-pod-network.308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" 
Workload="localhost-k8s-whisker--7f59985d44--qqvqd-eth0" Sep 12 00:18:47.971906 containerd[1593]: 2025-09-12 00:18:47.679 [INFO][4209] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" HandleID="k8s-pod-network.308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" Workload="localhost-k8s-whisker--7f59985d44--qqvqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004f0020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7f59985d44-qqvqd", "timestamp":"2025-09-12 00:18:47.678541528 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:47.971906 containerd[1593]: 2025-09-12 00:18:47.679 [INFO][4209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:18:47.971906 containerd[1593]: 2025-09-12 00:18:47.680 [INFO][4209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:18:47.971906 containerd[1593]: 2025-09-12 00:18:47.680 [INFO][4209] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:47.971906 containerd[1593]: 2025-09-12 00:18:47.691 [INFO][4209] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" host="localhost" Sep 12 00:18:47.971906 containerd[1593]: 2025-09-12 00:18:47.701 [INFO][4209] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:47.971906 containerd[1593]: 2025-09-12 00:18:47.710 [INFO][4209] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:47.971906 containerd[1593]: 2025-09-12 00:18:47.715 [INFO][4209] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:47.971906 containerd[1593]: 2025-09-12 00:18:47.718 [INFO][4209] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:47.971906 containerd[1593]: 2025-09-12 00:18:47.718 [INFO][4209] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" host="localhost" Sep 12 00:18:47.972197 containerd[1593]: 2025-09-12 00:18:47.721 [INFO][4209] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c Sep 12 00:18:47.972197 containerd[1593]: 2025-09-12 00:18:47.790 [INFO][4209] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" host="localhost" Sep 12 00:18:47.972197 containerd[1593]: 2025-09-12 00:18:47.868 [INFO][4209] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" host="localhost" Sep 12 00:18:47.972197 containerd[1593]: 2025-09-12 00:18:47.868 [INFO][4209] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" host="localhost" Sep 12 00:18:47.972197 containerd[1593]: 2025-09-12 00:18:47.868 [INFO][4209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:18:47.972197 containerd[1593]: 2025-09-12 00:18:47.868 [INFO][4209] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" HandleID="k8s-pod-network.308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" Workload="localhost-k8s-whisker--7f59985d44--qqvqd-eth0" Sep 12 00:18:47.972326 containerd[1593]: 2025-09-12 00:18:47.873 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" Namespace="calico-system" Pod="whisker-7f59985d44-qqvqd" WorkloadEndpoint="localhost-k8s-whisker--7f59985d44--qqvqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f59985d44--qqvqd-eth0", GenerateName:"whisker-7f59985d44-", Namespace:"calico-system", SelfLink:"", UID:"c4a13b87-c48f-43bd-962a-1371f5f70ab7", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f59985d44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7f59985d44-qqvqd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0ef9af593e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:47.972326 containerd[1593]: 2025-09-12 00:18:47.873 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" Namespace="calico-system" Pod="whisker-7f59985d44-qqvqd" WorkloadEndpoint="localhost-k8s-whisker--7f59985d44--qqvqd-eth0" Sep 12 00:18:47.972418 containerd[1593]: 2025-09-12 00:18:47.873 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ef9af593e2 ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" Namespace="calico-system" Pod="whisker-7f59985d44-qqvqd" WorkloadEndpoint="localhost-k8s-whisker--7f59985d44--qqvqd-eth0" Sep 12 00:18:47.972418 containerd[1593]: 2025-09-12 00:18:47.883 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" Namespace="calico-system" Pod="whisker-7f59985d44-qqvqd" WorkloadEndpoint="localhost-k8s-whisker--7f59985d44--qqvqd-eth0" Sep 12 00:18:47.972463 containerd[1593]: 2025-09-12 00:18:47.884 [INFO][4186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" Namespace="calico-system" Pod="whisker-7f59985d44-qqvqd" 
WorkloadEndpoint="localhost-k8s-whisker--7f59985d44--qqvqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f59985d44--qqvqd-eth0", GenerateName:"whisker-7f59985d44-", Namespace:"calico-system", SelfLink:"", UID:"c4a13b87-c48f-43bd-962a-1371f5f70ab7", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f59985d44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c", Pod:"whisker-7f59985d44-qqvqd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0ef9af593e2", MAC:"62:8b:70:11:a9:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:47.972512 containerd[1593]: 2025-09-12 00:18:47.966 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" Namespace="calico-system" Pod="whisker-7f59985d44-qqvqd" WorkloadEndpoint="localhost-k8s-whisker--7f59985d44--qqvqd-eth0" Sep 12 00:18:48.761607 systemd-networkd[1499]: vxlan.calico: Link UP Sep 12 00:18:48.761627 systemd-networkd[1499]: 
vxlan.calico: Gained carrier Sep 12 00:18:48.846070 containerd[1593]: time="2025-09-12T00:18:48.846009212Z" level=info msg="connecting to shim 308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c" address="unix:///run/containerd/s/78b4baddf973227fa8c90f74abcaba6671d4fcbf6ce2098f2e3d52ab7d3a5ef0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:48.881312 systemd[1]: Started cri-containerd-308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c.scope - libcontainer container 308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c. Sep 12 00:18:48.898999 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:18:49.026955 containerd[1593]: time="2025-09-12T00:18:49.026731206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f59985d44-qqvqd,Uid:c4a13b87-c48f-43bd-962a-1371f5f70ab7,Namespace:calico-system,Attempt:0,} returns sandbox id \"308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c\"" Sep 12 00:18:49.035073 containerd[1593]: time="2025-09-12T00:18:49.034911474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 00:18:49.811304 systemd-networkd[1499]: cali0ef9af593e2: Gained IPv6LL Sep 12 00:18:50.067322 systemd-networkd[1499]: vxlan.calico: Gained IPv6LL Sep 12 00:18:50.808891 systemd[1]: Started sshd@7-10.0.0.88:22-10.0.0.1:39226.service - OpenSSH per-connection server daemon (10.0.0.1:39226). Sep 12 00:18:50.903926 sshd[4464]: Accepted publickey for core from 10.0.0.1 port 39226 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA Sep 12 00:18:50.904942 sshd-session[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:50.911419 systemd-logind[1577]: New session 8 of user core. Sep 12 00:18:50.916248 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 00:18:50.964167 containerd[1593]: time="2025-09-12T00:18:50.964079695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:50.965273 containerd[1593]: time="2025-09-12T00:18:50.965197093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 00:18:50.966577 containerd[1593]: time="2025-09-12T00:18:50.966512412Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:50.974817 containerd[1593]: time="2025-09-12T00:18:50.974758504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:50.975554 containerd[1593]: time="2025-09-12T00:18:50.975528128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.940562412s" Sep 12 00:18:50.975612 containerd[1593]: time="2025-09-12T00:18:50.975556020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 00:18:50.986532 containerd[1593]: time="2025-09-12T00:18:50.986455564Z" level=info msg="CreateContainer within sandbox \"308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 00:18:50.995055 containerd[1593]: time="2025-09-12T00:18:50.994998383Z" level=info 
msg="Container 6e53d7f6d6d2cb6ca5f339c07d30685b87e652020a057926dc0206d150767654: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:51.005399 containerd[1593]: time="2025-09-12T00:18:51.005355457Z" level=info msg="CreateContainer within sandbox \"308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6e53d7f6d6d2cb6ca5f339c07d30685b87e652020a057926dc0206d150767654\"" Sep 12 00:18:51.009889 containerd[1593]: time="2025-09-12T00:18:51.009838223Z" level=info msg="StartContainer for \"6e53d7f6d6d2cb6ca5f339c07d30685b87e652020a057926dc0206d150767654\"" Sep 12 00:18:51.018932 containerd[1593]: time="2025-09-12T00:18:51.018876039Z" level=info msg="connecting to shim 6e53d7f6d6d2cb6ca5f339c07d30685b87e652020a057926dc0206d150767654" address="unix:///run/containerd/s/78b4baddf973227fa8c90f74abcaba6671d4fcbf6ce2098f2e3d52ab7d3a5ef0" protocol=ttrpc version=3 Sep 12 00:18:51.054566 systemd[1]: Started cri-containerd-6e53d7f6d6d2cb6ca5f339c07d30685b87e652020a057926dc0206d150767654.scope - libcontainer container 6e53d7f6d6d2cb6ca5f339c07d30685b87e652020a057926dc0206d150767654. Sep 12 00:18:51.096770 sshd[4471]: Connection closed by 10.0.0.1 port 39226 Sep 12 00:18:51.097982 sshd-session[4464]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:51.103662 systemd[1]: sshd@7-10.0.0.88:22-10.0.0.1:39226.service: Deactivated successfully. Sep 12 00:18:51.106487 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 00:18:51.108209 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Sep 12 00:18:51.110307 systemd-logind[1577]: Removed session 8. 
Sep 12 00:18:51.114387 containerd[1593]: time="2025-09-12T00:18:51.114349375Z" level=info msg="StartContainer for \"6e53d7f6d6d2cb6ca5f339c07d30685b87e652020a057926dc0206d150767654\" returns successfully" Sep 12 00:18:51.116374 containerd[1593]: time="2025-09-12T00:18:51.116348167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 00:18:53.497712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718647163.mount: Deactivated successfully. Sep 12 00:18:53.667787 containerd[1593]: time="2025-09-12T00:18:53.667714717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:53.668527 containerd[1593]: time="2025-09-12T00:18:53.668486866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 00:18:53.669876 containerd[1593]: time="2025-09-12T00:18:53.669841189Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:53.671894 containerd[1593]: time="2025-09-12T00:18:53.671866591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:53.672514 containerd[1593]: time="2025-09-12T00:18:53.672481946Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.556106649s" Sep 12 00:18:53.672514 containerd[1593]: 
time="2025-09-12T00:18:53.672513004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 00:18:53.674659 containerd[1593]: time="2025-09-12T00:18:53.674620791Z" level=info msg="CreateContainer within sandbox \"308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 00:18:53.684805 containerd[1593]: time="2025-09-12T00:18:53.683992092Z" level=info msg="Container 1e07b25c5becfebf5549ebcf751605079cf171d5c9ad85abdfad64db296088e2: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:53.693629 containerd[1593]: time="2025-09-12T00:18:53.693588967Z" level=info msg="CreateContainer within sandbox \"308d3e65fe7f864f87575d787f7ddb1af220bbcac5c0fe07320a16719421063c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1e07b25c5becfebf5549ebcf751605079cf171d5c9ad85abdfad64db296088e2\"" Sep 12 00:18:53.694094 containerd[1593]: time="2025-09-12T00:18:53.694063887Z" level=info msg="StartContainer for \"1e07b25c5becfebf5549ebcf751605079cf171d5c9ad85abdfad64db296088e2\"" Sep 12 00:18:53.695434 containerd[1593]: time="2025-09-12T00:18:53.695383535Z" level=info msg="connecting to shim 1e07b25c5becfebf5549ebcf751605079cf171d5c9ad85abdfad64db296088e2" address="unix:///run/containerd/s/78b4baddf973227fa8c90f74abcaba6671d4fcbf6ce2098f2e3d52ab7d3a5ef0" protocol=ttrpc version=3 Sep 12 00:18:53.728386 systemd[1]: Started cri-containerd-1e07b25c5becfebf5549ebcf751605079cf171d5c9ad85abdfad64db296088e2.scope - libcontainer container 1e07b25c5becfebf5549ebcf751605079cf171d5c9ad85abdfad64db296088e2. 
Sep 12 00:18:53.798571 containerd[1593]: time="2025-09-12T00:18:53.798456266Z" level=info msg="StartContainer for \"1e07b25c5becfebf5549ebcf751605079cf171d5c9ad85abdfad64db296088e2\" returns successfully" Sep 12 00:18:54.198200 kubelet[2742]: E0912 00:18:54.198147 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:54.198764 containerd[1593]: time="2025-09-12T00:18:54.198577389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-k2jlv,Uid:4f52ff49-8439-42a5-9dd9-7564715fa3b0,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:18:54.198893 containerd[1593]: time="2025-09-12T00:18:54.198577379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lpnl,Uid:85071725-7a62-4ac1-91ba-a54cc8e19425,Namespace:kube-system,Attempt:0,}" Sep 12 00:18:54.318873 systemd-networkd[1499]: cali370caae08e2: Link UP Sep 12 00:18:54.319549 systemd-networkd[1499]: cali370caae08e2: Gained carrier Sep 12 00:18:54.337919 containerd[1593]: 2025-09-12 00:18:54.248 [INFO][4577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0 calico-apiserver-7966cf8c7- calico-apiserver 4f52ff49-8439-42a5-9dd9-7564715fa3b0 838 0 2025-09-12 00:18:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7966cf8c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7966cf8c7-k2jlv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali370caae08e2 [] [] }} ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" Namespace="calico-apiserver" 
Pod="calico-apiserver-7966cf8c7-k2jlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-" Sep 12 00:18:54.337919 containerd[1593]: 2025-09-12 00:18:54.248 [INFO][4577] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-k2jlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0" Sep 12 00:18:54.337919 containerd[1593]: 2025-09-12 00:18:54.278 [INFO][4606] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" HandleID="k8s-pod-network.d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" Workload="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0" Sep 12 00:18:54.338937 containerd[1593]: 2025-09-12 00:18:54.278 [INFO][4606] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" HandleID="k8s-pod-network.d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" Workload="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7966cf8c7-k2jlv", "timestamp":"2025-09-12 00:18:54.278605967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:54.338937 containerd[1593]: 2025-09-12 00:18:54.278 [INFO][4606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:18:54.338937 containerd[1593]: 2025-09-12 00:18:54.279 [INFO][4606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:18:54.338937 containerd[1593]: 2025-09-12 00:18:54.279 [INFO][4606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:54.338937 containerd[1593]: 2025-09-12 00:18:54.286 [INFO][4606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" host="localhost" Sep 12 00:18:54.338937 containerd[1593]: 2025-09-12 00:18:54.292 [INFO][4606] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:54.338937 containerd[1593]: 2025-09-12 00:18:54.297 [INFO][4606] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:54.338937 containerd[1593]: 2025-09-12 00:18:54.298 [INFO][4606] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:54.338937 containerd[1593]: 2025-09-12 00:18:54.300 [INFO][4606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:54.338937 containerd[1593]: 2025-09-12 00:18:54.300 [INFO][4606] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" host="localhost" Sep 12 00:18:54.339621 containerd[1593]: 2025-09-12 00:18:54.302 [INFO][4606] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4 Sep 12 00:18:54.339621 containerd[1593]: 2025-09-12 00:18:54.305 [INFO][4606] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" host="localhost" Sep 12 00:18:54.339621 containerd[1593]: 2025-09-12 00:18:54.310 [INFO][4606] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" host="localhost" Sep 12 00:18:54.339621 containerd[1593]: 2025-09-12 00:18:54.310 [INFO][4606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" host="localhost" Sep 12 00:18:54.339621 containerd[1593]: 2025-09-12 00:18:54.311 [INFO][4606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:18:54.339621 containerd[1593]: 2025-09-12 00:18:54.311 [INFO][4606] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" HandleID="k8s-pod-network.d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" Workload="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0" Sep 12 00:18:54.339794 kubelet[2742]: I0912 00:18:54.339143 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f59985d44-qqvqd" podStartSLOduration=3.700387392 podStartE2EDuration="8.339089607s" podCreationTimestamp="2025-09-12 00:18:46 +0000 UTC" firstStartedPulling="2025-09-12 00:18:49.034551439 +0000 UTC m=+53.940114331" lastFinishedPulling="2025-09-12 00:18:53.673253654 +0000 UTC m=+58.578816546" observedRunningTime="2025-09-12 00:18:53.88839311 +0000 UTC m=+58.793956092" watchObservedRunningTime="2025-09-12 00:18:54.339089607 +0000 UTC m=+59.244652509" Sep 12 00:18:54.340378 containerd[1593]: 2025-09-12 00:18:54.314 [INFO][4577] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-k2jlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0", GenerateName:"calico-apiserver-7966cf8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f52ff49-8439-42a5-9dd9-7564715fa3b0", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7966cf8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7966cf8c7-k2jlv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali370caae08e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:54.340469 containerd[1593]: 2025-09-12 00:18:54.314 [INFO][4577] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-k2jlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0" Sep 12 00:18:54.340469 containerd[1593]: 2025-09-12 00:18:54.314 [INFO][4577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali370caae08e2 ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" 
Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-k2jlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0" Sep 12 00:18:54.340469 containerd[1593]: 2025-09-12 00:18:54.318 [INFO][4577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-k2jlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0" Sep 12 00:18:54.340569 containerd[1593]: 2025-09-12 00:18:54.318 [INFO][4577] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-k2jlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0", GenerateName:"calico-apiserver-7966cf8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f52ff49-8439-42a5-9dd9-7564715fa3b0", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7966cf8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4", Pod:"calico-apiserver-7966cf8c7-k2jlv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali370caae08e2", MAC:"b6:37:c5:da:35:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:54.340723 containerd[1593]: 2025-09-12 00:18:54.328 [INFO][4577] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-k2jlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--k2jlv-eth0" Sep 12 00:18:54.440933 systemd-networkd[1499]: cali85ae123233c: Link UP Sep 12 00:18:54.441174 systemd-networkd[1499]: cali85ae123233c: Gained carrier Sep 12 00:18:54.452875 containerd[1593]: time="2025-09-12T00:18:54.450985821Z" level=info msg="connecting to shim d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4" address="unix:///run/containerd/s/f314fec17291607769be4588fb150ff33aeebe950cf4a43ca78b80521f793082" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:54.461320 containerd[1593]: 2025-09-12 00:18:54.245 [INFO][4583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0 coredns-668d6bf9bc- kube-system 85071725-7a62-4ac1-91ba-a54cc8e19425 830 0 2025-09-12 00:18:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4lpnl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85ae123233c 
[{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lpnl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4lpnl-" Sep 12 00:18:54.461320 containerd[1593]: 2025-09-12 00:18:54.246 [INFO][4583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lpnl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0" Sep 12 00:18:54.461320 containerd[1593]: 2025-09-12 00:18:54.283 [INFO][4604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" HandleID="k8s-pod-network.3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Workload="localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0" Sep 12 00:18:54.461594 containerd[1593]: 2025-09-12 00:18:54.283 [INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" HandleID="k8s-pod-network.3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Workload="localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002872d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4lpnl", "timestamp":"2025-09-12 00:18:54.28318955 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:54.461594 containerd[1593]: 2025-09-12 00:18:54.283 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 00:18:54.461594 containerd[1593]: 2025-09-12 00:18:54.311 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:18:54.461594 containerd[1593]: 2025-09-12 00:18:54.311 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:54.461594 containerd[1593]: 2025-09-12 00:18:54.387 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" host="localhost" Sep 12 00:18:54.461594 containerd[1593]: 2025-09-12 00:18:54.392 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:54.461594 containerd[1593]: 2025-09-12 00:18:54.397 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:54.461594 containerd[1593]: 2025-09-12 00:18:54.399 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:54.461594 containerd[1593]: 2025-09-12 00:18:54.401 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:54.461594 containerd[1593]: 2025-09-12 00:18:54.401 [INFO][4604] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" host="localhost" Sep 12 00:18:54.462272 containerd[1593]: 2025-09-12 00:18:54.403 [INFO][4604] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092 Sep 12 00:18:54.462272 containerd[1593]: 2025-09-12 00:18:54.425 [INFO][4604] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" host="localhost" Sep 12 00:18:54.462272 containerd[1593]: 2025-09-12 00:18:54.433 [INFO][4604] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" host="localhost" Sep 12 00:18:54.462272 containerd[1593]: 2025-09-12 00:18:54.433 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" host="localhost" Sep 12 00:18:54.462272 containerd[1593]: 2025-09-12 00:18:54.433 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:18:54.462272 containerd[1593]: 2025-09-12 00:18:54.433 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" HandleID="k8s-pod-network.3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Workload="localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0" Sep 12 00:18:54.462524 containerd[1593]: 2025-09-12 00:18:54.436 [INFO][4583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lpnl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"85071725-7a62-4ac1-91ba-a54cc8e19425", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4lpnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85ae123233c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:54.462648 containerd[1593]: 2025-09-12 00:18:54.437 [INFO][4583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lpnl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0" Sep 12 00:18:54.462648 containerd[1593]: 2025-09-12 00:18:54.437 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85ae123233c ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lpnl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0" Sep 12 00:18:54.462648 containerd[1593]: 2025-09-12 00:18:54.440 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lpnl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0" Sep 12 00:18:54.462844 containerd[1593]: 2025-09-12 00:18:54.446 [INFO][4583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lpnl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"85071725-7a62-4ac1-91ba-a54cc8e19425", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092", Pod:"coredns-668d6bf9bc-4lpnl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85ae123233c", MAC:"9a:2d:6a:37:a3:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:54.462844 containerd[1593]: 2025-09-12 00:18:54.457 [INFO][4583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" Namespace="kube-system" Pod="coredns-668d6bf9bc-4lpnl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4lpnl-eth0" Sep 12 00:18:54.487332 systemd[1]: Started cri-containerd-d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4.scope - libcontainer container d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4. Sep 12 00:18:54.507354 containerd[1593]: time="2025-09-12T00:18:54.507289415Z" level=info msg="connecting to shim 3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092" address="unix:///run/containerd/s/6e9fa92ce7cb6d168759cfd1672d516d1f4453d8787f30d0410f22be03b14d0b" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:54.515290 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:18:54.540343 systemd[1]: Started cri-containerd-3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092.scope - libcontainer container 3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092. 
Sep 12 00:18:54.550242 containerd[1593]: time="2025-09-12T00:18:54.550130640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-k2jlv,Uid:4f52ff49-8439-42a5-9dd9-7564715fa3b0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4\"" Sep 12 00:18:54.552430 containerd[1593]: time="2025-09-12T00:18:54.552393587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 00:18:54.557448 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:18:54.591814 containerd[1593]: time="2025-09-12T00:18:54.591753849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4lpnl,Uid:85071725-7a62-4ac1-91ba-a54cc8e19425,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092\"" Sep 12 00:18:54.592655 kubelet[2742]: E0912 00:18:54.592631 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:54.594609 containerd[1593]: time="2025-09-12T00:18:54.594569835Z" level=info msg="CreateContainer within sandbox \"3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 00:18:54.611549 containerd[1593]: time="2025-09-12T00:18:54.611498600Z" level=info msg="Container aa2397d2787e1a7956937e7fde6ce4b5a23cb625e2cbf0696e59f5b0459793b7: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:54.619448 containerd[1593]: time="2025-09-12T00:18:54.619407296Z" level=info msg="CreateContainer within sandbox \"3b068927929609002a95ae249abf493fca23d0dbac7d30f52d9ea2f2b68a7092\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"aa2397d2787e1a7956937e7fde6ce4b5a23cb625e2cbf0696e59f5b0459793b7\"" Sep 12 00:18:54.619924 containerd[1593]: time="2025-09-12T00:18:54.619897716Z" level=info msg="StartContainer for \"aa2397d2787e1a7956937e7fde6ce4b5a23cb625e2cbf0696e59f5b0459793b7\"" Sep 12 00:18:54.620678 containerd[1593]: time="2025-09-12T00:18:54.620652031Z" level=info msg="connecting to shim aa2397d2787e1a7956937e7fde6ce4b5a23cb625e2cbf0696e59f5b0459793b7" address="unix:///run/containerd/s/6e9fa92ce7cb6d168759cfd1672d516d1f4453d8787f30d0410f22be03b14d0b" protocol=ttrpc version=3 Sep 12 00:18:54.643303 systemd[1]: Started cri-containerd-aa2397d2787e1a7956937e7fde6ce4b5a23cb625e2cbf0696e59f5b0459793b7.scope - libcontainer container aa2397d2787e1a7956937e7fde6ce4b5a23cb625e2cbf0696e59f5b0459793b7. Sep 12 00:18:54.681765 containerd[1593]: time="2025-09-12T00:18:54.681657763Z" level=info msg="StartContainer for \"aa2397d2787e1a7956937e7fde6ce4b5a23cb625e2cbf0696e59f5b0459793b7\" returns successfully" Sep 12 00:18:54.859943 kubelet[2742]: E0912 00:18:54.859904 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:54.872363 kubelet[2742]: I0912 00:18:54.872287 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4lpnl" podStartSLOduration=53.872268117 podStartE2EDuration="53.872268117s" podCreationTimestamp="2025-09-12 00:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:18:54.871943176 +0000 UTC m=+59.777506058" watchObservedRunningTime="2025-09-12 00:18:54.872268117 +0000 UTC m=+59.777831009" Sep 12 00:18:55.198773 kubelet[2742]: E0912 00:18:55.198197 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:55.199501 containerd[1593]: time="2025-09-12T00:18:55.198644473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4r8r,Uid:98a4da3c-812d-46bd-ae9f-8908fc1692b4,Namespace:kube-system,Attempt:0,}" Sep 12 00:18:55.321456 systemd-networkd[1499]: cali3db38f8ce28: Link UP Sep 12 00:18:55.322639 systemd-networkd[1499]: cali3db38f8ce28: Gained carrier Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.244 [INFO][4776] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0 coredns-668d6bf9bc- kube-system 98a4da3c-812d-46bd-ae9f-8908fc1692b4 835 0 2025-09-12 00:18:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-n4r8r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3db38f8ce28 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4r8r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n4r8r-" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.244 [INFO][4776] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4r8r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.274 [INFO][4786] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" HandleID="k8s-pod-network.810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" 
Workload="localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.274 [INFO][4786] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" HandleID="k8s-pod-network.810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" Workload="localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a200), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-n4r8r", "timestamp":"2025-09-12 00:18:55.274162097 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.274 [INFO][4786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.274 [INFO][4786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.274 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.282 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" host="localhost" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.289 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.295 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.297 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.300 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.300 [INFO][4786] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" host="localhost" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.302 [INFO][4786] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873 Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.306 [INFO][4786] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" host="localhost" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.315 [INFO][4786] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" host="localhost" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.315 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" host="localhost" Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.315 [INFO][4786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:18:55.341650 containerd[1593]: 2025-09-12 00:18:55.315 [INFO][4786] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" HandleID="k8s-pod-network.810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" Workload="localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0" Sep 12 00:18:55.342329 containerd[1593]: 2025-09-12 00:18:55.318 [INFO][4776] cni-plugin/k8s.go 418: Populated endpoint ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4r8r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"98a4da3c-812d-46bd-ae9f-8908fc1692b4", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-n4r8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3db38f8ce28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:55.342329 containerd[1593]: 2025-09-12 00:18:55.318 [INFO][4776] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4r8r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0" Sep 12 00:18:55.342329 containerd[1593]: 2025-09-12 00:18:55.319 [INFO][4776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3db38f8ce28 ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4r8r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0" Sep 12 00:18:55.342329 containerd[1593]: 2025-09-12 00:18:55.323 [INFO][4776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4r8r" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0" Sep 12 00:18:55.342329 containerd[1593]: 2025-09-12 00:18:55.328 [INFO][4776] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4r8r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"98a4da3c-812d-46bd-ae9f-8908fc1692b4", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873", Pod:"coredns-668d6bf9bc-n4r8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3db38f8ce28", MAC:"16:2b:e2:7a:c9:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:55.342329 containerd[1593]: 2025-09-12 00:18:55.337 [INFO][4776] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" Namespace="kube-system" Pod="coredns-668d6bf9bc-n4r8r" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--n4r8r-eth0" Sep 12 00:18:55.381883 containerd[1593]: time="2025-09-12T00:18:55.380580860Z" level=info msg="connecting to shim 810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873" address="unix:///run/containerd/s/cb97364c438dcd4e6947035cda1cf51f5a66c28ed5022f6b1b0c0ea348a40c17" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:55.413304 systemd[1]: Started cri-containerd-810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873.scope - libcontainer container 810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873. 
Sep 12 00:18:55.430699 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:18:55.465897 containerd[1593]: time="2025-09-12T00:18:55.465729511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n4r8r,Uid:98a4da3c-812d-46bd-ae9f-8908fc1692b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873\"" Sep 12 00:18:55.466834 kubelet[2742]: E0912 00:18:55.466795 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:55.470133 containerd[1593]: time="2025-09-12T00:18:55.470065840Z" level=info msg="CreateContainer within sandbox \"810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 00:18:55.488193 containerd[1593]: time="2025-09-12T00:18:55.488135877Z" level=info msg="Container 2f42ae1beb85d5d8106792b36ae2dc7308ef59a31e81b7e47e0bd4992b919350: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:55.500621 containerd[1593]: time="2025-09-12T00:18:55.500568252Z" level=info msg="CreateContainer within sandbox \"810c4a77e5eff4d449ebd59ea971a0e83f59992c610b037b3cbecd5040b76873\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f42ae1beb85d5d8106792b36ae2dc7308ef59a31e81b7e47e0bd4992b919350\"" Sep 12 00:18:55.501247 containerd[1593]: time="2025-09-12T00:18:55.501208063Z" level=info msg="StartContainer for \"2f42ae1beb85d5d8106792b36ae2dc7308ef59a31e81b7e47e0bd4992b919350\"" Sep 12 00:18:55.502367 containerd[1593]: time="2025-09-12T00:18:55.502335349Z" level=info msg="connecting to shim 2f42ae1beb85d5d8106792b36ae2dc7308ef59a31e81b7e47e0bd4992b919350" address="unix:///run/containerd/s/cb97364c438dcd4e6947035cda1cf51f5a66c28ed5022f6b1b0c0ea348a40c17" protocol=ttrpc version=3 
Sep 12 00:18:55.536387 systemd[1]: Started cri-containerd-2f42ae1beb85d5d8106792b36ae2dc7308ef59a31e81b7e47e0bd4992b919350.scope - libcontainer container 2f42ae1beb85d5d8106792b36ae2dc7308ef59a31e81b7e47e0bd4992b919350. Sep 12 00:18:55.572088 containerd[1593]: time="2025-09-12T00:18:55.572003083Z" level=info msg="StartContainer for \"2f42ae1beb85d5d8106792b36ae2dc7308ef59a31e81b7e47e0bd4992b919350\" returns successfully" Sep 12 00:18:55.699442 systemd-networkd[1499]: cali85ae123233c: Gained IPv6LL Sep 12 00:18:55.867057 kubelet[2742]: E0912 00:18:55.866795 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:55.867057 kubelet[2742]: E0912 00:18:55.866840 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:55.892430 kubelet[2742]: I0912 00:18:55.892346 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n4r8r" podStartSLOduration=54.891089295 podStartE2EDuration="54.891089295s" podCreationTimestamp="2025-09-12 00:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:18:55.879907547 +0000 UTC m=+60.785470439" watchObservedRunningTime="2025-09-12 00:18:55.891089295 +0000 UTC m=+60.796652187" Sep 12 00:18:56.116868 systemd[1]: Started sshd@8-10.0.0.88:22-10.0.0.1:39242.service - OpenSSH per-connection server daemon (10.0.0.1:39242). 
Sep 12 00:18:56.198555 containerd[1593]: time="2025-09-12T00:18:56.198496140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-vgq56,Uid:1c15d419-f4da-4b74-81c5-c34a123d9cc5,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:18:56.275325 systemd-networkd[1499]: cali370caae08e2: Gained IPv6LL Sep 12 00:18:56.729796 sshd[4889]: Accepted publickey for core from 10.0.0.1 port 39242 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA Sep 12 00:18:56.734533 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:56.739788 systemd-logind[1577]: New session 9 of user core. Sep 12 00:18:56.756299 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 00:18:56.869592 kubelet[2742]: E0912 00:18:56.869526 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:56.873079 kubelet[2742]: E0912 00:18:56.872906 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:57.213017 sshd[4893]: Connection closed by 10.0.0.1 port 39242 Sep 12 00:18:57.214986 sshd-session[4889]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:57.224820 systemd[1]: sshd@8-10.0.0.88:22-10.0.0.1:39242.service: Deactivated successfully. Sep 12 00:18:57.229431 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 00:18:57.231463 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Sep 12 00:18:57.233275 systemd-logind[1577]: Removed session 9. 
Sep 12 00:18:57.274193 systemd-networkd[1499]: caliba8fe5e9e6f: Link UP Sep 12 00:18:57.274562 systemd-networkd[1499]: caliba8fe5e9e6f: Gained carrier Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:56.824 [INFO][4897] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0 calico-apiserver-7966cf8c7- calico-apiserver 1c15d419-f4da-4b74-81c5-c34a123d9cc5 836 0 2025-09-12 00:18:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7966cf8c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7966cf8c7-vgq56 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba8fe5e9e6f [] [] }} ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-vgq56" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:56.825 [INFO][4897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-vgq56" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:56.877 [INFO][4923] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" HandleID="k8s-pod-network.e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Workload="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:56.878 [INFO][4923] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" HandleID="k8s-pod-network.e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Workload="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7966cf8c7-vgq56", "timestamp":"2025-09-12 00:18:56.877806807 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:56.878 [INFO][4923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:56.878 [INFO][4923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:56.878 [INFO][4923] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:56.892 [INFO][4923] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" host="localhost" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:56.932 [INFO][4923] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:57.203 [INFO][4923] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:57.206 [INFO][4923] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:57.211 [INFO][4923] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:57.212 [INFO][4923] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" host="localhost" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:57.217 [INFO][4923] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06 Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:57.250 [INFO][4923] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" host="localhost" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:57.264 [INFO][4923] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" host="localhost" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:57.264 [INFO][4923] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" host="localhost" Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:57.264 [INFO][4923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 00:18:57.300003 containerd[1593]: 2025-09-12 00:18:57.264 [INFO][4923] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" HandleID="k8s-pod-network.e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Workload="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0" Sep 12 00:18:57.302820 containerd[1593]: 2025-09-12 00:18:57.270 [INFO][4897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-vgq56" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0", GenerateName:"calico-apiserver-7966cf8c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c15d419-f4da-4b74-81c5-c34a123d9cc5", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7966cf8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7966cf8c7-vgq56", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba8fe5e9e6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:57.302820 containerd[1593]: 2025-09-12 00:18:57.270 [INFO][4897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-vgq56" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0" Sep 12 00:18:57.302820 containerd[1593]: 2025-09-12 00:18:57.271 [INFO][4897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba8fe5e9e6f ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-vgq56" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0" Sep 12 00:18:57.302820 containerd[1593]: 2025-09-12 00:18:57.274 [INFO][4897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-vgq56" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0" Sep 12 00:18:57.302820 containerd[1593]: 2025-09-12 00:18:57.275 [INFO][4897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-vgq56" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0", GenerateName:"calico-apiserver-7966cf8c7-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"1c15d419-f4da-4b74-81c5-c34a123d9cc5", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7966cf8c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06", Pod:"calico-apiserver-7966cf8c7-vgq56", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba8fe5e9e6f", MAC:"8e:d5:a7:fc:02:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:57.302820 containerd[1593]: 2025-09-12 00:18:57.290 [INFO][4897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" Namespace="calico-apiserver" Pod="calico-apiserver-7966cf8c7-vgq56" WorkloadEndpoint="localhost-k8s-calico--apiserver--7966cf8c7--vgq56-eth0" Sep 12 00:18:57.363262 systemd-networkd[1499]: cali3db38f8ce28: Gained IPv6LL Sep 12 00:18:57.683757 containerd[1593]: time="2025-09-12T00:18:57.683667274Z" level=info msg="connecting to shim e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06" 
address="unix:///run/containerd/s/728005898acb6e71a2a6df7841b2237f47d201c51638031cfc4a19e1f80352a0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:57.703047 containerd[1593]: time="2025-09-12T00:18:57.702993064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:57.704130 containerd[1593]: time="2025-09-12T00:18:57.703963075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 00:18:57.705738 containerd[1593]: time="2025-09-12T00:18:57.705715213Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:57.709398 containerd[1593]: time="2025-09-12T00:18:57.709349043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:57.711124 containerd[1593]: time="2025-09-12T00:18:57.711084310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.158647752s" Sep 12 00:18:57.711215 containerd[1593]: time="2025-09-12T00:18:57.711200338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 00:18:57.716732 containerd[1593]: time="2025-09-12T00:18:57.716678159Z" level=info msg="CreateContainer within sandbox 
\"d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 00:18:57.724935 systemd[1]: Started cri-containerd-e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06.scope - libcontainer container e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06. Sep 12 00:18:57.732301 containerd[1593]: time="2025-09-12T00:18:57.729149355Z" level=info msg="Container b5ccc1fb58754d401c0101aa4af1ac105d6a443b6b31eb2eaa162a6c818dd02d: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:57.740137 containerd[1593]: time="2025-09-12T00:18:57.739959823Z" level=info msg="CreateContainer within sandbox \"d43e80b0cf381f5e323858fc4b46b8c68b84ac1bc7f3437d3a8c90661ffb51e4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b5ccc1fb58754d401c0101aa4af1ac105d6a443b6b31eb2eaa162a6c818dd02d\"" Sep 12 00:18:57.741283 containerd[1593]: time="2025-09-12T00:18:57.741176167Z" level=info msg="StartContainer for \"b5ccc1fb58754d401c0101aa4af1ac105d6a443b6b31eb2eaa162a6c818dd02d\"" Sep 12 00:18:57.747924 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:18:57.750087 containerd[1593]: time="2025-09-12T00:18:57.750056003Z" level=info msg="connecting to shim b5ccc1fb58754d401c0101aa4af1ac105d6a443b6b31eb2eaa162a6c818dd02d" address="unix:///run/containerd/s/f314fec17291607769be4588fb150ff33aeebe950cf4a43ca78b80521f793082" protocol=ttrpc version=3 Sep 12 00:18:57.779334 systemd[1]: Started cri-containerd-b5ccc1fb58754d401c0101aa4af1ac105d6a443b6b31eb2eaa162a6c818dd02d.scope - libcontainer container b5ccc1fb58754d401c0101aa4af1ac105d6a443b6b31eb2eaa162a6c818dd02d. 
Sep 12 00:18:57.867928 containerd[1593]: time="2025-09-12T00:18:57.867730455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7966cf8c7-vgq56,Uid:1c15d419-f4da-4b74-81c5-c34a123d9cc5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06\"" Sep 12 00:18:57.868948 containerd[1593]: time="2025-09-12T00:18:57.868894770Z" level=info msg="StartContainer for \"b5ccc1fb58754d401c0101aa4af1ac105d6a443b6b31eb2eaa162a6c818dd02d\" returns successfully" Sep 12 00:18:57.873153 containerd[1593]: time="2025-09-12T00:18:57.873087690Z" level=info msg="CreateContainer within sandbox \"e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 00:18:57.881945 kubelet[2742]: E0912 00:18:57.881875 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:57.883094 kubelet[2742]: E0912 00:18:57.882615 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:57.889484 containerd[1593]: time="2025-09-12T00:18:57.889360991Z" level=info msg="Container 7648f96506c817e0b433dba5498ee620c97326d8c0acabce782ab762cbeefb2b: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:57.900323 containerd[1593]: time="2025-09-12T00:18:57.899773644Z" level=info msg="CreateContainer within sandbox \"e3a9c1dfa1bad0ae19301128b582265e0ee803ddd2421665f6c7bda3e7309a06\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7648f96506c817e0b433dba5498ee620c97326d8c0acabce782ab762cbeefb2b\"" Sep 12 00:18:57.901044 containerd[1593]: time="2025-09-12T00:18:57.901000456Z" level=info msg="StartContainer for 
\"7648f96506c817e0b433dba5498ee620c97326d8c0acabce782ab762cbeefb2b\"" Sep 12 00:18:57.903172 containerd[1593]: time="2025-09-12T00:18:57.903137738Z" level=info msg="connecting to shim 7648f96506c817e0b433dba5498ee620c97326d8c0acabce782ab762cbeefb2b" address="unix:///run/containerd/s/728005898acb6e71a2a6df7841b2237f47d201c51638031cfc4a19e1f80352a0" protocol=ttrpc version=3 Sep 12 00:18:57.905646 kubelet[2742]: I0912 00:18:57.905547 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7966cf8c7-k2jlv" podStartSLOduration=43.744772947 podStartE2EDuration="46.905416134s" podCreationTimestamp="2025-09-12 00:18:11 +0000 UTC" firstStartedPulling="2025-09-12 00:18:54.552167714 +0000 UTC m=+59.457730606" lastFinishedPulling="2025-09-12 00:18:57.712810901 +0000 UTC m=+62.618373793" observedRunningTime="2025-09-12 00:18:57.903550873 +0000 UTC m=+62.809113765" watchObservedRunningTime="2025-09-12 00:18:57.905416134 +0000 UTC m=+62.810979026" Sep 12 00:18:57.935339 systemd[1]: Started cri-containerd-7648f96506c817e0b433dba5498ee620c97326d8c0acabce782ab762cbeefb2b.scope - libcontainer container 7648f96506c817e0b433dba5498ee620c97326d8c0acabce782ab762cbeefb2b. 
Sep 12 00:18:58.010192 containerd[1593]: time="2025-09-12T00:18:58.010137455Z" level=info msg="StartContainer for \"7648f96506c817e0b433dba5498ee620c97326d8c0acabce782ab762cbeefb2b\" returns successfully" Sep 12 00:18:58.199651 containerd[1593]: time="2025-09-12T00:18:58.199353424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2x6v,Uid:5c37c94c-e43d-4388-9658-2398a6df2ea4,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:58.200053 containerd[1593]: time="2025-09-12T00:18:58.199941308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9cb6fbf4-9jgs9,Uid:e916b2ca-39a3-4524-af0b-67438570f595,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:58.465377 systemd-networkd[1499]: cali070ef8c3e21: Link UP Sep 12 00:18:58.466385 systemd-networkd[1499]: cali070ef8c3e21: Gained carrier Sep 12 00:18:58.771277 systemd-networkd[1499]: caliba8fe5e9e6f: Gained IPv6LL Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.362 [INFO][5083] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0 calico-kube-controllers-d9cb6fbf4- calico-system e916b2ca-39a3-4524-af0b-67438570f595 827 0 2025-09-12 00:18:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d9cb6fbf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d9cb6fbf4-9jgs9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali070ef8c3e21 [] [] }} ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Namespace="calico-system" Pod="calico-kube-controllers-d9cb6fbf4-9jgs9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-" Sep 12 00:18:58.858967 
containerd[1593]: 2025-09-12 00:18:58.362 [INFO][5083] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Namespace="calico-system" Pod="calico-kube-controllers-d9cb6fbf4-9jgs9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.407 [INFO][5105] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" HandleID="k8s-pod-network.a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Workload="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.408 [INFO][5105] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" HandleID="k8s-pod-network.a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Workload="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c22d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d9cb6fbf4-9jgs9", "timestamp":"2025-09-12 00:18:58.407922545 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.408 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.408 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.408 [INFO][5105] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.416 [INFO][5105] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" host="localhost" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.422 [INFO][5105] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.430 [INFO][5105] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.432 [INFO][5105] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.435 [INFO][5105] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.435 [INFO][5105] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" host="localhost" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.437 [INFO][5105] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058 Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.441 [INFO][5105] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" host="localhost" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.452 [INFO][5105] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" host="localhost" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.452 [INFO][5105] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" host="localhost" Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.452 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:18:58.858967 containerd[1593]: 2025-09-12 00:18:58.452 [INFO][5105] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" HandleID="k8s-pod-network.a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Workload="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0" Sep 12 00:18:58.860134 containerd[1593]: 2025-09-12 00:18:58.461 [INFO][5083] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Namespace="calico-system" Pod="calico-kube-controllers-d9cb6fbf4-9jgs9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0", GenerateName:"calico-kube-controllers-d9cb6fbf4-", Namespace:"calico-system", SelfLink:"", UID:"e916b2ca-39a3-4524-af0b-67438570f595", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9cb6fbf4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d9cb6fbf4-9jgs9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali070ef8c3e21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:58.860134 containerd[1593]: 2025-09-12 00:18:58.461 [INFO][5083] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Namespace="calico-system" Pod="calico-kube-controllers-d9cb6fbf4-9jgs9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0" Sep 12 00:18:58.860134 containerd[1593]: 2025-09-12 00:18:58.461 [INFO][5083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali070ef8c3e21 ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Namespace="calico-system" Pod="calico-kube-controllers-d9cb6fbf4-9jgs9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0" Sep 12 00:18:58.860134 containerd[1593]: 2025-09-12 00:18:58.466 [INFO][5083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Namespace="calico-system" Pod="calico-kube-controllers-d9cb6fbf4-9jgs9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0" Sep 12 00:18:58.860134 containerd[1593]: 2025-09-12 
00:18:58.466 [INFO][5083] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Namespace="calico-system" Pod="calico-kube-controllers-d9cb6fbf4-9jgs9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0", GenerateName:"calico-kube-controllers-d9cb6fbf4-", Namespace:"calico-system", SelfLink:"", UID:"e916b2ca-39a3-4524-af0b-67438570f595", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9cb6fbf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058", Pod:"calico-kube-controllers-d9cb6fbf4-9jgs9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali070ef8c3e21", MAC:"62:63:b6:c8:28:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:58.860134 containerd[1593]: 2025-09-12 
00:18:58.854 [INFO][5083] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" Namespace="calico-system" Pod="calico-kube-controllers-d9cb6fbf4-9jgs9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d9cb6fbf4--9jgs9-eth0" Sep 12 00:18:58.928944 kubelet[2742]: I0912 00:18:58.928890 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:18:58.931277 kubelet[2742]: E0912 00:18:58.930037 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:59.421852 kubelet[2742]: I0912 00:18:59.421760 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7966cf8c7-vgq56" podStartSLOduration=48.421742275 podStartE2EDuration="48.421742275s" podCreationTimestamp="2025-09-12 00:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:18:59.421635495 +0000 UTC m=+64.327198387" watchObservedRunningTime="2025-09-12 00:18:59.421742275 +0000 UTC m=+64.327305167" Sep 12 00:18:59.547496 containerd[1593]: time="2025-09-12T00:18:59.547445509Z" level=info msg="connecting to shim a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058" address="unix:///run/containerd/s/79e1d63e91c80268b9804719a39a90c8d1253f2d59dd06ead491dfcaddd7b2b6" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:59.595792 systemd-networkd[1499]: calia7eeffe5c16: Link UP Sep 12 00:18:59.598762 systemd-networkd[1499]: calia7eeffe5c16: Gained carrier Sep 12 00:18:59.634364 systemd[1]: Started cri-containerd-a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058.scope - libcontainer container a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058. 
Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:58.374 [INFO][5074] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--d2x6v-eth0 csi-node-driver- calico-system 5c37c94c-e43d-4388-9658-2398a6df2ea4 705 0 2025-09-12 00:18:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-d2x6v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia7eeffe5c16 [] [] }} ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" Namespace="calico-system" Pod="csi-node-driver-d2x6v" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2x6v-" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:58.374 [INFO][5074] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" Namespace="calico-system" Pod="csi-node-driver-d2x6v" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2x6v-eth0" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:58.427 [INFO][5113] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" HandleID="k8s-pod-network.b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" Workload="localhost-k8s-csi--node--driver--d2x6v-eth0" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:58.427 [INFO][5113] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" HandleID="k8s-pod-network.b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" 
Workload="localhost-k8s-csi--node--driver--d2x6v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000480b10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-d2x6v", "timestamp":"2025-09-12 00:18:58.427602048 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:58.428 [INFO][5113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:58.452 [INFO][5113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:58.452 [INFO][5113] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:58.806 [INFO][5113] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" host="localhost" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:58.901 [INFO][5113] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:59.246 [INFO][5113] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:59.405 [INFO][5113] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:59.422 [INFO][5113] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:59.423 [INFO][5113] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" host="localhost" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:59.463 [INFO][5113] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:59.558 [INFO][5113] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" host="localhost" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:59.577 [INFO][5113] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" host="localhost" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:59.578 [INFO][5113] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" host="localhost" Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:59.578 [INFO][5113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 00:18:59.635680 containerd[1593]: 2025-09-12 00:18:59.578 [INFO][5113] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" HandleID="k8s-pod-network.b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" Workload="localhost-k8s-csi--node--driver--d2x6v-eth0" Sep 12 00:18:59.636246 containerd[1593]: 2025-09-12 00:18:59.585 [INFO][5074] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" Namespace="calico-system" Pod="csi-node-driver-d2x6v" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2x6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d2x6v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5c37c94c-e43d-4388-9658-2398a6df2ea4", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-d2x6v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia7eeffe5c16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:59.636246 containerd[1593]: 2025-09-12 00:18:59.586 [INFO][5074] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" Namespace="calico-system" Pod="csi-node-driver-d2x6v" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2x6v-eth0" Sep 12 00:18:59.636246 containerd[1593]: 2025-09-12 00:18:59.586 [INFO][5074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7eeffe5c16 ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" Namespace="calico-system" Pod="csi-node-driver-d2x6v" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2x6v-eth0" Sep 12 00:18:59.636246 containerd[1593]: 2025-09-12 00:18:59.601 [INFO][5074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" Namespace="calico-system" Pod="csi-node-driver-d2x6v" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2x6v-eth0" Sep 12 00:18:59.636246 containerd[1593]: 2025-09-12 00:18:59.606 [INFO][5074] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" Namespace="calico-system" Pod="csi-node-driver-d2x6v" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2x6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--d2x6v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5c37c94c-e43d-4388-9658-2398a6df2ea4", ResourceVersion:"705", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a", Pod:"csi-node-driver-d2x6v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia7eeffe5c16", MAC:"36:dd:87:01:bc:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:59.636246 containerd[1593]: 2025-09-12 00:18:59.621 [INFO][5074] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" Namespace="calico-system" Pod="csi-node-driver-d2x6v" WorkloadEndpoint="localhost-k8s-csi--node--driver--d2x6v-eth0" Sep 12 00:18:59.677398 containerd[1593]: time="2025-09-12T00:18:59.677256533Z" level=info msg="connecting to shim b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a" address="unix:///run/containerd/s/3b478c9cea60f6af508f5d1ad3a1919e772cc0c4da350c3de29e0df09d55a4dc" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:59.683489 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No 
such device or address Sep 12 00:18:59.709338 systemd[1]: Started cri-containerd-b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a.scope - libcontainer container b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a. Sep 12 00:18:59.725488 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:18:59.746972 containerd[1593]: time="2025-09-12T00:18:59.746901985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2x6v,Uid:5c37c94c-e43d-4388-9658-2398a6df2ea4,Namespace:calico-system,Attempt:0,} returns sandbox id \"b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a\"" Sep 12 00:18:59.749429 containerd[1593]: time="2025-09-12T00:18:59.749377631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 00:18:59.756207 containerd[1593]: time="2025-09-12T00:18:59.756118702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9cb6fbf4-9jgs9,Uid:e916b2ca-39a3-4524-af0b-67438570f595,Namespace:calico-system,Attempt:0,} returns sandbox id \"a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058\"" Sep 12 00:19:00.198796 containerd[1593]: time="2025-09-12T00:19:00.198731882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ppnzq,Uid:5e62d2f6-3470-41e9-9111-e097821131f8,Namespace:calico-system,Attempt:0,}" Sep 12 00:19:00.307360 systemd-networkd[1499]: cali070ef8c3e21: Gained IPv6LL Sep 12 00:19:00.458340 systemd-networkd[1499]: calib46feb29276: Link UP Sep 12 00:19:00.460301 systemd-networkd[1499]: calib46feb29276: Gained carrier Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.237 [INFO][5246] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--ppnzq-eth0 goldmane-54d579b49d- calico-system 5e62d2f6-3470-41e9-9111-e097821131f8 837 0 2025-09-12 
00:18:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-ppnzq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib46feb29276 [] [] }} ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Namespace="calico-system" Pod="goldmane-54d579b49d-ppnzq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ppnzq-" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.238 [INFO][5246] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Namespace="calico-system" Pod="goldmane-54d579b49d-ppnzq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ppnzq-eth0" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.266 [INFO][5260] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" HandleID="k8s-pod-network.df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Workload="localhost-k8s-goldmane--54d579b49d--ppnzq-eth0" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.266 [INFO][5260] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" HandleID="k8s-pod-network.df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Workload="localhost-k8s-goldmane--54d579b49d--ppnzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7070), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-ppnzq", "timestamp":"2025-09-12 00:19:00.266230832 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.266 [INFO][5260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.266 [INFO][5260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.266 [INFO][5260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.273 [INFO][5260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" host="localhost" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.278 [INFO][5260] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.282 [INFO][5260] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.284 [INFO][5260] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.286 [INFO][5260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.286 [INFO][5260] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" host="localhost" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.288 [INFO][5260] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840 Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.378 [INFO][5260] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" host="localhost" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.451 [INFO][5260] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" host="localhost" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.451 [INFO][5260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" host="localhost" Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.451 [INFO][5260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 00:19:00.518284 containerd[1593]: 2025-09-12 00:19:00.451 [INFO][5260] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" HandleID="k8s-pod-network.df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Workload="localhost-k8s-goldmane--54d579b49d--ppnzq-eth0" Sep 12 00:19:00.519044 containerd[1593]: 2025-09-12 00:19:00.455 [INFO][5246] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Namespace="calico-system" Pod="goldmane-54d579b49d-ppnzq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ppnzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--ppnzq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5e62d2f6-3470-41e9-9111-e097821131f8", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 13, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-ppnzq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib46feb29276", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:19:00.519044 containerd[1593]: 2025-09-12 00:19:00.455 [INFO][5246] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Namespace="calico-system" Pod="goldmane-54d579b49d-ppnzq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ppnzq-eth0" Sep 12 00:19:00.519044 containerd[1593]: 2025-09-12 00:19:00.455 [INFO][5246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib46feb29276 ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Namespace="calico-system" Pod="goldmane-54d579b49d-ppnzq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ppnzq-eth0" Sep 12 00:19:00.519044 containerd[1593]: 2025-09-12 00:19:00.457 [INFO][5246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Namespace="calico-system" Pod="goldmane-54d579b49d-ppnzq" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ppnzq-eth0" Sep 12 00:19:00.519044 containerd[1593]: 2025-09-12 00:19:00.458 [INFO][5246] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Namespace="calico-system" Pod="goldmane-54d579b49d-ppnzq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ppnzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--ppnzq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5e62d2f6-3470-41e9-9111-e097821131f8", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840", Pod:"goldmane-54d579b49d-ppnzq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib46feb29276", MAC:"22:89:2a:9d:2c:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:19:00.519044 containerd[1593]: 2025-09-12 00:19:00.514 [INFO][5246] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" Namespace="calico-system" Pod="goldmane-54d579b49d-ppnzq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--ppnzq-eth0" Sep 12 00:19:00.691371 systemd-networkd[1499]: calia7eeffe5c16: Gained IPv6LL Sep 12 00:19:00.792711 containerd[1593]: time="2025-09-12T00:19:00.792564136Z" level=info msg="connecting to shim df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840" address="unix:///run/containerd/s/50caf5257e0e4167a9a8aac0907b2d10b50d697db9e7ce21c73e3a59e365f202" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:19:00.825288 systemd[1]: Started cri-containerd-df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840.scope - libcontainer container df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840. Sep 12 00:19:00.841294 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:19:00.877297 containerd[1593]: time="2025-09-12T00:19:00.877235860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-ppnzq,Uid:5e62d2f6-3470-41e9-9111-e097821131f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840\"" Sep 12 00:19:01.597560 containerd[1593]: time="2025-09-12T00:19:01.597477914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:19:01.598468 containerd[1593]: time="2025-09-12T00:19:01.598438004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 00:19:01.600369 containerd[1593]: time="2025-09-12T00:19:01.600286607Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 12 00:19:01.602907 containerd[1593]: time="2025-09-12T00:19:01.602863351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:19:01.603689 containerd[1593]: time="2025-09-12T00:19:01.603620909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.854205437s" Sep 12 00:19:01.603689 containerd[1593]: time="2025-09-12T00:19:01.603679703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 00:19:01.604676 containerd[1593]: time="2025-09-12T00:19:01.604620166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 00:19:01.608540 containerd[1593]: time="2025-09-12T00:19:01.608486630Z" level=info msg="CreateContainer within sandbox \"b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 00:19:01.620121 containerd[1593]: time="2025-09-12T00:19:01.620064299Z" level=info msg="Container edfeea6d8993f8a80bc0555f0bba954901896d7fb4ab48c88ed3e5cd4ade25bb: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:19:01.631007 containerd[1593]: time="2025-09-12T00:19:01.630962049Z" level=info msg="CreateContainer within sandbox \"b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"edfeea6d8993f8a80bc0555f0bba954901896d7fb4ab48c88ed3e5cd4ade25bb\"" Sep 12 00:19:01.631645 containerd[1593]: 
time="2025-09-12T00:19:01.631580258Z" level=info msg="StartContainer for \"edfeea6d8993f8a80bc0555f0bba954901896d7fb4ab48c88ed3e5cd4ade25bb\"" Sep 12 00:19:01.633069 containerd[1593]: time="2025-09-12T00:19:01.633026961Z" level=info msg="connecting to shim edfeea6d8993f8a80bc0555f0bba954901896d7fb4ab48c88ed3e5cd4ade25bb" address="unix:///run/containerd/s/3b478c9cea60f6af508f5d1ad3a1919e772cc0c4da350c3de29e0df09d55a4dc" protocol=ttrpc version=3 Sep 12 00:19:01.651322 systemd-networkd[1499]: calib46feb29276: Gained IPv6LL Sep 12 00:19:01.659274 systemd[1]: Started cri-containerd-edfeea6d8993f8a80bc0555f0bba954901896d7fb4ab48c88ed3e5cd4ade25bb.scope - libcontainer container edfeea6d8993f8a80bc0555f0bba954901896d7fb4ab48c88ed3e5cd4ade25bb. Sep 12 00:19:01.902014 containerd[1593]: time="2025-09-12T00:19:01.901885002Z" level=info msg="StartContainer for \"edfeea6d8993f8a80bc0555f0bba954901896d7fb4ab48c88ed3e5cd4ade25bb\" returns successfully" Sep 12 00:19:02.198239 kubelet[2742]: E0912 00:19:02.198118 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:19:02.228538 systemd[1]: Started sshd@9-10.0.0.88:22-10.0.0.1:55798.service - OpenSSH per-connection server daemon (10.0.0.1:55798). Sep 12 00:19:02.338886 sshd[5362]: Accepted publickey for core from 10.0.0.1 port 55798 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA Sep 12 00:19:02.340944 sshd-session[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:02.345894 systemd-logind[1577]: New session 10 of user core. Sep 12 00:19:02.351251 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 12 00:19:02.695813 sshd[5365]: Connection closed by 10.0.0.1 port 55798 Sep 12 00:19:02.696187 sshd-session[5362]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:02.699446 systemd[1]: sshd@9-10.0.0.88:22-10.0.0.1:55798.service: Deactivated successfully. Sep 12 00:19:02.701585 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 00:19:02.703172 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Sep 12 00:19:02.704995 systemd-logind[1577]: Removed session 10. Sep 12 00:19:06.846725 containerd[1593]: time="2025-09-12T00:19:06.846659641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:19:06.901562 containerd[1593]: time="2025-09-12T00:19:06.901452019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 00:19:06.904255 containerd[1593]: time="2025-09-12T00:19:06.904158893Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:19:06.907236 containerd[1593]: time="2025-09-12T00:19:06.907123214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:19:06.907921 containerd[1593]: time="2025-09-12T00:19:06.907852913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.303184552s" Sep 12 
00:19:06.907921 containerd[1593]: time="2025-09-12T00:19:06.907906887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 00:19:06.909249 containerd[1593]: time="2025-09-12T00:19:06.909183851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 00:19:06.923287 containerd[1593]: time="2025-09-12T00:19:06.923222095Z" level=info msg="CreateContainer within sandbox \"a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 00:19:07.119336 containerd[1593]: time="2025-09-12T00:19:07.118137624Z" level=info msg="Container 1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:19:07.173215 containerd[1593]: time="2025-09-12T00:19:07.173161197Z" level=info msg="CreateContainer within sandbox \"a59c335208f1e49a91789baee9b3d2e9a242bb4907712a9f01b2b80ef1d5c058\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\"" Sep 12 00:19:07.173768 containerd[1593]: time="2025-09-12T00:19:07.173719974Z" level=info msg="StartContainer for \"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\"" Sep 12 00:19:07.174963 containerd[1593]: time="2025-09-12T00:19:07.174888277Z" level=info msg="connecting to shim 1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682" address="unix:///run/containerd/s/79e1d63e91c80268b9804719a39a90c8d1253f2d59dd06ead491dfcaddd7b2b6" protocol=ttrpc version=3 Sep 12 00:19:07.199396 systemd[1]: Started cri-containerd-1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682.scope - libcontainer container 1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682. 
Sep 12 00:19:07.251339 containerd[1593]: time="2025-09-12T00:19:07.251276352Z" level=info msg="StartContainer for \"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\" returns successfully"
Sep 12 00:19:07.711565 systemd[1]: Started sshd@10-10.0.0.88:22-10.0.0.1:55812.service - OpenSSH per-connection server daemon (10.0.0.1:55812).
Sep 12 00:19:07.788591 sshd[5431]: Accepted publickey for core from 10.0.0.1 port 55812 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:07.790953 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:07.795976 systemd-logind[1577]: New session 11 of user core.
Sep 12 00:19:07.806330 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 00:19:07.947367 sshd[5435]: Connection closed by 10.0.0.1 port 55812
Sep 12 00:19:07.947762 sshd-session[5431]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:07.957090 systemd[1]: sshd@10-10.0.0.88:22-10.0.0.1:55812.service: Deactivated successfully.
Sep 12 00:19:07.959417 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 00:19:07.960399 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit.
Sep 12 00:19:07.964698 systemd[1]: Started sshd@11-10.0.0.88:22-10.0.0.1:55828.service - OpenSSH per-connection server daemon (10.0.0.1:55828).
Sep 12 00:19:07.965543 systemd-logind[1577]: Removed session 11.
Sep 12 00:19:08.027855 sshd[5449]: Accepted publickey for core from 10.0.0.1 port 55828 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:08.030695 sshd-session[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:08.037343 systemd-logind[1577]: New session 12 of user core.
Sep 12 00:19:08.044366 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 00:19:08.081662 containerd[1593]: time="2025-09-12T00:19:08.081599936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\" id:\"ac909a7286d7d1ca4eca8cc8ae2e046c52ceb5d4ed7887231546925febda8607\" pid:5466 exited_at:{seconds:1757636348 nanos:81329665}"
Sep 12 00:19:08.113056 kubelet[2742]: I0912 00:19:08.112944 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d9cb6fbf4-9jgs9" podStartSLOduration=46.961788508 podStartE2EDuration="54.112921795s" podCreationTimestamp="2025-09-12 00:18:14 +0000 UTC" firstStartedPulling="2025-09-12 00:18:59.757717652 +0000 UTC m=+64.663280544" lastFinishedPulling="2025-09-12 00:19:06.908850919 +0000 UTC m=+71.814413831" observedRunningTime="2025-09-12 00:19:08.112657877 +0000 UTC m=+73.018220769" watchObservedRunningTime="2025-09-12 00:19:08.112921795 +0000 UTC m=+73.018484687"
Sep 12 00:19:08.198289 kubelet[2742]: E0912 00:19:08.198233 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:19:09.168516 sshd[5462]: Connection closed by 10.0.0.1 port 55828
Sep 12 00:19:09.169143 sshd-session[5449]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:09.182374 systemd[1]: sshd@11-10.0.0.88:22-10.0.0.1:55828.service: Deactivated successfully.
Sep 12 00:19:09.184937 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 00:19:09.185739 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit.
Sep 12 00:19:09.188976 systemd[1]: Started sshd@12-10.0.0.88:22-10.0.0.1:55838.service - OpenSSH per-connection server daemon (10.0.0.1:55838).
Sep 12 00:19:09.189665 systemd-logind[1577]: Removed session 12.
Sep 12 00:19:09.250836 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 55838 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:09.252480 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:09.257353 systemd-logind[1577]: New session 13 of user core.
Sep 12 00:19:09.265259 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 00:19:09.451185 sshd[5495]: Connection closed by 10.0.0.1 port 55838
Sep 12 00:19:09.451820 sshd-session[5492]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:09.458089 systemd[1]: sshd@12-10.0.0.88:22-10.0.0.1:55838.service: Deactivated successfully.
Sep 12 00:19:09.460662 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 00:19:09.461548 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit.
Sep 12 00:19:09.463157 systemd-logind[1577]: Removed session 13.
Sep 12 00:19:10.385300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305529566.mount: Deactivated successfully.
Sep 12 00:19:11.323732 containerd[1593]: time="2025-09-12T00:19:11.323652218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:19:11.378847 containerd[1593]: time="2025-09-12T00:19:11.378750114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 12 00:19:11.391111 containerd[1593]: time="2025-09-12T00:19:11.391036461Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:19:11.423674 containerd[1593]: time="2025-09-12T00:19:11.423620471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:19:11.424507 containerd[1593]: time="2025-09-12T00:19:11.424475234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.51524262s"
Sep 12 00:19:11.424570 containerd[1593]: time="2025-09-12T00:19:11.424511614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 12 00:19:11.425699 containerd[1593]: time="2025-09-12T00:19:11.425547286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 00:19:11.427121 containerd[1593]: time="2025-09-12T00:19:11.426857255Z" level=info msg="CreateContainer within sandbox \"df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 00:19:11.642397 containerd[1593]: time="2025-09-12T00:19:11.642334154Z" level=info msg="Container ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:19:11.823471 containerd[1593]: time="2025-09-12T00:19:11.823424176Z" level=info msg="CreateContainer within sandbox \"df481ab0b7c2379107ab85784f0c16adc9fa6cb6e6f2b156b746dbb92e595840\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\""
Sep 12 00:19:11.824179 containerd[1593]: time="2025-09-12T00:19:11.823889130Z" level=info msg="StartContainer for \"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\""
Sep 12 00:19:11.825031 containerd[1593]: time="2025-09-12T00:19:11.824976572Z" level=info msg="connecting to shim ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404" address="unix:///run/containerd/s/50caf5257e0e4167a9a8aac0907b2d10b50d697db9e7ce21c73e3a59e365f202" protocol=ttrpc version=3
Sep 12 00:19:11.851317 systemd[1]: Started cri-containerd-ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404.scope - libcontainer container ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404.
Sep 12 00:19:11.976125 containerd[1593]: time="2025-09-12T00:19:11.975980316Z" level=info msg="StartContainer for \"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\" returns successfully"
Sep 12 00:19:12.052620 kubelet[2742]: I0912 00:19:12.050689 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-ppnzq" podStartSLOduration=48.504159256 podStartE2EDuration="59.050666779s" podCreationTimestamp="2025-09-12 00:18:13 +0000 UTC" firstStartedPulling="2025-09-12 00:19:00.878913881 +0000 UTC m=+65.784476773" lastFinishedPulling="2025-09-12 00:19:11.425421404 +0000 UTC m=+76.330984296" observedRunningTime="2025-09-12 00:19:12.049042678 +0000 UTC m=+76.954605570" watchObservedRunningTime="2025-09-12 00:19:12.050666779 +0000 UTC m=+76.956229681"
Sep 12 00:19:12.134699 containerd[1593]: time="2025-09-12T00:19:12.134644304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\" id:\"665da6a5f3f98ddacae9e43d08029b550827312693aa9aaea9e78b899ed746e3\" pid:5567 exit_status:1 exited_at:{seconds:1757636352 nanos:134197656}"
Sep 12 00:19:13.118084 containerd[1593]: time="2025-09-12T00:19:13.118027719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\" id:\"5e6b058d9036d210fa99bd4c1d6bacb8928b33037a7f9f4d721a6453292bc5db\" pid:5594 exit_status:1 exited_at:{seconds:1757636353 nanos:117689378}"
Sep 12 00:19:14.216314 containerd[1593]: time="2025-09-12T00:19:14.216249501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:19:14.217002 containerd[1593]: time="2025-09-12T00:19:14.216971396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 00:19:14.218261 containerd[1593]: time="2025-09-12T00:19:14.218233189Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:19:14.220576 containerd[1593]: time="2025-09-12T00:19:14.220525368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:19:14.221479 containerd[1593]: time="2025-09-12T00:19:14.220962437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.795386185s"
Sep 12 00:19:14.221479 containerd[1593]: time="2025-09-12T00:19:14.220998276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 00:19:14.223926 containerd[1593]: time="2025-09-12T00:19:14.223889254Z" level=info msg="CreateContainer within sandbox \"b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 00:19:14.231687 containerd[1593]: time="2025-09-12T00:19:14.231631194Z" level=info msg="Container 6f64a426ae93d6d91b00afb250c4c2db087b5b7e5c6e8846d1270b4dd642bc39: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:19:14.243179 containerd[1593]: time="2025-09-12T00:19:14.243067064Z" level=info msg="CreateContainer within sandbox \"b67d2f3aef8aadab23cc6c57db00406e6a592040b7acf85e91f36ca20f21dc9a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6f64a426ae93d6d91b00afb250c4c2db087b5b7e5c6e8846d1270b4dd642bc39\""
Sep 12 00:19:14.244041 containerd[1593]: time="2025-09-12T00:19:14.243994193Z" level=info msg="StartContainer for \"6f64a426ae93d6d91b00afb250c4c2db087b5b7e5c6e8846d1270b4dd642bc39\""
Sep 12 00:19:14.245856 containerd[1593]: time="2025-09-12T00:19:14.245802494Z" level=info msg="connecting to shim 6f64a426ae93d6d91b00afb250c4c2db087b5b7e5c6e8846d1270b4dd642bc39" address="unix:///run/containerd/s/3b478c9cea60f6af508f5d1ad3a1919e772cc0c4da350c3de29e0df09d55a4dc" protocol=ttrpc version=3
Sep 12 00:19:14.279367 systemd[1]: Started cri-containerd-6f64a426ae93d6d91b00afb250c4c2db087b5b7e5c6e8846d1270b4dd642bc39.scope - libcontainer container 6f64a426ae93d6d91b00afb250c4c2db087b5b7e5c6e8846d1270b4dd642bc39.
Sep 12 00:19:14.332641 containerd[1593]: time="2025-09-12T00:19:14.332572928Z" level=info msg="StartContainer for \"6f64a426ae93d6d91b00afb250c4c2db087b5b7e5c6e8846d1270b4dd642bc39\" returns successfully"
Sep 12 00:19:14.469828 systemd[1]: Started sshd@13-10.0.0.88:22-10.0.0.1:35340.service - OpenSSH per-connection server daemon (10.0.0.1:35340).
Sep 12 00:19:14.598218 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 35340 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:14.600657 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:14.607303 systemd-logind[1577]: New session 14 of user core.
Sep 12 00:19:14.617411 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 00:19:14.788561 sshd[5647]: Connection closed by 10.0.0.1 port 35340
Sep 12 00:19:14.790330 sshd-session[5644]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:14.795569 systemd[1]: sshd@13-10.0.0.88:22-10.0.0.1:35340.service: Deactivated successfully.
Sep 12 00:19:14.798437 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 00:19:14.800475 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit.
Sep 12 00:19:14.802089 systemd-logind[1577]: Removed session 14.
Sep 12 00:19:15.279276 kubelet[2742]: I0912 00:19:15.279206 2742 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 00:19:15.279276 kubelet[2742]: I0912 00:19:15.279254 2742 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 00:19:16.198426 kubelet[2742]: E0912 00:19:16.198381 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:19:16.914212 containerd[1593]: time="2025-09-12T00:19:16.914167919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\" id:\"30d9cd656d30cc016263e1ff9317c962466d03d03f49fc726a00fc38beba98dd\" pid:5673 exited_at:{seconds:1757636356 nanos:913795143}"
Sep 12 00:19:16.929506 kubelet[2742]: I0912 00:19:16.929433 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d2x6v" podStartSLOduration=48.456108039 podStartE2EDuration="1m2.929413666s" podCreationTimestamp="2025-09-12 00:18:14 +0000 UTC" firstStartedPulling="2025-09-12 00:18:59.748614208 +0000 UTC m=+64.654177100" lastFinishedPulling="2025-09-12 00:19:14.221919834 +0000 UTC m=+79.127482727" observedRunningTime="2025-09-12 00:19:15.078412186 +0000 UTC m=+79.983975098" watchObservedRunningTime="2025-09-12 00:19:16.929413666 +0000 UTC m=+81.834976558"
Sep 12 00:19:19.201239 kubelet[2742]: E0912 00:19:19.201190 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:19:19.805043 systemd[1]: Started sshd@14-10.0.0.88:22-10.0.0.1:35348.service - OpenSSH per-connection server daemon (10.0.0.1:35348).
Sep 12 00:19:19.869034 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 35348 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:19.870596 sshd-session[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:19.874739 systemd-logind[1577]: New session 15 of user core.
Sep 12 00:19:19.886257 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 00:19:20.160607 sshd[5690]: Connection closed by 10.0.0.1 port 35348
Sep 12 00:19:20.160959 sshd-session[5687]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:20.165744 systemd[1]: sshd@14-10.0.0.88:22-10.0.0.1:35348.service: Deactivated successfully.
Sep 12 00:19:20.168342 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 00:19:20.169352 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit.
Sep 12 00:19:20.171021 systemd-logind[1577]: Removed session 15.
Sep 12 00:19:25.175572 systemd[1]: Started sshd@15-10.0.0.88:22-10.0.0.1:50746.service - OpenSSH per-connection server daemon (10.0.0.1:50746).
Sep 12 00:19:25.241947 sshd[5706]: Accepted publickey for core from 10.0.0.1 port 50746 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:25.244190 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:25.250210 systemd-logind[1577]: New session 16 of user core.
Sep 12 00:19:25.257319 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 00:19:25.378213 sshd[5709]: Connection closed by 10.0.0.1 port 50746
Sep 12 00:19:25.378853 sshd-session[5706]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:25.384094 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit.
Sep 12 00:19:25.384556 systemd[1]: sshd@15-10.0.0.88:22-10.0.0.1:50746.service: Deactivated successfully.
Sep 12 00:19:25.387343 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 00:19:25.390415 systemd-logind[1577]: Removed session 16.
Sep 12 00:19:25.393579 kubelet[2742]: I0912 00:19:25.393535 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 00:19:30.394063 systemd[1]: Started sshd@16-10.0.0.88:22-10.0.0.1:40380.service - OpenSSH per-connection server daemon (10.0.0.1:40380).
Sep 12 00:19:30.456257 sshd[5732]: Accepted publickey for core from 10.0.0.1 port 40380 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:30.458186 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:30.463221 systemd-logind[1577]: New session 17 of user core.
Sep 12 00:19:30.473276 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 00:19:30.599256 sshd[5735]: Connection closed by 10.0.0.1 port 40380
Sep 12 00:19:30.599659 sshd-session[5732]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:30.604912 systemd[1]: sshd@16-10.0.0.88:22-10.0.0.1:40380.service: Deactivated successfully.
Sep 12 00:19:30.607626 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 00:19:30.608688 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit.
Sep 12 00:19:30.610601 systemd-logind[1577]: Removed session 17.
Sep 12 00:19:34.197751 kubelet[2742]: E0912 00:19:34.197708 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:19:35.621484 systemd[1]: Started sshd@17-10.0.0.88:22-10.0.0.1:40386.service - OpenSSH per-connection server daemon (10.0.0.1:40386).
Sep 12 00:19:35.682359 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 40386 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:35.684517 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:35.689090 systemd-logind[1577]: New session 18 of user core.
Sep 12 00:19:35.699278 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 00:19:35.833939 sshd[5754]: Connection closed by 10.0.0.1 port 40386
Sep 12 00:19:35.834357 sshd-session[5751]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:35.839944 systemd[1]: sshd@17-10.0.0.88:22-10.0.0.1:40386.service: Deactivated successfully.
Sep 12 00:19:35.842350 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 00:19:35.843256 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit.
Sep 12 00:19:35.844773 systemd-logind[1577]: Removed session 18.
Sep 12 00:19:38.080760 containerd[1593]: time="2025-09-12T00:19:38.080706168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\" id:\"5bc37b2fbe8647275aebf06ad9e54c236860bc00d2f6bd25afcee2ba97bab8bf\" pid:5778 exited_at:{seconds:1757636378 nanos:80366553}"
Sep 12 00:19:40.849834 systemd[1]: Started sshd@18-10.0.0.88:22-10.0.0.1:45192.service - OpenSSH per-connection server daemon (10.0.0.1:45192).
Sep 12 00:19:40.910492 sshd[5791]: Accepted publickey for core from 10.0.0.1 port 45192 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:40.912243 sshd-session[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:40.917237 systemd-logind[1577]: New session 19 of user core.
Sep 12 00:19:40.927275 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 00:19:41.054375 sshd[5794]: Connection closed by 10.0.0.1 port 45192
Sep 12 00:19:41.055020 sshd-session[5791]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:41.063010 systemd[1]: sshd@18-10.0.0.88:22-10.0.0.1:45192.service: Deactivated successfully.
Sep 12 00:19:41.066385 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 00:19:41.067883 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit.
Sep 12 00:19:41.071095 systemd-logind[1577]: Removed session 19.
Sep 12 00:19:43.139501 containerd[1593]: time="2025-09-12T00:19:43.139452345Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\" id:\"8644100fbb865a2cb2b8fa7bc6e3934f81b01b1689e93a1236aed7549356040c\" pid:5817 exited_at:{seconds:1757636383 nanos:139017469}"
Sep 12 00:19:46.070917 systemd[1]: Started sshd@19-10.0.0.88:22-10.0.0.1:45206.service - OpenSSH per-connection server daemon (10.0.0.1:45206).
Sep 12 00:19:46.147470 sshd[5831]: Accepted publickey for core from 10.0.0.1 port 45206 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:46.149365 sshd-session[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:46.154564 systemd-logind[1577]: New session 20 of user core.
Sep 12 00:19:46.166444 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 00:19:46.321203 sshd[5834]: Connection closed by 10.0.0.1 port 45206
Sep 12 00:19:46.321464 sshd-session[5831]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:46.325767 systemd[1]: sshd@19-10.0.0.88:22-10.0.0.1:45206.service: Deactivated successfully.
Sep 12 00:19:46.328046 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 00:19:46.328814 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit.
Sep 12 00:19:46.329969 systemd-logind[1577]: Removed session 20.
Sep 12 00:19:46.918464 containerd[1593]: time="2025-09-12T00:19:46.918385899Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\" id:\"74424b1b8e08cdfb0a767dd857fc91d0ac33f757d6abfc69c5452dc0c90c88f6\" pid:5859 exited_at:{seconds:1757636386 nanos:918012881}"
Sep 12 00:19:51.348086 systemd[1]: Started sshd@20-10.0.0.88:22-10.0.0.1:49960.service - OpenSSH per-connection server daemon (10.0.0.1:49960).
Sep 12 00:19:51.408071 sshd[5873]: Accepted publickey for core from 10.0.0.1 port 49960 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:51.410437 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:51.416915 systemd-logind[1577]: New session 21 of user core.
Sep 12 00:19:51.428230 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 00:19:51.581285 sshd[5876]: Connection closed by 10.0.0.1 port 49960
Sep 12 00:19:51.581645 sshd-session[5873]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:51.587610 systemd[1]: sshd@20-10.0.0.88:22-10.0.0.1:49960.service: Deactivated successfully.
Sep 12 00:19:51.590206 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 00:19:51.591557 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit.
Sep 12 00:19:51.593951 systemd-logind[1577]: Removed session 21.
Sep 12 00:19:54.968356 containerd[1593]: time="2025-09-12T00:19:54.968222033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\" id:\"2a716dd959f8f0f78c7266a1807b18f4806aeb233f6b2abc17200e1c525f402c\" pid:5901 exited_at:{seconds:1757636394 nanos:967808220}"
Sep 12 00:19:56.598435 systemd[1]: Started sshd@21-10.0.0.88:22-10.0.0.1:49976.service - OpenSSH per-connection server daemon (10.0.0.1:49976).
Sep 12 00:19:56.662963 sshd[5913]: Accepted publickey for core from 10.0.0.1 port 49976 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:19:56.665124 sshd-session[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:19:56.669929 systemd-logind[1577]: New session 22 of user core.
Sep 12 00:19:56.678284 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 00:19:56.832900 sshd[5916]: Connection closed by 10.0.0.1 port 49976
Sep 12 00:19:56.833407 sshd-session[5913]: pam_unix(sshd:session): session closed for user core
Sep 12 00:19:56.838666 systemd[1]: sshd@21-10.0.0.88:22-10.0.0.1:49976.service: Deactivated successfully.
Sep 12 00:19:56.840891 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 00:19:56.841825 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit.
Sep 12 00:19:56.843465 systemd-logind[1577]: Removed session 22.
Sep 12 00:20:00.198076 kubelet[2742]: E0912 00:20:00.198016 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:20:01.854142 systemd[1]: Started sshd@22-10.0.0.88:22-10.0.0.1:36218.service - OpenSSH per-connection server daemon (10.0.0.1:36218).
Sep 12 00:20:01.939715 sshd[5931]: Accepted publickey for core from 10.0.0.1 port 36218 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:01.941850 sshd-session[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:01.948687 systemd-logind[1577]: New session 23 of user core.
Sep 12 00:20:01.958420 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 00:20:02.076687 sshd[5934]: Connection closed by 10.0.0.1 port 36218
Sep 12 00:20:02.077089 sshd-session[5931]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:02.082320 systemd[1]: sshd@22-10.0.0.88:22-10.0.0.1:36218.service: Deactivated successfully.
Sep 12 00:20:02.084711 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 00:20:02.085714 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit.
Sep 12 00:20:02.087223 systemd-logind[1577]: Removed session 23.
Sep 12 00:20:04.799833 containerd[1593]: time="2025-09-12T00:20:04.799774047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\" id:\"c7295390c856584f8c25c9f7d4bed23ec0aac0e76037e573e77367c85ee085c4\" pid:5959 exited_at:{seconds:1757636404 nanos:799314508}"
Sep 12 00:20:07.092423 systemd[1]: Started sshd@23-10.0.0.88:22-10.0.0.1:36230.service - OpenSSH per-connection server daemon (10.0.0.1:36230).
Sep 12 00:20:07.164184 sshd[5971]: Accepted publickey for core from 10.0.0.1 port 36230 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:07.166294 sshd-session[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:07.171928 systemd-logind[1577]: New session 24 of user core.
Sep 12 00:20:07.184297 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 00:20:07.321223 sshd[5974]: Connection closed by 10.0.0.1 port 36230
Sep 12 00:20:07.321631 sshd-session[5971]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:07.325999 systemd[1]: sshd@23-10.0.0.88:22-10.0.0.1:36230.service: Deactivated successfully.
Sep 12 00:20:07.328044 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 00:20:07.328903 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit.
Sep 12 00:20:07.330161 systemd-logind[1577]: Removed session 24.
Sep 12 00:20:08.072601 containerd[1593]: time="2025-09-12T00:20:08.072543697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\" id:\"9b11428abf56af20238757947edf19f1909b88b47b68295e1c9ec68c65a9720b\" pid:5999 exited_at:{seconds:1757636408 nanos:72230065}"
Sep 12 00:20:09.198650 kubelet[2742]: E0912 00:20:09.198587 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:20:11.197835 kubelet[2742]: E0912 00:20:11.197762 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:20:12.340077 systemd[1]: Started sshd@24-10.0.0.88:22-10.0.0.1:60742.service - OpenSSH per-connection server daemon (10.0.0.1:60742).
Sep 12 00:20:12.408252 sshd[6016]: Accepted publickey for core from 10.0.0.1 port 60742 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:12.409894 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:12.414405 systemd-logind[1577]: New session 25 of user core.
Sep 12 00:20:12.425264 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 00:20:12.532926 sshd[6019]: Connection closed by 10.0.0.1 port 60742
Sep 12 00:20:12.533342 sshd-session[6016]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:12.537463 systemd[1]: sshd@24-10.0.0.88:22-10.0.0.1:60742.service: Deactivated successfully.
Sep 12 00:20:12.539692 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 00:20:12.540622 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit.
Sep 12 00:20:12.541908 systemd-logind[1577]: Removed session 25.
Sep 12 00:20:13.117231 containerd[1593]: time="2025-09-12T00:20:13.117170484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\" id:\"2b0eee4a79970f64fbbbd6cc62df8fd73ea384b511c56fbff9ae7ad2f0900c8a\" pid:6044 exited_at:{seconds:1757636413 nanos:116843257}"
Sep 12 00:20:15.198230 kubelet[2742]: E0912 00:20:15.198175 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:20:16.914825 containerd[1593]: time="2025-09-12T00:20:16.914767675Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\" id:\"c8e5bee9969492825105b59d1a6070bea71259bc366c0969661e98a5a1c87e06\" pid:6067 exited_at:{seconds:1757636416 nanos:914391295}"
Sep 12 00:20:17.198517 kubelet[2742]: E0912 00:20:17.198356 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:20:17.550121 systemd[1]: Started sshd@25-10.0.0.88:22-10.0.0.1:60744.service - OpenSSH per-connection server daemon (10.0.0.1:60744).
Sep 12 00:20:17.628401 sshd[6082]: Accepted publickey for core from 10.0.0.1 port 60744 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:17.630421 sshd-session[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:17.635608 systemd-logind[1577]: New session 26 of user core.
Sep 12 00:20:17.642402 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 00:20:18.117339 sshd[6085]: Connection closed by 10.0.0.1 port 60744
Sep 12 00:20:18.117721 sshd-session[6082]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:18.122700 systemd[1]: sshd@25-10.0.0.88:22-10.0.0.1:60744.service: Deactivated successfully.
Sep 12 00:20:18.125483 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 00:20:18.126354 systemd-logind[1577]: Session 26 logged out. Waiting for processes to exit.
Sep 12 00:20:18.127851 systemd-logind[1577]: Removed session 26.
Sep 12 00:20:23.129799 systemd[1]: Started sshd@26-10.0.0.88:22-10.0.0.1:60294.service - OpenSSH per-connection server daemon (10.0.0.1:60294).
Sep 12 00:20:23.192685 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 60294 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:23.194368 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:23.198934 systemd-logind[1577]: New session 27 of user core.
Sep 12 00:20:23.206236 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 00:20:23.335128 sshd[6116]: Connection closed by 10.0.0.1 port 60294
Sep 12 00:20:23.335508 sshd-session[6113]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:23.339961 systemd[1]: sshd@26-10.0.0.88:22-10.0.0.1:60294.service: Deactivated successfully.
Sep 12 00:20:23.341994 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 00:20:23.342886 systemd-logind[1577]: Session 27 logged out. Waiting for processes to exit.
Sep 12 00:20:23.344246 systemd-logind[1577]: Removed session 27.
Sep 12 00:20:28.348926 systemd[1]: Started sshd@27-10.0.0.88:22-10.0.0.1:60298.service - OpenSSH per-connection server daemon (10.0.0.1:60298).
Sep 12 00:20:28.408245 sshd[6137]: Accepted publickey for core from 10.0.0.1 port 60298 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:28.409800 sshd-session[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:28.414385 systemd-logind[1577]: New session 28 of user core.
Sep 12 00:20:28.426235 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 00:20:28.538533 sshd[6141]: Connection closed by 10.0.0.1 port 60298
Sep 12 00:20:28.538860 sshd-session[6137]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:28.544430 systemd[1]: sshd@27-10.0.0.88:22-10.0.0.1:60298.service: Deactivated successfully.
Sep 12 00:20:28.546850 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 00:20:28.547801 systemd-logind[1577]: Session 28 logged out. Waiting for processes to exit.
Sep 12 00:20:28.549372 systemd-logind[1577]: Removed session 28.
Sep 12 00:20:33.560737 systemd[1]: Started sshd@28-10.0.0.88:22-10.0.0.1:51456.service - OpenSSH per-connection server daemon (10.0.0.1:51456).
Sep 12 00:20:33.628661 sshd[6157]: Accepted publickey for core from 10.0.0.1 port 51456 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:33.630801 sshd-session[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:33.635917 systemd-logind[1577]: New session 29 of user core.
Sep 12 00:20:33.648296 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 12 00:20:33.796394 sshd[6160]: Connection closed by 10.0.0.1 port 51456
Sep 12 00:20:33.796792 sshd-session[6157]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:33.802192 systemd[1]: sshd@28-10.0.0.88:22-10.0.0.1:51456.service: Deactivated successfully.
Sep 12 00:20:33.804284 systemd[1]: session-29.scope: Deactivated successfully.
Sep 12 00:20:33.805037 systemd-logind[1577]: Session 29 logged out. Waiting for processes to exit.
Sep 12 00:20:33.806976 systemd-logind[1577]: Removed session 29.
Sep 12 00:20:38.075646 containerd[1593]: time="2025-09-12T00:20:38.075527487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\" id:\"65f87ba8be8741b082d4a46bd1ce59c1ccbc69e963cd721ca2d0f548e223c8eb\" pid:6184 exited_at:{seconds:1757636438 nanos:75243672}"
Sep 12 00:20:38.811004 systemd[1]: Started sshd@29-10.0.0.88:22-10.0.0.1:51470.service - OpenSSH per-connection server daemon (10.0.0.1:51470).
Sep 12 00:20:38.874077 sshd[6195]: Accepted publickey for core from 10.0.0.1 port 51470 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:38.875695 sshd-session[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:38.880900 systemd-logind[1577]: New session 30 of user core.
Sep 12 00:20:38.892252 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 12 00:20:39.006042 sshd[6198]: Connection closed by 10.0.0.1 port 51470
Sep 12 00:20:39.006434 sshd-session[6195]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:39.010901 systemd[1]: sshd@29-10.0.0.88:22-10.0.0.1:51470.service: Deactivated successfully.
Sep 12 00:20:39.013067 systemd[1]: session-30.scope: Deactivated successfully.
Sep 12 00:20:39.014045 systemd-logind[1577]: Session 30 logged out. Waiting for processes to exit.
Sep 12 00:20:39.015526 systemd-logind[1577]: Removed session 30.
Sep 12 00:20:43.122287 containerd[1593]: time="2025-09-12T00:20:43.122230730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\" id:\"c8577c19b2d958b526ce21afa8f620259cefc29d31f6476f5718b6dc970f6f3e\" pid:6223 exited_at:{seconds:1757636443 nanos:121825566}"
Sep 12 00:20:44.022423 systemd[1]: Started sshd@30-10.0.0.88:22-10.0.0.1:42318.service - OpenSSH per-connection server daemon (10.0.0.1:42318).
Sep 12 00:20:44.085405 sshd[6236]: Accepted publickey for core from 10.0.0.1 port 42318 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:44.086831 sshd-session[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:44.090977 systemd-logind[1577]: New session 31 of user core.
Sep 12 00:20:44.104340 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 12 00:20:44.226747 sshd[6239]: Connection closed by 10.0.0.1 port 42318
Sep 12 00:20:44.227183 sshd-session[6236]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:44.232506 systemd[1]: sshd@30-10.0.0.88:22-10.0.0.1:42318.service: Deactivated successfully.
Sep 12 00:20:44.235078 systemd[1]: session-31.scope: Deactivated successfully.
Sep 12 00:20:44.235945 systemd-logind[1577]: Session 31 logged out. Waiting for processes to exit.
Sep 12 00:20:44.237762 systemd-logind[1577]: Removed session 31.
Sep 12 00:20:46.912707 containerd[1593]: time="2025-09-12T00:20:46.912639915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\" id:\"a76c6634b56dd9971479d049b309fc53291963e5e8b1c90b4cd990c9d89f8ec2\" pid:6263 exited_at:{seconds:1757636446 nanos:912300095}"
Sep 12 00:20:49.198680 kubelet[2742]: E0912 00:20:49.198620 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:20:49.248701 systemd[1]: Started sshd@31-10.0.0.88:22-10.0.0.1:42326.service - OpenSSH per-connection server daemon (10.0.0.1:42326).
Sep 12 00:20:49.317755 sshd[6276]: Accepted publickey for core from 10.0.0.1 port 42326 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:49.319628 sshd-session[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:49.324288 systemd-logind[1577]: New session 32 of user core.
Sep 12 00:20:49.335259 systemd[1]: Started session-32.scope - Session 32 of User core.
Sep 12 00:20:49.471688 sshd[6279]: Connection closed by 10.0.0.1 port 42326
Sep 12 00:20:49.471976 sshd-session[6276]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:49.476341 systemd[1]: sshd@31-10.0.0.88:22-10.0.0.1:42326.service: Deactivated successfully.
Sep 12 00:20:49.478542 systemd[1]: session-32.scope: Deactivated successfully.
Sep 12 00:20:49.479556 systemd-logind[1577]: Session 32 logged out. Waiting for processes to exit.
Sep 12 00:20:49.480865 systemd-logind[1577]: Removed session 32.
Sep 12 00:20:54.487384 systemd[1]: Started sshd@32-10.0.0.88:22-10.0.0.1:35380.service - OpenSSH per-connection server daemon (10.0.0.1:35380).
Sep 12 00:20:54.547803 sshd[6293]: Accepted publickey for core from 10.0.0.1 port 35380 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:54.549444 sshd-session[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:54.554025 systemd-logind[1577]: New session 33 of user core.
Sep 12 00:20:54.561249 systemd[1]: Started session-33.scope - Session 33 of User core.
Sep 12 00:20:54.685449 sshd[6296]: Connection closed by 10.0.0.1 port 35380
Sep 12 00:20:54.685856 sshd-session[6293]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:54.690708 systemd[1]: sshd@32-10.0.0.88:22-10.0.0.1:35380.service: Deactivated successfully.
Sep 12 00:20:54.693053 systemd[1]: session-33.scope: Deactivated successfully.
Sep 12 00:20:54.694050 systemd-logind[1577]: Session 33 logged out. Waiting for processes to exit.
Sep 12 00:20:54.695721 systemd-logind[1577]: Removed session 33.
Sep 12 00:20:54.965572 containerd[1593]: time="2025-09-12T00:20:54.965525308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\" id:\"1b1b671f80553b1f518e29d0fc2c95d07aa2f0df8ea5ce34a4310c33281e2dda\" pid:6321 exited_at:{seconds:1757636454 nanos:965088335}"
Sep 12 00:20:59.699596 systemd[1]: Started sshd@33-10.0.0.88:22-10.0.0.1:35386.service - OpenSSH per-connection server daemon (10.0.0.1:35386).
Sep 12 00:20:59.768923 sshd[6334]: Accepted publickey for core from 10.0.0.1 port 35386 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:20:59.772317 sshd-session[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:20:59.777665 systemd-logind[1577]: New session 34 of user core.
Sep 12 00:20:59.791485 systemd[1]: Started session-34.scope - Session 34 of User core.
Sep 12 00:20:59.959232 sshd[6337]: Connection closed by 10.0.0.1 port 35386
Sep 12 00:20:59.959472 sshd-session[6334]: pam_unix(sshd:session): session closed for user core
Sep 12 00:20:59.965176 systemd[1]: sshd@33-10.0.0.88:22-10.0.0.1:35386.service: Deactivated successfully.
Sep 12 00:20:59.967332 systemd[1]: session-34.scope: Deactivated successfully.
Sep 12 00:20:59.968144 systemd-logind[1577]: Session 34 logged out. Waiting for processes to exit.
Sep 12 00:20:59.969677 systemd-logind[1577]: Removed session 34.
Sep 12 00:21:03.198325 kubelet[2742]: E0912 00:21:03.198283 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:04.789143 containerd[1593]: time="2025-09-12T00:21:04.789082316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\" id:\"0a68a8db45f7ef29813257c9498e1e9dbe8db5ad96c84322937617bc8e23ef84\" pid:6366 exited_at:{seconds:1757636464 nanos:788761932}"
Sep 12 00:21:04.971135 systemd[1]: Started sshd@34-10.0.0.88:22-10.0.0.1:35306.service - OpenSSH per-connection server daemon (10.0.0.1:35306).
Sep 12 00:21:05.027685 sshd[6379]: Accepted publickey for core from 10.0.0.1 port 35306 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:05.029415 sshd-session[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:05.034216 systemd-logind[1577]: New session 35 of user core.
Sep 12 00:21:05.041254 systemd[1]: Started session-35.scope - Session 35 of User core.
Sep 12 00:21:05.154747 sshd[6382]: Connection closed by 10.0.0.1 port 35306
Sep 12 00:21:05.155128 sshd-session[6379]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:05.158516 systemd[1]: sshd@34-10.0.0.88:22-10.0.0.1:35306.service: Deactivated successfully.
Sep 12 00:21:05.160791 systemd[1]: session-35.scope: Deactivated successfully.
Sep 12 00:21:05.162484 systemd-logind[1577]: Session 35 logged out. Waiting for processes to exit.
Sep 12 00:21:05.164009 systemd-logind[1577]: Removed session 35.
Sep 12 00:21:08.077490 containerd[1593]: time="2025-09-12T00:21:08.077435628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\" id:\"b9db6a148cd544ac4240921ebe2dee1b77a4c8541882c17d0f1f99d247ad3a2d\" pid:6406 exited_at:{seconds:1757636468 nanos:77071702}"
Sep 12 00:21:10.177335 systemd[1]: Started sshd@35-10.0.0.88:22-10.0.0.1:36028.service - OpenSSH per-connection server daemon (10.0.0.1:36028).
Sep 12 00:21:10.265394 sshd[6417]: Accepted publickey for core from 10.0.0.1 port 36028 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:10.267788 sshd-session[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:10.273451 systemd-logind[1577]: New session 36 of user core.
Sep 12 00:21:10.281334 systemd[1]: Started session-36.scope - Session 36 of User core.
Sep 12 00:21:10.437410 sshd[6420]: Connection closed by 10.0.0.1 port 36028
Sep 12 00:21:10.437754 sshd-session[6417]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:10.443338 systemd[1]: sshd@35-10.0.0.88:22-10.0.0.1:36028.service: Deactivated successfully.
Sep 12 00:21:10.445605 systemd[1]: session-36.scope: Deactivated successfully.
Sep 12 00:21:10.446552 systemd-logind[1577]: Session 36 logged out. Waiting for processes to exit.
Sep 12 00:21:10.448016 systemd-logind[1577]: Removed session 36.
Sep 12 00:21:13.129469 containerd[1593]: time="2025-09-12T00:21:13.129416928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\" id:\"1df7615f76af5914f74620131a48507d437b216f5e58c59043f689ee299cceae\" pid:6449 exited_at:{seconds:1757636473 nanos:128992521}"
Sep 12 00:21:15.454463 systemd[1]: Started sshd@36-10.0.0.88:22-10.0.0.1:36034.service - OpenSSH per-connection server daemon (10.0.0.1:36034).
Sep 12 00:21:15.516467 sshd[6463]: Accepted publickey for core from 10.0.0.1 port 36034 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:15.518555 sshd-session[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:15.523647 systemd-logind[1577]: New session 37 of user core.
Sep 12 00:21:15.530333 systemd[1]: Started session-37.scope - Session 37 of User core.
Sep 12 00:21:15.652868 sshd[6466]: Connection closed by 10.0.0.1 port 36034
Sep 12 00:21:15.653296 sshd-session[6463]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:15.658210 systemd[1]: sshd@36-10.0.0.88:22-10.0.0.1:36034.service: Deactivated successfully.
Sep 12 00:21:15.660764 systemd[1]: session-37.scope: Deactivated successfully.
Sep 12 00:21:15.661634 systemd-logind[1577]: Session 37 logged out. Waiting for processes to exit.
Sep 12 00:21:15.663029 systemd-logind[1577]: Removed session 37.
Sep 12 00:21:16.945710 containerd[1593]: time="2025-09-12T00:21:16.945652869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\" id:\"0f2f921af32e74158de18432f1b23d79de31cd19ec7f99e8b8b34f5257cf81b2\" pid:6490 exit_status:1 exited_at:{seconds:1757636476 nanos:945256015}"
Sep 12 00:21:20.672569 systemd[1]: Started sshd@37-10.0.0.88:22-10.0.0.1:55626.service - OpenSSH per-connection server daemon (10.0.0.1:55626).
Sep 12 00:21:20.795897 sshd[6504]: Accepted publickey for core from 10.0.0.1 port 55626 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:20.798294 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:20.804173 systemd-logind[1577]: New session 38 of user core.
Sep 12 00:21:20.810611 systemd[1]: Started session-38.scope - Session 38 of User core.
Sep 12 00:21:21.073890 sshd[6507]: Connection closed by 10.0.0.1 port 55626
Sep 12 00:21:21.074482 sshd-session[6504]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:21.080522 systemd[1]: sshd@37-10.0.0.88:22-10.0.0.1:55626.service: Deactivated successfully.
Sep 12 00:21:21.082985 systemd[1]: session-38.scope: Deactivated successfully.
Sep 12 00:21:21.084084 systemd-logind[1577]: Session 38 logged out. Waiting for processes to exit.
Sep 12 00:21:21.085600 systemd-logind[1577]: Removed session 38.
Sep 12 00:21:22.197732 kubelet[2742]: E0912 00:21:22.197661 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:26.087547 systemd[1]: Started sshd@38-10.0.0.88:22-10.0.0.1:55636.service - OpenSSH per-connection server daemon (10.0.0.1:55636).
Sep 12 00:21:26.177194 sshd[6520]: Accepted publickey for core from 10.0.0.1 port 55636 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:26.185299 sshd-session[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:26.259788 systemd-logind[1577]: New session 39 of user core.
Sep 12 00:21:26.276426 systemd[1]: Started session-39.scope - Session 39 of User core.
Sep 12 00:21:26.518938 sshd[6523]: Connection closed by 10.0.0.1 port 55636
Sep 12 00:21:26.520532 sshd-session[6520]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:26.536365 systemd[1]: sshd@38-10.0.0.88:22-10.0.0.1:55636.service: Deactivated successfully.
Sep 12 00:21:26.545734 systemd[1]: session-39.scope: Deactivated successfully.
Sep 12 00:21:26.550296 systemd-logind[1577]: Session 39 logged out. Waiting for processes to exit.
Sep 12 00:21:26.557030 systemd-logind[1577]: Removed session 39.
Sep 12 00:21:27.199139 kubelet[2742]: E0912 00:21:27.198331 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:28.197941 kubelet[2742]: E0912 00:21:28.197878 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:31.530426 systemd[1]: Started sshd@39-10.0.0.88:22-10.0.0.1:43698.service - OpenSSH per-connection server daemon (10.0.0.1:43698).
Sep 12 00:21:31.588054 sshd[6542]: Accepted publickey for core from 10.0.0.1 port 43698 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:31.589810 sshd-session[6542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:31.594198 systemd-logind[1577]: New session 40 of user core.
Sep 12 00:21:31.608248 systemd[1]: Started session-40.scope - Session 40 of User core.
Sep 12 00:21:31.722585 sshd[6545]: Connection closed by 10.0.0.1 port 43698
Sep 12 00:21:31.722964 sshd-session[6542]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:31.737067 systemd[1]: sshd@39-10.0.0.88:22-10.0.0.1:43698.service: Deactivated successfully.
Sep 12 00:21:31.739229 systemd[1]: session-40.scope: Deactivated successfully.
Sep 12 00:21:31.740241 systemd-logind[1577]: Session 40 logged out. Waiting for processes to exit.
Sep 12 00:21:31.744407 systemd[1]: Started sshd@40-10.0.0.88:22-10.0.0.1:43706.service - OpenSSH per-connection server daemon (10.0.0.1:43706).
Sep 12 00:21:31.745342 systemd-logind[1577]: Removed session 40.
Sep 12 00:21:31.813203 sshd[6558]: Accepted publickey for core from 10.0.0.1 port 43706 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:31.814836 sshd-session[6558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:31.820568 systemd-logind[1577]: New session 41 of user core.
Sep 12 00:21:31.831305 systemd[1]: Started session-41.scope - Session 41 of User core.
Sep 12 00:21:32.211401 sshd[6561]: Connection closed by 10.0.0.1 port 43706
Sep 12 00:21:32.211858 sshd-session[6558]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:32.223342 systemd[1]: sshd@40-10.0.0.88:22-10.0.0.1:43706.service: Deactivated successfully.
Sep 12 00:21:32.226065 systemd[1]: session-41.scope: Deactivated successfully.
Sep 12 00:21:32.227285 systemd-logind[1577]: Session 41 logged out. Waiting for processes to exit.
Sep 12 00:21:32.231689 systemd[1]: Started sshd@41-10.0.0.88:22-10.0.0.1:43718.service - OpenSSH per-connection server daemon (10.0.0.1:43718).
Sep 12 00:21:32.232679 systemd-logind[1577]: Removed session 41.
Sep 12 00:21:32.316276 sshd[6573]: Accepted publickey for core from 10.0.0.1 port 43718 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:32.318309 sshd-session[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:32.324068 systemd-logind[1577]: New session 42 of user core.
Sep 12 00:21:32.335352 systemd[1]: Started session-42.scope - Session 42 of User core.
Sep 12 00:21:32.979450 sshd[6576]: Connection closed by 10.0.0.1 port 43718
Sep 12 00:21:32.980330 sshd-session[6573]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:32.990929 systemd[1]: sshd@41-10.0.0.88:22-10.0.0.1:43718.service: Deactivated successfully.
Sep 12 00:21:32.996678 systemd[1]: session-42.scope: Deactivated successfully.
Sep 12 00:21:32.998653 systemd-logind[1577]: Session 42 logged out. Waiting for processes to exit.
Sep 12 00:21:33.003547 systemd[1]: Started sshd@42-10.0.0.88:22-10.0.0.1:43720.service - OpenSSH per-connection server daemon (10.0.0.1:43720).
Sep 12 00:21:33.005358 systemd-logind[1577]: Removed session 42.
Sep 12 00:21:33.065019 sshd[6597]: Accepted publickey for core from 10.0.0.1 port 43720 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:33.066973 sshd-session[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:33.071889 systemd-logind[1577]: New session 43 of user core.
Sep 12 00:21:33.084281 systemd[1]: Started session-43.scope - Session 43 of User core.
Sep 12 00:21:33.492542 sshd[6600]: Connection closed by 10.0.0.1 port 43720
Sep 12 00:21:33.492931 sshd-session[6597]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:33.504514 systemd[1]: sshd@42-10.0.0.88:22-10.0.0.1:43720.service: Deactivated successfully.
Sep 12 00:21:33.506769 systemd[1]: session-43.scope: Deactivated successfully.
Sep 12 00:21:33.507723 systemd-logind[1577]: Session 43 logged out. Waiting for processes to exit.
Sep 12 00:21:33.511694 systemd[1]: Started sshd@43-10.0.0.88:22-10.0.0.1:43732.service - OpenSSH per-connection server daemon (10.0.0.1:43732).
Sep 12 00:21:33.513231 systemd-logind[1577]: Removed session 43.
Sep 12 00:21:33.577566 sshd[6611]: Accepted publickey for core from 10.0.0.1 port 43732 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:33.579339 sshd-session[6611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:33.584619 systemd-logind[1577]: New session 44 of user core.
Sep 12 00:21:33.594343 systemd[1]: Started session-44.scope - Session 44 of User core.
Sep 12 00:21:33.722683 sshd[6614]: Connection closed by 10.0.0.1 port 43732
Sep 12 00:21:33.723172 sshd-session[6611]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:33.731056 systemd[1]: sshd@43-10.0.0.88:22-10.0.0.1:43732.service: Deactivated successfully.
Sep 12 00:21:33.733851 systemd[1]: session-44.scope: Deactivated successfully.
Sep 12 00:21:33.734962 systemd-logind[1577]: Session 44 logged out. Waiting for processes to exit.
Sep 12 00:21:33.737542 systemd-logind[1577]: Removed session 44.
Sep 12 00:21:34.198311 kubelet[2742]: E0912 00:21:34.198264 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:38.078384 containerd[1593]: time="2025-09-12T00:21:38.078328319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\" id:\"f48e65d3954eb01996a909a65a0863fdacce1a641959b62b9d981e4439e07b8b\" pid:6639 exited_at:{seconds:1757636498 nanos:77846106}"
Sep 12 00:21:38.739440 systemd[1]: Started sshd@44-10.0.0.88:22-10.0.0.1:43742.service - OpenSSH per-connection server daemon (10.0.0.1:43742).
Sep 12 00:21:38.795369 sshd[6649]: Accepted publickey for core from 10.0.0.1 port 43742 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:38.797217 sshd-session[6649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:38.801978 systemd-logind[1577]: New session 45 of user core.
Sep 12 00:21:38.812313 systemd[1]: Started session-45.scope - Session 45 of User core.
Sep 12 00:21:38.940674 sshd[6652]: Connection closed by 10.0.0.1 port 43742
Sep 12 00:21:38.941040 sshd-session[6649]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:38.946171 systemd[1]: sshd@44-10.0.0.88:22-10.0.0.1:43742.service: Deactivated successfully.
Sep 12 00:21:38.948444 systemd[1]: session-45.scope: Deactivated successfully.
Sep 12 00:21:38.949243 systemd-logind[1577]: Session 45 logged out. Waiting for processes to exit.
Sep 12 00:21:38.951312 systemd-logind[1577]: Removed session 45.
Sep 12 00:21:39.198859 kubelet[2742]: E0912 00:21:39.198780 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:43.141686 containerd[1593]: time="2025-09-12T00:21:43.141627869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba8dbf03e9e3dc026f9fb1d750ae239651c47369c0d0a181843980ffe4e84404\" id:\"75139d47888300d07c89b699876bae3768dfc8078a339698118b67164a174ca2\" pid:6680 exited_at:{seconds:1757636503 nanos:141281176}"
Sep 12 00:21:43.954674 systemd[1]: Started sshd@45-10.0.0.88:22-10.0.0.1:53882.service - OpenSSH per-connection server daemon (10.0.0.1:53882).
Sep 12 00:21:44.012873 sshd[6692]: Accepted publickey for core from 10.0.0.1 port 53882 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:44.014728 sshd-session[6692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:44.019130 systemd-logind[1577]: New session 46 of user core.
Sep 12 00:21:44.028348 systemd[1]: Started session-46.scope - Session 46 of User core.
Sep 12 00:21:44.172192 sshd[6695]: Connection closed by 10.0.0.1 port 53882
Sep 12 00:21:44.172679 sshd-session[6692]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:44.178903 systemd[1]: sshd@45-10.0.0.88:22-10.0.0.1:53882.service: Deactivated successfully.
Sep 12 00:21:44.181322 systemd[1]: session-46.scope: Deactivated successfully.
Sep 12 00:21:44.182329 systemd-logind[1577]: Session 46 logged out. Waiting for processes to exit.
Sep 12 00:21:44.184299 systemd-logind[1577]: Removed session 46.
Sep 12 00:21:46.922164 containerd[1593]: time="2025-09-12T00:21:46.922090061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b9ac2f8a9c8b7b971c760d6740ecd9cf5799c5caec6de9cac7c471c84333fa3\" id:\"be07f3cfea2167bbf0f04d866c38b6f991224a5ef290138177ae2f649fd841ef\" pid:6719 exited_at:{seconds:1757636506 nanos:921711698}"
Sep 12 00:21:49.185164 systemd[1]: Started sshd@46-10.0.0.88:22-10.0.0.1:53898.service - OpenSSH per-connection server daemon (10.0.0.1:53898).
Sep 12 00:21:49.253593 sshd[6733]: Accepted publickey for core from 10.0.0.1 port 53898 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:49.255310 sshd-session[6733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:49.259941 systemd-logind[1577]: New session 47 of user core.
Sep 12 00:21:49.265227 systemd[1]: Started session-47.scope - Session 47 of User core.
Sep 12 00:21:49.383625 sshd[6736]: Connection closed by 10.0.0.1 port 53898
Sep 12 00:21:49.385019 sshd-session[6733]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:49.390326 systemd[1]: sshd@46-10.0.0.88:22-10.0.0.1:53898.service: Deactivated successfully.
Sep 12 00:21:49.392867 systemd[1]: session-47.scope: Deactivated successfully.
Sep 12 00:21:49.393770 systemd-logind[1577]: Session 47 logged out. Waiting for processes to exit.
Sep 12 00:21:49.395063 systemd-logind[1577]: Removed session 47.
Sep 12 00:21:51.198671 kubelet[2742]: E0912 00:21:51.198596 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:54.402030 systemd[1]: Started sshd@47-10.0.0.88:22-10.0.0.1:38804.service - OpenSSH per-connection server daemon (10.0.0.1:38804).
Sep 12 00:21:54.472845 sshd[6762]: Accepted publickey for core from 10.0.0.1 port 38804 ssh2: RSA SHA256:hjy8mXSQ+/WB783B55QBOlAcn0PbC3w2/KN+ZZuJgDA
Sep 12 00:21:54.475066 sshd-session[6762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:54.480247 systemd-logind[1577]: New session 48 of user core.
Sep 12 00:21:54.488469 systemd[1]: Started session-48.scope - Session 48 of User core.
Sep 12 00:21:54.705719 sshd[6765]: Connection closed by 10.0.0.1 port 38804
Sep 12 00:21:54.708069 sshd-session[6762]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:54.714143 systemd-logind[1577]: Session 48 logged out. Waiting for processes to exit.
Sep 12 00:21:54.714524 systemd[1]: sshd@47-10.0.0.88:22-10.0.0.1:38804.service: Deactivated successfully.
Sep 12 00:21:54.716717 systemd[1]: session-48.scope: Deactivated successfully.
Sep 12 00:21:54.718923 systemd-logind[1577]: Removed session 48.
Sep 12 00:21:54.968748 containerd[1593]: time="2025-09-12T00:21:54.968610005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de28e58a8e1224db2a960cf3742f6c8071f167ee978ebca42b6e4e0cbda5682\" id:\"1fcebc622f50efbf9f017672bdd3d1526abf7b639fe0a0deb3b9855976fbb360\" pid:6790 exited_at:{seconds:1757636514 nanos:968250479}"