Sep 11 00:16:22.871982 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:15:45 -00 2025
Sep 11 00:16:22.872005 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=20820f07706ad5590d38fe5324b9055d59a89dc1109fdc449cad1a53209b9dbd
Sep 11 00:16:22.872018 kernel: BIOS-provided physical RAM map:
Sep 11 00:16:22.872032 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 11 00:16:22.872042 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 11 00:16:22.872051 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 11 00:16:22.872061 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 11 00:16:22.872070 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 11 00:16:22.872084 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 11 00:16:22.872098 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 11 00:16:22.872107 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 11 00:16:22.872116 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 11 00:16:22.872124 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 11 00:16:22.872136 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 11 00:16:22.872152 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 11 00:16:22.872167 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 11 00:16:22.872218 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 11 00:16:22.872254 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 11 00:16:22.872275 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 11 00:16:22.872295 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 11 00:16:22.872324 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 11 00:16:22.872334 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 11 00:16:22.872343 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 11 00:16:22.872352 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:16:22.872361 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 11 00:16:22.872387 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:16:22.872398 kernel: NX (Execute Disable) protection: active
Sep 11 00:16:22.872426 kernel: APIC: Static calls initialized
Sep 11 00:16:22.872436 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 11 00:16:22.872445 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 11 00:16:22.872455 kernel: extended physical RAM map:
Sep 11 00:16:22.872464 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 11 00:16:22.872473 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 11 00:16:22.872482 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 11 00:16:22.872492 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 11 00:16:22.872501 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 11 00:16:22.872515 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 11 00:16:22.872524 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 11 00:16:22.872534 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 11 00:16:22.872542 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 11 00:16:22.872553 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 11 00:16:22.872569 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 11 00:16:22.872579 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 11 00:16:22.872586 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 11 00:16:22.872594 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 11 00:16:22.872601 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 11 00:16:22.872609 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 11 00:16:22.872616 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 11 00:16:22.872624 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 11 00:16:22.872632 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 11 00:16:22.872640 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 11 00:16:22.872648 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 11 00:16:22.872657 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 11 00:16:22.872665 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 11 00:16:22.872672 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 11 00:16:22.872680 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:16:22.872687 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 11 00:16:22.872694 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:16:22.872706 kernel: efi: EFI v2.7 by EDK II
Sep 11 00:16:22.872713 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 11 00:16:22.872721 kernel: random: crng init done
Sep 11 00:16:22.872731 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 11 00:16:22.872738 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 11 00:16:22.872750 kernel: secureboot: Secure boot disabled
Sep 11 00:16:22.872757 kernel: SMBIOS 2.8 present.
Sep 11 00:16:22.872765 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 11 00:16:22.872773 kernel: DMI: Memory slots populated: 1/1
Sep 11 00:16:22.872780 kernel: Hypervisor detected: KVM
Sep 11 00:16:22.872787 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 11 00:16:22.872795 kernel: kvm-clock: using sched offset of 5798903445 cycles
Sep 11 00:16:22.872803 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 11 00:16:22.872811 kernel: tsc: Detected 2794.748 MHz processor
Sep 11 00:16:22.872819 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 11 00:16:22.872827 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 11 00:16:22.872836 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 11 00:16:22.872844 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 11 00:16:22.872852 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 11 00:16:22.872859 kernel: Using GB pages for direct mapping
Sep 11 00:16:22.872867 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:16:22.872875 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 11 00:16:22.872883 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 00:16:22.872890 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:16:22.872898 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:16:22.872908 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 11 00:16:22.872916 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:16:22.872923 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:16:22.872931 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:16:22.872939 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:16:22.872946 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 11 00:16:22.872954 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 11 00:16:22.872962 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 11 00:16:22.872971 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 11 00:16:22.872979 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 11 00:16:22.872986 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 11 00:16:22.872994 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 11 00:16:22.873002 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 11 00:16:22.873009 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 11 00:16:22.873017 kernel: No NUMA configuration found
Sep 11 00:16:22.873024 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 11 00:16:22.873032 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 11 00:16:22.873039 kernel: Zone ranges:
Sep 11 00:16:22.873049 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 11 00:16:22.873057 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 11 00:16:22.873064 kernel: Normal empty
Sep 11 00:16:22.873072 kernel: Device empty
Sep 11 00:16:22.873079 kernel: Movable zone start for each node
Sep 11 00:16:22.873087 kernel: Early memory node ranges
Sep 11 00:16:22.873094 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 11 00:16:22.873104 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 11 00:16:22.873124 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 11 00:16:22.873140 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 11 00:16:22.873148 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 11 00:16:22.873155 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 11 00:16:22.873163 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 11 00:16:22.873170 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 11 00:16:22.873178 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 11 00:16:22.873186 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:16:22.873196 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 11 00:16:22.873229 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 11 00:16:22.873237 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:16:22.873245 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 11 00:16:22.873253 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 11 00:16:22.873263 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 11 00:16:22.873271 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 11 00:16:22.873279 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 11 00:16:22.873287 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 11 00:16:22.873295 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 11 00:16:22.873305 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 11 00:16:22.873313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 11 00:16:22.873321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 11 00:16:22.873329 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 11 00:16:22.873337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 11 00:16:22.873345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 11 00:16:22.873353 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 11 00:16:22.873361 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 11 00:16:22.873369 kernel: TSC deadline timer available
Sep 11 00:16:22.873378 kernel: CPU topo: Max. logical packages: 1
Sep 11 00:16:22.873391 kernel: CPU topo: Max. logical dies: 1
Sep 11 00:16:22.873399 kernel: CPU topo: Max. dies per package: 1
Sep 11 00:16:22.873410 kernel: CPU topo: Max. threads per core: 1
Sep 11 00:16:22.873430 kernel: CPU topo: Num. cores per package: 4
Sep 11 00:16:22.873445 kernel: CPU topo: Num. threads per package: 4
Sep 11 00:16:22.873460 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 11 00:16:22.873468 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 11 00:16:22.873476 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 11 00:16:22.873486 kernel: kvm-guest: setup PV sched yield
Sep 11 00:16:22.873494 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 11 00:16:22.873502 kernel: Booting paravirtualized kernel on KVM
Sep 11 00:16:22.873510 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 11 00:16:22.873518 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 11 00:16:22.873526 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 11 00:16:22.873534 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 11 00:16:22.873542 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 11 00:16:22.873550 kernel: kvm-guest: PV spinlocks enabled
Sep 11 00:16:22.873567 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 11 00:16:22.873577 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=20820f07706ad5590d38fe5324b9055d59a89dc1109fdc449cad1a53209b9dbd
Sep 11 00:16:22.873588 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:16:22.873596 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 00:16:22.873604 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 00:16:22.873612 kernel: Fallback order for Node 0: 0
Sep 11 00:16:22.873620 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 11 00:16:22.873628 kernel: Policy zone: DMA32
Sep 11 00:16:22.873637 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:16:22.873649 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 00:16:22.873660 kernel: ftrace: allocating 40106 entries in 157 pages
Sep 11 00:16:22.873671 kernel: ftrace: allocated 157 pages with 5 groups
Sep 11 00:16:22.873682 kernel: Dynamic Preempt: voluntary
Sep 11 00:16:22.873692 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:16:22.873704 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:16:22.873712 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 00:16:22.873732 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:16:22.873742 kernel: Rude variant of Tasks RCU enabled.
Sep 11 00:16:22.873757 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:16:22.873780 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:16:22.873794 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 00:16:22.873806 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:16:22.873817 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:16:22.873827 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:16:22.873838 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 11 00:16:22.873849 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 00:16:22.873860 kernel: Console: colour dummy device 80x25
Sep 11 00:16:22.873876 kernel: printk: legacy console [ttyS0] enabled
Sep 11 00:16:22.873887 kernel: ACPI: Core revision 20240827
Sep 11 00:16:22.873898 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 11 00:16:22.873909 kernel: APIC: Switch to symmetric I/O mode setup
Sep 11 00:16:22.873919 kernel: x2apic enabled
Sep 11 00:16:22.873930 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 11 00:16:22.873941 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 11 00:16:22.873952 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 11 00:16:22.873961 kernel: kvm-guest: setup PV IPIs
Sep 11 00:16:22.873972 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 11 00:16:22.873980 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 11 00:16:22.873989 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 11 00:16:22.873997 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 11 00:16:22.874005 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 11 00:16:22.874013 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 11 00:16:22.874021 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 11 00:16:22.874029 kernel: Spectre V2 : Mitigation: Retpolines
Sep 11 00:16:22.874039 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 11 00:16:22.874047 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 11 00:16:22.874055 kernel: active return thunk: retbleed_return_thunk
Sep 11 00:16:22.874063 kernel: RETBleed: Mitigation: untrained return thunk
Sep 11 00:16:22.874075 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 11 00:16:22.874084 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 11 00:16:22.874092 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 11 00:16:22.874103 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 11 00:16:22.874113 kernel: active return thunk: srso_return_thunk
Sep 11 00:16:22.874127 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 11 00:16:22.874137 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 11 00:16:22.874148 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 11 00:16:22.874156 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 11 00:16:22.874164 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 11 00:16:22.874172 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 11 00:16:22.874180 kernel: Freeing SMP alternatives memory: 32K
Sep 11 00:16:22.874188 kernel: pid_max: default: 32768 minimum: 301
Sep 11 00:16:22.874195 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 00:16:22.874220 kernel: landlock: Up and running.
Sep 11 00:16:22.874227 kernel: SELinux: Initializing.
Sep 11 00:16:22.874236 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:16:22.874244 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:16:22.874252 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 11 00:16:22.874260 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 11 00:16:22.874268 kernel: ... version: 0
Sep 11 00:16:22.874276 kernel: ... bit width: 48
Sep 11 00:16:22.874284 kernel: ... generic registers: 6
Sep 11 00:16:22.874295 kernel: ... value mask: 0000ffffffffffff
Sep 11 00:16:22.874303 kernel: ... max period: 00007fffffffffff
Sep 11 00:16:22.874313 kernel: ... fixed-purpose events: 0
Sep 11 00:16:22.874329 kernel: ... event mask: 000000000000003f
Sep 11 00:16:22.874339 kernel: signal: max sigframe size: 1776
Sep 11 00:16:22.874349 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 00:16:22.874359 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 00:16:22.874374 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 00:16:22.874385 kernel: smp: Bringing up secondary CPUs ...
Sep 11 00:16:22.874400 kernel: smpboot: x86: Booting SMP configuration:
Sep 11 00:16:22.874410 kernel: .... node #0, CPUs: #1 #2 #3
Sep 11 00:16:22.874420 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 00:16:22.874431 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 11 00:16:22.874442 kernel: Memory: 2422672K/2565800K available (14336K kernel code, 2429K rwdata, 9960K rodata, 54036K init, 2932K bss, 137196K reserved, 0K cma-reserved)
Sep 11 00:16:22.874451 kernel: devtmpfs: initialized
Sep 11 00:16:22.874459 kernel: x86/mm: Memory block size: 128MB
Sep 11 00:16:22.874467 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 11 00:16:22.874475 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 11 00:16:22.874486 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 11 00:16:22.874495 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 11 00:16:22.874502 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 11 00:16:22.874510 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 11 00:16:22.874518 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 00:16:22.874526 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 00:16:22.874534 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 00:16:22.874543 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 00:16:22.874551 kernel: audit: initializing netlink subsys (disabled)
Sep 11 00:16:22.874571 kernel: audit: type=2000 audit(1757549779.231:1): state=initialized audit_enabled=0 res=1
Sep 11 00:16:22.874579 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 00:16:22.874588 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 11 00:16:22.874595 kernel: cpuidle: using governor menu
Sep 11 00:16:22.874603 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 00:16:22.874611 kernel: dca service started, version 1.12.1
Sep 11 00:16:22.874619 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 11 00:16:22.874627 kernel: PCI: Using configuration type 1 for base access
Sep 11 00:16:22.874638 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 11 00:16:22.874646 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 00:16:22.874654 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 00:16:22.874662 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:16:22.874670 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:16:22.874678 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:16:22.874685 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:16:22.874693 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:16:22.874701 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:16:22.874712 kernel: ACPI: Interpreter enabled
Sep 11 00:16:22.874720 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 11 00:16:22.874727 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 11 00:16:22.874736 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 11 00:16:22.874744 kernel: PCI: Using E820 reservations for host bridge windows
Sep 11 00:16:22.874751 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 11 00:16:22.874760 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 00:16:22.875015 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 00:16:22.875235 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 11 00:16:22.875414 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 11 00:16:22.875428 kernel: PCI host bridge to bus 0000:00
Sep 11 00:16:22.875704 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 11 00:16:22.875868 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 11 00:16:22.876025 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 11 00:16:22.876188 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 11 00:16:22.876380 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 11 00:16:22.878106 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 11 00:16:22.878303 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 00:16:22.878526 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 11 00:16:22.878739 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 11 00:16:22.878914 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 11 00:16:22.879100 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 11 00:16:22.879318 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 11 00:16:22.879493 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 11 00:16:22.879707 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 00:16:22.879871 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 11 00:16:22.880004 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 11 00:16:22.880176 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 11 00:16:22.880350 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 11 00:16:22.880483 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 11 00:16:22.880626 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 11 00:16:22.880784 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 11 00:16:22.880978 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 11 00:16:22.881157 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 11 00:16:22.881415 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 11 00:16:22.881611 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 11 00:16:22.881786 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 11 00:16:22.881981 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 11 00:16:22.882155 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 11 00:16:22.882387 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 11 00:16:22.882571 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 11 00:16:22.882745 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 11 00:16:22.882984 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 11 00:16:22.883161 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 11 00:16:22.883180 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 11 00:16:22.883192 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 11 00:16:22.883225 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 11 00:16:22.883237 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 11 00:16:22.883249 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 11 00:16:22.883260 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 11 00:16:22.883278 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 11 00:16:22.883289 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 11 00:16:22.883300 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 11 00:16:22.883311 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 11 00:16:22.883323 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 11 00:16:22.883333 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 11 00:16:22.883345 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 11 00:16:22.883357 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 11 00:16:22.883368 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 11 00:16:22.883383 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 11 00:16:22.883395 kernel: iommu: Default domain type: Translated
Sep 11 00:16:22.883407 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 11 00:16:22.883419 kernel: efivars: Registered efivars operations
Sep 11 00:16:22.883430 kernel: PCI: Using ACPI for IRQ routing
Sep 11 00:16:22.883442 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 11 00:16:22.883454 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 11 00:16:22.883465 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 11 00:16:22.883477 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 11 00:16:22.883491 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 11 00:16:22.883502 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 11 00:16:22.883513 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 11 00:16:22.883525 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 11 00:16:22.883537 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 11 00:16:22.883720 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 11 00:16:22.883895 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 11 00:16:22.884066 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 11 00:16:22.884089 kernel: vgaarb: loaded
Sep 11 00:16:22.884101 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 11 00:16:22.884111 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 11 00:16:22.884122 kernel: clocksource: Switched to clocksource kvm-clock
Sep 11 00:16:22.884134 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 00:16:22.884145 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 00:16:22.884157 kernel: pnp: PnP ACPI init
Sep 11 00:16:22.884394 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 11 00:16:22.884422 kernel: pnp: PnP ACPI: found 6 devices
Sep 11 00:16:22.884434 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 11 00:16:22.884446 kernel: NET: Registered PF_INET protocol family
Sep 11 00:16:22.884458 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 00:16:22.884470 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 00:16:22.884483 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 00:16:22.884495 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 00:16:22.884506 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 00:16:22.884518 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 00:16:22.884534 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:16:22.884546 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:16:22.884569 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 00:16:22.884581 kernel: NET: Registered PF_XDP protocol family
Sep 11 00:16:22.884755 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 11 00:16:22.884928 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 11 00:16:22.885090 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 11 00:16:22.885287 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 11 00:16:22.885452 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 11 00:16:22.885623 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 11 00:16:22.885787 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 11 00:16:22.885945 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 11 00:16:22.885964 kernel: PCI: CLS 0 bytes, default 64
Sep 11 00:16:22.885977 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 11 00:16:22.885989 kernel: Initialise system trusted keyrings
Sep 11 00:16:22.886006 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 00:16:22.886018 kernel: Key type asymmetric registered
Sep 11 00:16:22.886030 kernel: Asymmetric key parser 'x509' registered
Sep 11 00:16:22.886041 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 11 00:16:22.886054 kernel: io scheduler mq-deadline registered
Sep 11 00:16:22.886065 kernel: io scheduler kyber registered
Sep 11 00:16:22.886077 kernel: io scheduler bfq registered
Sep 11 00:16:22.886092 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 11 00:16:22.886105 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 11 00:16:22.886117 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 11 00:16:22.886129 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 11 00:16:22.886141 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:16:22.886153 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:16:22.886165 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 11 00:16:22.886178 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 11 00:16:22.886189 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 11 00:16:22.886401 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 11 00:16:22.886422 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 11 00:16:22.886594 kernel: rtc_cmos 00:04: registered as rtc0
Sep 11 00:16:22.886760 kernel: rtc_cmos 00:04: setting system clock to 2025-09-11T00:16:22 UTC (1757549782)
Sep 11 00:16:22.886924 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 11 00:16:22.886943 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 11 00:16:22.886955 kernel: efifb: probing for efifb
Sep 11 00:16:22.886967 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 11 00:16:22.886984 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 11 00:16:22.886996 kernel: efifb: scrolling: redraw
Sep 11 00:16:22.887008 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 11 00:16:22.887020 kernel: Console: switching to colour frame buffer device 160x50
Sep 11 00:16:22.887032 kernel: fb0: EFI VGA frame buffer device
Sep 11 00:16:22.887044 kernel: pstore: Using crash dump compression: deflate
Sep 11 00:16:22.887056 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 11 00:16:22.887067 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:16:22.887079 kernel: Segment Routing with IPv6
Sep 11 00:16:22.887095 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:16:22.887107 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:16:22.887118 kernel: Key type dns_resolver registered
Sep 11 00:16:22.887130 kernel: IPI shorthand broadcast: enabled
Sep 11 00:16:22.887141 kernel: sched_clock: Marking stable (3564003160, 173413364)->(3774092209, -36675685)
Sep 11 00:16:22.887153 kernel: registered taskstats version 1
Sep 11 00:16:22.887165 kernel: Loading compiled-in X.509 certificates
Sep 11 00:16:22.887176 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 941433bdd955e1c3aa4064827516bddd510466ee'
Sep 11 00:16:22.887188 kernel: Demotion targets for Node 0: null
Sep 11 00:16:22.887237 kernel: Key type .fscrypt registered
Sep 11
00:16:22.887251 kernel: Key type fscrypt-provisioning registered Sep 11 00:16:22.887262 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 11 00:16:22.887274 kernel: ima: Allocated hash algorithm: sha1 Sep 11 00:16:22.887286 kernel: ima: No architecture policies found Sep 11 00:16:22.887298 kernel: clk: Disabling unused clocks Sep 11 00:16:22.887310 kernel: Warning: unable to open an initial console. Sep 11 00:16:22.887322 kernel: Freeing unused kernel image (initmem) memory: 54036K Sep 11 00:16:22.887334 kernel: Write protecting the kernel read-only data: 24576k Sep 11 00:16:22.887350 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 11 00:16:22.887362 kernel: Run /init as init process Sep 11 00:16:22.887373 kernel: with arguments: Sep 11 00:16:22.887385 kernel: /init Sep 11 00:16:22.887397 kernel: with environment: Sep 11 00:16:22.887409 kernel: HOME=/ Sep 11 00:16:22.887421 kernel: TERM=linux Sep 11 00:16:22.887433 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 11 00:16:22.887446 systemd[1]: Successfully made /usr/ read-only. Sep 11 00:16:22.887466 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:16:22.887479 systemd[1]: Detected virtualization kvm. Sep 11 00:16:22.887492 systemd[1]: Detected architecture x86-64. Sep 11 00:16:22.887504 systemd[1]: Running in initrd. Sep 11 00:16:22.887516 systemd[1]: No hostname configured, using default hostname. Sep 11 00:16:22.887528 systemd[1]: Hostname set to . Sep 11 00:16:22.887540 systemd[1]: Initializing machine ID from VM UUID. Sep 11 00:16:22.887557 systemd[1]: Queued start job for default target initrd.target. 
Sep 11 00:16:22.887581 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:16:22.887594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:16:22.887607 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 00:16:22.887620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:16:22.887633 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 00:16:22.887646 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 00:16:22.887665 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 00:16:22.887678 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 00:16:22.887691 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:16:22.887703 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:16:22.887715 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:16:22.887728 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:16:22.887740 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:16:22.887753 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:16:22.887769 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:16:22.887782 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:16:22.887794 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 00:16:22.887807 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 00:16:22.887819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:16:22.887832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:16:22.887845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:16:22.887857 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:16:22.887870 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 00:16:22.887886 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:16:22.887902 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 00:16:22.887915 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 00:16:22.887928 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 00:16:22.887940 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:16:22.887953 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:16:22.887966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:16:22.887978 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 00:16:22.887998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:16:22.888011 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 00:16:22.888026 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:16:22.888039 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:16:22.888055 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:16:22.888102 systemd-journald[220]: Collecting audit messages is disabled.
Sep 11 00:16:22.888135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:16:22.888149 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 00:16:22.888166 systemd-journald[220]: Journal started
Sep 11 00:16:22.888197 systemd-journald[220]: Runtime Journal (/run/log/journal/8ce332ea1f8747abbf732acaa4a2fec7) is 6M, max 48.4M, 42.4M free.
Sep 11 00:16:22.873908 systemd-modules-load[221]: Inserted module 'overlay'
Sep 11 00:16:22.891074 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:16:22.904239 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 00:16:22.906799 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 11 00:16:22.907889 kernel: Bridge firewalling registered
Sep 11 00:16:22.907659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:16:22.908362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:16:22.913137 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:16:22.916363 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:16:22.928769 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:16:22.929147 systemd-tmpfiles[244]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 00:16:22.934963 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:16:22.937497 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:16:22.937849 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:16:22.943652 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 00:16:22.971218 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=20820f07706ad5590d38fe5324b9055d59a89dc1109fdc449cad1a53209b9dbd
Sep 11 00:16:22.990859 systemd-resolved[259]: Positive Trust Anchors:
Sep 11 00:16:22.990878 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:16:22.990908 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:16:22.994149 systemd-resolved[259]: Defaulting to hostname 'linux'.
Sep 11 00:16:22.996801 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:16:23.001076 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:16:23.113282 kernel: SCSI subsystem initialized
Sep 11 00:16:23.124256 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 00:16:23.136252 kernel: iscsi: registered transport (tcp)
Sep 11 00:16:23.163471 kernel: iscsi: registered transport (qla4xxx)
Sep 11 00:16:23.163565 kernel: QLogic iSCSI HBA Driver
Sep 11 00:16:23.187779 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:16:23.211960 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:16:23.215770 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:16:23.280499 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:16:23.285252 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 00:16:23.350268 kernel: raid6: avx2x4 gen() 29184 MB/s
Sep 11 00:16:23.367260 kernel: raid6: avx2x2 gen() 27001 MB/s
Sep 11 00:16:23.384527 kernel: raid6: avx2x1 gen() 22792 MB/s
Sep 11 00:16:23.384635 kernel: raid6: using algorithm avx2x4 gen() 29184 MB/s
Sep 11 00:16:23.402355 kernel: raid6: .... xor() 6982 MB/s, rmw enabled
Sep 11 00:16:23.402476 kernel: raid6: using avx2x2 recovery algorithm
Sep 11 00:16:23.429269 kernel: xor: automatically using best checksumming function avx
Sep 11 00:16:23.599247 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 00:16:23.607325 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:16:23.611072 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:16:23.643354 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Sep 11 00:16:23.649240 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:16:23.654451 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 00:16:23.686162 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Sep 11 00:16:23.717083 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:16:23.719572 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:16:23.816114 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:16:23.819107 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 00:16:23.852237 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 11 00:16:23.855327 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 11 00:16:23.858352 kernel: cryptd: max_cpu_qlen set to 1000
Sep 11 00:16:23.858377 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 00:16:23.858395 kernel: GPT:9289727 != 19775487
Sep 11 00:16:23.859655 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 00:16:23.859677 kernel: GPT:9289727 != 19775487
Sep 11 00:16:23.860842 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 00:16:23.860862 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:16:23.874290 kernel: AES CTR mode by8 optimization enabled
Sep 11 00:16:23.877232 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 11 00:16:23.901234 kernel: libata version 3.00 loaded.
Sep 11 00:16:23.909281 kernel: ahci 0000:00:1f.2: version 3.0
Sep 11 00:16:23.915271 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 11 00:16:23.918293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:16:23.919672 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 11 00:16:23.919842 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 11 00:16:23.919984 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 11 00:16:23.918691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:16:23.923484 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:16:23.928258 kernel: scsi host0: ahci
Sep 11 00:16:23.928440 kernel: scsi host1: ahci
Sep 11 00:16:23.928605 kernel: scsi host2: ahci
Sep 11 00:16:23.928779 kernel: scsi host3: ahci
Sep 11 00:16:23.928935 kernel: scsi host4: ahci
Sep 11 00:16:23.926125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:16:23.939953 kernel: scsi host5: ahci
Sep 11 00:16:23.940300 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Sep 11 00:16:23.940316 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Sep 11 00:16:23.940340 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Sep 11 00:16:23.940351 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Sep 11 00:16:23.940361 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Sep 11 00:16:23.940372 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Sep 11 00:16:23.931966 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:16:23.964195 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 00:16:23.964724 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:16:23.986673 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 00:16:24.008477 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:16:24.017000 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 00:16:24.018389 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 00:16:24.019773 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 00:16:24.049653 disk-uuid[636]: Primary Header is updated.
Sep 11 00:16:24.049653 disk-uuid[636]: Secondary Entries is updated.
Sep 11 00:16:24.049653 disk-uuid[636]: Secondary Header is updated.
Sep 11 00:16:24.053470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:16:24.058230 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:16:24.251311 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 11 00:16:24.251412 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 11 00:16:24.252243 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 11 00:16:24.253235 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 11 00:16:24.254237 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 11 00:16:24.255230 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 11 00:16:24.256253 kernel: ata3.00: LPM support broken, forcing max_power
Sep 11 00:16:24.257741 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 11 00:16:24.257770 kernel: ata3.00: applying bridge limits
Sep 11 00:16:24.259662 kernel: ata3.00: LPM support broken, forcing max_power
Sep 11 00:16:24.259691 kernel: ata3.00: configured for UDMA/100
Sep 11 00:16:24.260274 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 11 00:16:24.323413 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 11 00:16:24.323757 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 11 00:16:24.345245 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 11 00:16:24.774389 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:16:24.775004 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:16:24.778667 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:16:24.778992 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:16:24.784285 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 00:16:24.817286 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:16:25.061518 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:16:25.061588 disk-uuid[637]: The operation has completed successfully.
Sep 11 00:16:25.092545 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 00:16:25.092705 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 00:16:25.136968 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 00:16:25.169130 sh[666]: Success
Sep 11 00:16:25.188678 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 00:16:25.188761 kernel: device-mapper: uevent: version 1.0.3
Sep 11 00:16:25.189908 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 00:16:25.201239 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 11 00:16:25.231649 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 00:16:25.234006 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 00:16:25.253970 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 00:16:25.259240 kernel: BTRFS: device fsid 1d23f222-37c7-4ff5-813e-235ce83bed46 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (678)
Sep 11 00:16:25.261302 kernel: BTRFS info (device dm-0): first mount of filesystem 1d23f222-37c7-4ff5-813e-235ce83bed46
Sep 11 00:16:25.261332 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:16:25.266515 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 00:16:25.266541 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 00:16:25.268549 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 00:16:25.269237 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:16:25.271318 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 11 00:16:25.272574 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 00:16:25.275913 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 00:16:25.311767 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711)
Sep 11 00:16:25.311836 kernel: BTRFS info (device vda6): first mount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c
Sep 11 00:16:25.311849 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:16:25.316248 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:16:25.316288 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:16:25.321232 kernel: BTRFS info (device vda6): last unmount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c
Sep 11 00:16:25.322719 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 00:16:25.325596 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 11 00:16:25.441744 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:16:25.445485 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:16:25.459166 ignition[757]: Ignition 2.21.0
Sep 11 00:16:25.459212 ignition[757]: Stage: fetch-offline
Sep 11 00:16:25.459263 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:16:25.459277 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:16:25.459388 ignition[757]: parsed url from cmdline: ""
Sep 11 00:16:25.459393 ignition[757]: no config URL provided
Sep 11 00:16:25.459398 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 00:16:25.459408 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Sep 11 00:16:25.459434 ignition[757]: op(1): [started] loading QEMU firmware config module
Sep 11 00:16:25.459439 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 11 00:16:25.470132 ignition[757]: op(1): [finished] loading QEMU firmware config module
Sep 11 00:16:25.497177 systemd-networkd[853]: lo: Link UP
Sep 11 00:16:25.497189 systemd-networkd[853]: lo: Gained carrier
Sep 11 00:16:25.499980 systemd-networkd[853]: Enumeration completed
Sep 11 00:16:25.500321 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:16:25.500677 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:16:25.500681 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 00:16:25.503291 systemd[1]: Reached target network.target - Network.
Sep 11 00:16:25.503677 systemd-networkd[853]: eth0: Link UP
Sep 11 00:16:25.503845 systemd-networkd[853]: eth0: Gained carrier
Sep 11 00:16:25.503854 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:16:25.523610 ignition[757]: parsing config with SHA512: 09c34c8f5216b4050191edef23e2f7d70b1095b058851293faa4e0dac36e65259f85a84cf873e21cdbde0c5bcd28f08931ef6bde0f70a74fab3f97db57a0b195
Sep 11 00:16:25.526322 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 00:16:25.529021 unknown[757]: fetched base config from "system"
Sep 11 00:16:25.529038 unknown[757]: fetched user config from "qemu"
Sep 11 00:16:25.529519 ignition[757]: fetch-offline: fetch-offline passed
Sep 11 00:16:25.529595 ignition[757]: Ignition finished successfully
Sep 11 00:16:25.534041 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:16:25.534371 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 11 00:16:25.538169 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 11 00:16:25.592728 ignition[861]: Ignition 2.21.0
Sep 11 00:16:25.592744 ignition[861]: Stage: kargs
Sep 11 00:16:25.592903 ignition[861]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:16:25.592914 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:16:25.598344 ignition[861]: kargs: kargs passed
Sep 11 00:16:25.598457 ignition[861]: Ignition finished successfully
Sep 11 00:16:25.604148 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 11 00:16:25.607462 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 11 00:16:25.654029 ignition[869]: Ignition 2.21.0
Sep 11 00:16:25.654505 ignition[869]: Stage: disks
Sep 11 00:16:25.654798 ignition[869]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:16:25.654816 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:16:25.656155 ignition[869]: disks: disks passed
Sep 11 00:16:25.656229 ignition[869]: Ignition finished successfully
Sep 11 00:16:25.659559 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 11 00:16:25.661023 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 11 00:16:25.663060 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 11 00:16:25.665299 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:16:25.667669 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:16:25.668692 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:16:25.671896 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 11 00:16:25.704431 systemd-resolved[259]: Detected conflict on linux IN A 10.0.0.70
Sep 11 00:16:25.704452 systemd-resolved[259]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Sep 11 00:16:25.707452 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 11 00:16:26.018556 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 11 00:16:26.022638 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 11 00:16:26.171227 kernel: EXT4-fs (vda9): mounted filesystem 8ebc908f-0860-41e2-beed-287b778bd592 r/w with ordered data mode. Quota mode: none.
Sep 11 00:16:26.171744 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 11 00:16:26.173496 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:16:26.175419 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:16:26.177753 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 11 00:16:26.179975 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 11 00:16:26.180021 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 11 00:16:26.180060 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:16:26.211873 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887)
Sep 11 00:16:26.211905 kernel: BTRFS info (device vda6): first mount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c
Sep 11 00:16:26.211923 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:16:26.186543 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 11 00:16:26.215489 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:16:26.215521 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:16:26.204012 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 11 00:16:26.218163 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:16:26.252122 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory
Sep 11 00:16:26.258278 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory
Sep 11 00:16:26.263612 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory
Sep 11 00:16:26.268390 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 11 00:16:26.368659 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 11 00:16:26.371299 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 11 00:16:26.373221 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 11 00:16:26.395829 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 00:16:26.397451 kernel: BTRFS info (device vda6): last unmount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c
Sep 11 00:16:26.413400 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 11 00:16:26.431375 ignition[1001]: INFO : Ignition 2.21.0
Sep 11 00:16:26.431375 ignition[1001]: INFO : Stage: mount
Sep 11 00:16:26.450721 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:16:26.450721 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:16:26.450721 ignition[1001]: INFO : mount: mount passed
Sep 11 00:16:26.450721 ignition[1001]: INFO : Ignition finished successfully
Sep 11 00:16:26.454178 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 00:16:26.456809 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 00:16:26.483668 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:16:26.497225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015)
Sep 11 00:16:26.499387 kernel: BTRFS info (device vda6): first mount of filesystem dfd585e5-5346-4151-8d09-25f0fad7f81c
Sep 11 00:16:26.499412 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:16:26.502299 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:16:26.502328 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:16:26.504033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:16:26.534286 ignition[1032]: INFO : Ignition 2.21.0
Sep 11 00:16:26.534286 ignition[1032]: INFO : Stage: files
Sep 11 00:16:26.534286 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:16:26.534286 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:16:26.538708 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 00:16:26.538708 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 00:16:26.538708 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 00:16:26.543810 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 00:16:26.545411 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 00:16:26.546821 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 00:16:26.545873 unknown[1032]: wrote ssh authorized keys file for user: core
Sep 11 00:16:26.549479 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 11 00:16:26.549479 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 11 00:16:26.617918 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 00:16:26.845698 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 11 00:16:26.845698 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 00:16:26.850020 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 00:16:26.851706 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:16:26.853610 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:16:26.853606 systemd-networkd[853]: eth0: Gained IPv6LL
Sep 11 00:16:26.856376 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:16:26.856376 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:16:26.856376 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:16:26.856376 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:16:27.083054 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:16:27.086226 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:16:27.088377 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:16:27.294839 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:16:27.294839 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:16:27.300728 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 11 00:16:27.765044 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 00:16:28.503392 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:16:28.503392 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 00:16:28.507352 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:16:28.626378 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:16:28.626378 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 00:16:28.626378 ignition[1032]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 11 00:16:28.630816 ignition[1032]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:16:28.630816 ignition[1032]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:16:28.630816 ignition[1032]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 11 00:16:28.630816 ignition[1032]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:16:28.652806 ignition[1032]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:16:28.659087 ignition[1032]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:16:28.660931 ignition[1032]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:16:28.660931 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 00:16:28.664095 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 00:16:28.664095 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:16:28.664095 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:16:28.664095 ignition[1032]: INFO : files: files passed
Sep 11 00:16:28.664095 ignition[1032]: INFO : Ignition finished successfully
Sep 11 00:16:28.669054 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 00:16:28.672729 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 00:16:28.675665 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 00:16:28.700087 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 00:16:28.700249 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:16:28.704527 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 11 00:16:28.706246 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:16:28.709576 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:16:28.709576 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:16:28.706853 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:16:28.710156 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 00:16:28.713707 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 00:16:28.803169 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 00:16:28.803394 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 00:16:28.806143 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 00:16:28.808321 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 00:16:28.808658 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 00:16:28.813482 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 00:16:28.850875 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:16:28.853089 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 00:16:28.884876 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:16:28.886544 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:16:28.889622 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 00:16:28.891712 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 00:16:28.891912 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:16:28.896159 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 00:16:28.896414 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 00:16:28.898493 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 00:16:28.900548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:16:28.900948 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 00:16:28.901552 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:16:28.901931 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 00:16:28.902344 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:16:28.902903 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 00:16:28.903288 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 00:16:28.903850 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 00:16:28.904189 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 00:16:28.904351 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:16:28.924377 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:16:28.926357 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:16:28.929007 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 00:16:28.930347 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:16:28.931628 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 00:16:28.931813 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:16:28.937454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 00:16:28.937640 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:16:28.938832 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 00:16:28.940847 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 00:16:28.946302 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:16:28.947757 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 00:16:28.950516 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 00:16:28.951483 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 00:16:28.951631 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:16:28.954120 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 00:16:28.954252 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:16:28.956033 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 00:16:28.956268 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:16:28.958435 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 00:16:28.958556 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 00:16:28.960996 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 00:16:28.964381 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 00:16:28.967024 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 00:16:28.967186 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:16:28.968238 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 00:16:28.968345 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:16:28.978479 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 00:16:28.996612 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 00:16:29.024378 ignition[1087]: INFO : Ignition 2.21.0
Sep 11 00:16:29.024378 ignition[1087]: INFO : Stage: umount
Sep 11 00:16:29.028243 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:16:29.028243 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:16:29.030943 ignition[1087]: INFO : umount: umount passed
Sep 11 00:16:29.030943 ignition[1087]: INFO : Ignition finished successfully
Sep 11 00:16:29.029350 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 00:16:29.033550 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 00:16:29.033750 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 00:16:29.035691 systemd[1]: Stopped target network.target - Network.
Sep 11 00:16:29.036820 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 00:16:29.036904 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 00:16:29.041945 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 00:16:29.042065 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 00:16:29.046939 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 00:16:29.047033 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 00:16:29.049293 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 00:16:29.049344 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 00:16:29.052229 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 00:16:29.053717 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 00:16:29.071700 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 00:16:29.071917 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 00:16:29.077993 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 00:16:29.078431 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 00:16:29.079116 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 00:16:29.084145 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 00:16:29.085130 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 00:16:29.086090 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 00:16:29.086173 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:16:29.090126 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 00:16:29.093888 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 00:16:29.095091 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:16:29.096633 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 00:16:29.096688 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:16:29.100144 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 00:16:29.100216 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:16:29.102331 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 00:16:29.102386 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:16:29.105692 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:16:29.107941 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:16:29.108011 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:16:29.124146 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:16:29.131478 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:16:29.131967 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:16:29.132017 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:16:29.135763 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:16:29.135806 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:16:29.136882 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:16:29.136940 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:16:29.137839 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:16:29.137917 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:16:29.143909 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:16:29.143998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:16:29.151064 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:16:29.153528 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:16:29.153635 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:16:29.158441 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:16:29.158529 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:16:29.162349 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 11 00:16:29.162459 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:16:29.166483 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 00:16:29.166550 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:16:29.168746 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:16:29.168811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:16:29.173944 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 11 00:16:29.174091 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 11 00:16:29.174245 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 11 00:16:29.174356 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:16:29.174804 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:16:29.174958 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:16:29.184921 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:16:29.185083 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:16:29.219802 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:16:29.219965 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:16:29.222323 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:16:29.224304 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:16:29.224397 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:16:29.227132 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:16:29.261162 systemd[1]: Switching root.
Sep 11 00:16:29.312150 systemd-journald[220]: Journal stopped
Sep 11 00:16:30.712849 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:16:30.712923 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:16:30.712937 kernel: SELinux: policy capability open_perms=1
Sep 11 00:16:30.712949 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:16:30.712963 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:16:30.712975 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:16:30.712991 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:16:30.713003 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:16:30.713014 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:16:30.713026 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:16:30.713038 kernel: audit: type=1403 audit(1757549789.852:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:16:30.713067 systemd[1]: Successfully loaded SELinux policy in 68.895ms.
Sep 11 00:16:30.713091 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.891ms.
Sep 11 00:16:30.713107 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:16:30.713120 systemd[1]: Detected virtualization kvm.
Sep 11 00:16:30.713132 systemd[1]: Detected architecture x86-64.
Sep 11 00:16:30.713144 systemd[1]: Detected first boot.
Sep 11 00:16:30.713156 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:16:30.713169 zram_generator::config[1131]: No configuration found.
Sep 11 00:16:30.713182 kernel: Guest personality initialized and is inactive
Sep 11 00:16:30.713196 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 11 00:16:30.713228 kernel: Initialized host personality
Sep 11 00:16:30.713239 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:16:30.713251 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:16:30.713264 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:16:30.713276 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:16:30.713289 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:16:30.713301 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:16:30.713313 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:16:30.713325 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:16:30.713351 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:16:30.713365 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:16:30.713382 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:16:30.713395 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:16:30.713408 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:16:30.713420 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:16:30.713432 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:16:30.713445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:16:30.713457 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:16:30.713474 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:16:30.713487 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:16:30.713501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:16:30.713514 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 11 00:16:30.713526 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:16:30.713538 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:16:30.713551 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:16:30.713568 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:16:30.713580 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:16:30.713592 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:16:30.713604 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:16:30.713617 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:16:30.713629 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:16:30.713641 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:16:30.713654 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:16:30.713666 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:16:30.713683 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:16:30.713698 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:16:30.713714 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:16:30.713730 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:16:30.713746 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:16:30.713762 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:16:30.713779 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:16:30.713797 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:16:30.713814 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:16:30.713839 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:16:30.713864 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:16:30.713882 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:16:30.713900 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:16:30.713921 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:16:30.713935 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:16:30.713949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:16:30.713966 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:16:30.713983 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:16:30.714016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:16:30.714035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:16:30.714052 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:16:30.714129 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:16:30.714148 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:16:30.714166 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:16:30.714182 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:16:30.714225 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:16:30.714251 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:16:30.714268 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:16:30.714289 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:16:30.714305 kernel: loop: module loaded
Sep 11 00:16:30.714320 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:16:30.714335 kernel: fuse: init (API version 7.41)
Sep 11 00:16:30.714362 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:16:30.714388 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:16:30.714404 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:16:30.714419 kernel: ACPI: bus type drm_connector registered
Sep 11 00:16:30.714435 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:16:30.714451 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:16:30.714467 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:16:30.714483 systemd[1]: Stopped verity-setup.service.
Sep 11 00:16:30.714538 systemd-journald[1195]: Collecting audit messages is disabled.
Sep 11 00:16:30.714577 systemd-journald[1195]: Journal started Sep 11 00:16:30.714616 systemd-journald[1195]: Runtime Journal (/run/log/journal/8ce332ea1f8747abbf732acaa4a2fec7) is 6M, max 48.4M, 42.4M free. Sep 11 00:16:30.441235 systemd[1]: Queued start job for default target multi-user.target. Sep 11 00:16:30.464315 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 11 00:16:30.464797 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 11 00:16:30.742229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:30.747393 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:16:30.749162 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 11 00:16:30.750503 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 11 00:16:30.752028 systemd[1]: Mounted media.mount - External Media Directory. Sep 11 00:16:30.753280 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 11 00:16:30.754620 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 11 00:16:30.756024 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 11 00:16:30.757533 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 11 00:16:30.759279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:16:30.760951 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 11 00:16:30.761262 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 11 00:16:30.762789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:16:30.763020 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:16:30.764570 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 11 00:16:30.764807 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:16:30.766361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:16:30.766610 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:16:30.768374 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 11 00:16:30.768609 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 11 00:16:30.770146 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:16:30.770618 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:16:30.772216 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:16:30.773954 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:16:30.775756 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 11 00:16:30.777469 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 11 00:16:30.793227 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:16:30.795960 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 11 00:16:30.798503 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 11 00:16:30.799839 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 11 00:16:30.799882 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:16:30.802305 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 11 00:16:30.809031 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 11 00:16:30.810353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:16:30.812129 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 11 00:16:30.816323 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 11 00:16:30.817787 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:16:30.818980 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 11 00:16:30.820388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:16:30.822671 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:16:30.825980 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 11 00:16:30.838106 systemd-journald[1195]: Time spent on flushing to /var/log/journal/8ce332ea1f8747abbf732acaa4a2fec7 is 33.309ms for 1072 entries. Sep 11 00:16:30.838106 systemd-journald[1195]: System Journal (/var/log/journal/8ce332ea1f8747abbf732acaa4a2fec7) is 8M, max 195.6M, 187.6M free. Sep 11 00:16:30.897471 systemd-journald[1195]: Received client request to flush runtime journal. Sep 11 00:16:30.897543 kernel: loop0: detected capacity change from 0 to 224512 Sep 11 00:16:30.897577 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 11 00:16:30.830337 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:16:30.834318 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 11 00:16:30.835908 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Sep 11 00:16:30.847954 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 11 00:16:30.849434 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 11 00:16:30.852495 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 11 00:16:30.864366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:16:30.885837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:16:30.891171 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Sep 11 00:16:30.891184 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Sep 11 00:16:30.898522 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:16:30.900769 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 11 00:16:30.906833 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 11 00:16:30.918251 kernel: loop1: detected capacity change from 0 to 111000 Sep 11 00:16:30.921474 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 11 00:16:30.954837 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 11 00:16:30.957224 kernel: loop2: detected capacity change from 0 to 128016 Sep 11 00:16:30.958894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:16:30.985227 kernel: loop3: detected capacity change from 0 to 224512 Sep 11 00:16:30.994047 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 11 00:16:30.994072 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Sep 11 00:16:31.000126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 11 00:16:31.005240 kernel: loop4: detected capacity change from 0 to 111000 Sep 11 00:16:31.014317 kernel: loop5: detected capacity change from 0 to 128016 Sep 11 00:16:31.024043 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 11 00:16:31.024823 (sd-merge)[1274]: Merged extensions into '/usr'. Sep 11 00:16:31.032347 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)... Sep 11 00:16:31.032548 systemd[1]: Reloading... Sep 11 00:16:31.133258 zram_generator::config[1302]: No configuration found. Sep 11 00:16:31.352869 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 00:16:31.378042 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 11 00:16:31.378455 systemd[1]: Reloading finished in 345 ms. Sep 11 00:16:31.412022 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 00:16:31.430501 systemd[1]: Starting ensure-sysext.service... Sep 11 00:16:31.442821 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:16:31.464337 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:16:31.464353 systemd[1]: Reloading... Sep 11 00:16:31.522992 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:16:31.523053 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 00:16:31.523470 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 00:16:31.523792 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Sep 11 00:16:31.525071 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:16:31.526162 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Sep 11 00:16:31.526417 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Sep 11 00:16:31.534247 zram_generator::config[1367]: No configuration found. Sep 11 00:16:31.534737 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:16:31.534760 systemd-tmpfiles[1339]: Skipping /boot Sep 11 00:16:31.561825 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:16:31.561848 systemd-tmpfiles[1339]: Skipping /boot Sep 11 00:16:31.740490 systemd[1]: Reloading finished in 275 ms. Sep 11 00:16:31.765262 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 11 00:16:31.793951 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:16:31.804876 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:16:31.807894 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 00:16:31.823759 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 11 00:16:31.827894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:16:31.831610 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 00:16:31.834230 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 11 00:16:31.840640 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:31.840998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 11 00:16:31.845749 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:16:31.849571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:16:31.852138 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:16:31.853489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:16:31.853615 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:16:31.858078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:16:31.859535 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:31.862539 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:16:31.864612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:16:31.864925 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:16:31.866876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:16:31.867733 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:16:31.869960 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:16:31.870257 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:16:31.884102 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 00:16:31.894699 systemd-udevd[1425]: Using default interface naming scheme 'v255'. 
Sep 11 00:16:31.938365 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:31.938574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:16:31.940173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:16:31.942912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:16:31.945954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:16:31.947338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:16:31.947465 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:16:31.955686 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 00:16:31.967888 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 11 00:16:31.969248 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:31.971284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:16:31.971523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:16:31.973550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:16:31.973814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:16:31.975899 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:16:31.976167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 11 00:16:31.984771 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 00:16:31.991516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:31.991928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:16:31.995519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:16:32.003157 augenrules[1450]: No rules Sep 11 00:16:32.004615 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:16:32.012503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:16:32.018588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:16:32.019939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:16:32.020054 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:16:32.020197 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:16:32.021141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:16:32.023134 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:16:32.023450 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:16:32.025070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:16:32.025352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:16:32.030430 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 11 00:16:32.032702 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:16:32.034803 systemd[1]: Finished ensure-sysext.service. Sep 11 00:16:32.036303 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 00:16:32.038193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:16:32.038462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:16:32.041856 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:16:32.042867 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:16:32.047843 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 00:16:32.064963 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:16:32.066135 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:16:32.066214 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:16:32.068118 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 11 00:16:32.069323 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:16:32.133697 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 11 00:16:32.199251 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 11 00:16:32.204221 kernel: ACPI: button: Power Button [PWRF] Sep 11 00:16:32.225670 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 11 00:16:32.229247 kernel: mousedev: PS/2 mouse device common for all mice Sep 11 00:16:32.229636 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:16:32.254450 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:16:32.258111 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 11 00:16:32.258466 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 11 00:16:32.258638 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 11 00:16:32.290899 systemd-networkd[1496]: lo: Link UP Sep 11 00:16:32.290914 systemd-networkd[1496]: lo: Gained carrier Sep 11 00:16:32.292881 systemd-networkd[1496]: Enumeration completed Sep 11 00:16:32.292999 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:16:32.295980 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:16:32.298383 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:16:32.299765 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:16:32.299776 systemd-networkd[1496]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:16:32.300410 systemd-networkd[1496]: eth0: Link UP Sep 11 00:16:32.300591 systemd-networkd[1496]: eth0: Gained carrier Sep 11 00:16:32.300606 systemd-networkd[1496]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 11 00:16:32.313470 systemd-networkd[1496]: eth0: DHCPv4 address 10.0.0.70/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:16:32.329773 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 00:16:32.348331 systemd-resolved[1409]: Positive Trust Anchors: Sep 11 00:16:32.348355 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:16:32.348386 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:16:32.356531 systemd-resolved[1409]: Defaulting to hostname 'linux'. Sep 11 00:16:32.358345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:16:32.360101 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:16:32.363159 systemd[1]: Reached target network.target - Network. Sep 11 00:16:32.366421 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:16:32.383167 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 11 00:16:32.387436 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 11 00:16:32.387489 systemd-timesyncd[1497]: Initial clock synchronization to Thu 2025-09-11 00:16:32.653854 UTC. Sep 11 00:16:32.388873 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 00:16:32.428246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 11 00:16:32.428578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:16:32.440677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:16:32.476794 kernel: kvm_amd: TSC scaling supported Sep 11 00:16:32.476861 kernel: kvm_amd: Nested Virtualization enabled Sep 11 00:16:32.476875 kernel: kvm_amd: Nested Paging enabled Sep 11 00:16:32.476887 kernel: kvm_amd: LBR virtualization supported Sep 11 00:16:32.476900 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 11 00:16:32.478227 kernel: kvm_amd: Virtual GIF supported Sep 11 00:16:32.517272 kernel: EDAC MC: Ver: 3.0.0 Sep 11 00:16:32.541710 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:16:32.543307 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:16:32.544498 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 00:16:32.545740 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:16:32.547024 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 11 00:16:32.548574 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 00:16:32.549858 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 00:16:32.551108 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 00:16:32.552380 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 00:16:32.552425 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:16:32.553495 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:16:32.555668 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Sep 11 00:16:32.559138 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:16:32.563548 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:16:32.565041 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:16:32.566471 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:16:32.570981 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 00:16:32.572740 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 00:16:32.574627 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:16:32.576594 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:16:32.577583 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:16:32.578570 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:16:32.578605 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:16:32.579767 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 00:16:32.581951 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 00:16:32.583970 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:16:32.586347 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:16:32.590388 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:16:32.591516 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:16:32.601427 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Sep 11 00:16:32.602481 jq[1548]: false Sep 11 00:16:32.604322 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:16:32.610415 extend-filesystems[1549]: Found /dev/vda6 Sep 11 00:16:32.612556 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 00:16:32.613784 extend-filesystems[1549]: Found /dev/vda9 Sep 11 00:16:32.618488 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:16:32.619723 extend-filesystems[1549]: Checking size of /dev/vda9 Sep 11 00:16:32.621997 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Refreshing passwd entry cache Sep 11 00:16:32.622630 oslogin_cache_refresh[1550]: Refreshing passwd entry cache Sep 11 00:16:32.622778 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 00:16:32.629411 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 00:16:32.631675 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 00:16:32.633533 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Failure getting users, quitting Sep 11 00:16:32.633587 oslogin_cache_refresh[1550]: Failure getting users, quitting Sep 11 00:16:32.634079 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:16:32.634079 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Refreshing group entry cache Sep 11 00:16:32.633611 oslogin_cache_refresh[1550]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:16:32.633679 oslogin_cache_refresh[1550]: Refreshing group entry cache Sep 11 00:16:32.635068 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 11 00:16:32.635908 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:16:32.639743 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:16:32.641275 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Failure getting groups, quitting Sep 11 00:16:32.641275 google_oslogin_nss_cache[1550]: oslogin_cache_refresh[1550]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:16:32.639935 oslogin_cache_refresh[1550]: Failure getting groups, quitting Sep 11 00:16:32.639952 oslogin_cache_refresh[1550]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:16:32.644854 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:16:32.646695 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:16:32.646959 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 00:16:32.647323 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:16:32.648379 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:16:32.649865 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 00:16:32.651264 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:16:32.654101 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:16:32.655050 extend-filesystems[1549]: Resized partition /dev/vda9 Sep 11 00:16:32.655443 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 11 00:16:32.663228 update_engine[1568]: I20250911 00:16:32.662045 1568 main.cc:92] Flatcar Update Engine starting Sep 11 00:16:32.672652 extend-filesystems[1576]: resize2fs 1.47.2 (1-Jan-2025) Sep 11 00:16:32.678685 jq[1569]: true Sep 11 00:16:32.692873 (ntainerd)[1588]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:16:32.701918 jq[1587]: true Sep 11 00:16:32.737547 dbus-daemon[1546]: [system] SELinux support is enabled Sep 11 00:16:32.738159 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 00:16:32.741784 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:16:32.741816 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:16:32.798240 update_engine[1568]: I20250911 00:16:32.745060 1568 update_check_scheduler.cc:74] Next update check in 6m26s Sep 11 00:16:32.743125 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 00:16:32.743140 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 00:16:32.799813 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:16:32.800096 systemd-logind[1564]: Watching system buttons on /dev/input/event2 (Power Button) Sep 11 00:16:32.800121 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:16:32.800617 systemd-logind[1564]: New seat seat0. Sep 11 00:16:32.807441 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 00:16:32.809227 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 11 00:16:32.812858 tar[1574]: linux-amd64/LICENSE Sep 11 00:16:32.813321 tar[1574]: linux-amd64/helm Sep 11 00:16:32.850251 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 11 00:16:32.940194 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:16:33.026298 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 11 00:16:33.058581 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 11 00:16:33.058581 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 11 00:16:33.058581 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 11 00:16:33.063941 extend-filesystems[1549]: Resized filesystem in /dev/vda9 Sep 11 00:16:33.065373 bash[1606]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:16:33.063336 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:16:33.065753 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:16:33.063809 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 00:16:33.070373 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:16:33.074934 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 00:16:33.108132 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:16:33.119563 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 00:16:33.147375 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:16:33.147972 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:16:33.152755 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 00:16:33.205311 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Sep 11 00:16:33.210816 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:16:33.216640 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:16:33.234847 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:16:33.281672 containerd[1588]: time="2025-09-11T00:16:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:16:33.282710 containerd[1588]: time="2025-09-11T00:16:33.282666882Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 11 00:16:33.300025 containerd[1588]: time="2025-09-11T00:16:33.299713469Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.161µs" Sep 11 00:16:33.300025 containerd[1588]: time="2025-09-11T00:16:33.299784839Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:16:33.300025 containerd[1588]: time="2025-09-11T00:16:33.299825184Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:16:33.300216 containerd[1588]: time="2025-09-11T00:16:33.300134093Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:16:33.300216 containerd[1588]: time="2025-09-11T00:16:33.300168464Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:16:33.300273 containerd[1588]: time="2025-09-11T00:16:33.300247723Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:16:33.300671 containerd[1588]: time="2025-09-11T00:16:33.300462207Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:16:33.300718 containerd[1588]: time="2025-09-11T00:16:33.300694643Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:16:33.301127 containerd[1588]: time="2025-09-11T00:16:33.301077511Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:16:33.301127 containerd[1588]: time="2025-09-11T00:16:33.301105028Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:16:33.301127 containerd[1588]: time="2025-09-11T00:16:33.301117576Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:16:33.301127 containerd[1588]: time="2025-09-11T00:16:33.301125827Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:16:33.301267 containerd[1588]: time="2025-09-11T00:16:33.301254198Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 00:16:33.301583 containerd[1588]: time="2025-09-11T00:16:33.301544536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:16:33.301621 containerd[1588]: time="2025-09-11T00:16:33.301583481Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:16:33.301621 containerd[1588]: time="2025-09-11T00:16:33.301593772Z" level=info msg="loading plugin" 
id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:16:33.301670 containerd[1588]: time="2025-09-11T00:16:33.301633453Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:16:33.301979 containerd[1588]: time="2025-09-11T00:16:33.301932849Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:16:33.302042 containerd[1588]: time="2025-09-11T00:16:33.302021612Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309587444Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309665337Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309687139Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309702379Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309724181Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309739968Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309761171Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309774608Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309785458Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309795541Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309804972Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:16:33.309808 containerd[1588]: time="2025-09-11T00:16:33.309818234Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:16:33.310259 containerd[1588]: time="2025-09-11T00:16:33.310029882Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:16:33.310259 containerd[1588]: time="2025-09-11T00:16:33.310056653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:16:33.310259 containerd[1588]: time="2025-09-11T00:16:33.310083746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:16:33.310259 containerd[1588]: time="2025-09-11T00:16:33.310098727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:16:33.310259 containerd[1588]: time="2025-09-11T00:16:33.310109596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:16:33.310259 containerd[1588]: time="2025-09-11T00:16:33.310120881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:16:33.310259 containerd[1588]: time="2025-09-11T00:16:33.310132424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:16:33.310259 
containerd[1588]: time="2025-09-11T00:16:33.310142290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:16:33.310259 containerd[1588]: time="2025-09-11T00:16:33.310152829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:16:33.310259 containerd[1588]: time="2025-09-11T00:16:33.310162767Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:16:33.310259 containerd[1588]: time="2025-09-11T00:16:33.310173658Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:16:33.310579 containerd[1588]: time="2025-09-11T00:16:33.310311864Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:16:33.310579 containerd[1588]: time="2025-09-11T00:16:33.310331689Z" level=info msg="Start snapshots syncer" Sep 11 00:16:33.310579 containerd[1588]: time="2025-09-11T00:16:33.310360283Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:16:33.310735 containerd[1588]: time="2025-09-11T00:16:33.310679381Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:16:33.310735 containerd[1588]: time="2025-09-11T00:16:33.310736547Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:16:33.312346 containerd[1588]: time="2025-09-11T00:16:33.312293985Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:16:33.312460 containerd[1588]: time="2025-09-11T00:16:33.312433165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:16:33.312512 containerd[1588]: time="2025-09-11T00:16:33.312476252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:16:33.312512 containerd[1588]: time="2025-09-11T00:16:33.312492900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:16:33.312593 containerd[1588]: time="2025-09-11T00:16:33.312516296Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:16:33.312593 containerd[1588]: time="2025-09-11T00:16:33.312542456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:16:33.312593 containerd[1588]: time="2025-09-11T00:16:33.312563348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:16:33.312593 containerd[1588]: time="2025-09-11T00:16:33.312581331Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:16:33.312707 containerd[1588]: time="2025-09-11T00:16:33.312612874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:16:33.312707 containerd[1588]: time="2025-09-11T00:16:33.312627948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:16:33.312707 containerd[1588]: time="2025-09-11T00:16:33.312661904Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:16:33.312791 containerd[1588]: time="2025-09-11T00:16:33.312713305Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:16:33.312791 containerd[1588]: time="2025-09-11T00:16:33.312738399Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:16:33.312791 containerd[1588]: time="2025-09-11T00:16:33.312749135Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:16:33.312791 containerd[1588]: time="2025-09-11T00:16:33.312758649Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:16:33.312791 containerd[1588]: time="2025-09-11T00:16:33.312766465Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:16:33.312791 containerd[1588]: time="2025-09-11T00:16:33.312775379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:16:33.312791 containerd[1588]: time="2025-09-11T00:16:33.312785421Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:16:33.312991 containerd[1588]: time="2025-09-11T00:16:33.312808496Z" level=info msg="runtime interface created" Sep 11 00:16:33.312991 containerd[1588]: time="2025-09-11T00:16:33.312814729Z" level=info msg="created NRI interface" Sep 11 00:16:33.312991 containerd[1588]: time="2025-09-11T00:16:33.312822648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:16:33.312991 containerd[1588]: time="2025-09-11T00:16:33.312832763Z" level=info msg="Connect containerd service" Sep 11 00:16:33.312991 containerd[1588]: time="2025-09-11T00:16:33.312870094Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:16:33.313835 
containerd[1588]: time="2025-09-11T00:16:33.313789339Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:16:33.380514 systemd-networkd[1496]: eth0: Gained IPv6LL Sep 11 00:16:33.384346 tar[1574]: linux-amd64/README.md Sep 11 00:16:33.384752 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:16:33.389007 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:16:33.394184 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 00:16:33.402805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:16:33.406446 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:16:33.408770 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:16:33.449869 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:16:33.455650 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 00:16:33.455963 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 11 00:16:33.458555 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 11 00:16:33.541272 containerd[1588]: time="2025-09-11T00:16:33.541129293Z" level=info msg="Start subscribing containerd event" Sep 11 00:16:33.541272 containerd[1588]: time="2025-09-11T00:16:33.541223905Z" level=info msg="Start recovering state" Sep 11 00:16:33.541423 containerd[1588]: time="2025-09-11T00:16:33.541386523Z" level=info msg="Start event monitor" Sep 11 00:16:33.541423 containerd[1588]: time="2025-09-11T00:16:33.541407569Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:16:33.541423 containerd[1588]: time="2025-09-11T00:16:33.541417601Z" level=info msg="Start streaming server" Sep 11 00:16:33.541527 containerd[1588]: time="2025-09-11T00:16:33.541429713Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:16:33.541527 containerd[1588]: time="2025-09-11T00:16:33.541438078Z" level=info msg="runtime interface starting up..." Sep 11 00:16:33.541527 containerd[1588]: time="2025-09-11T00:16:33.541444528Z" level=info msg="starting plugins..." Sep 11 00:16:33.541527 containerd[1588]: time="2025-09-11T00:16:33.541462189Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:16:33.541975 containerd[1588]: time="2025-09-11T00:16:33.541935229Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:16:33.542130 containerd[1588]: time="2025-09-11T00:16:33.542097257Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:16:33.542309 containerd[1588]: time="2025-09-11T00:16:33.542200482Z" level=info msg="containerd successfully booted in 0.261352s" Sep 11 00:16:33.542365 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:16:34.890460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:34.892209 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:16:34.910236 systemd[1]: Startup finished in 3.632s (kernel) + 7.192s (initrd) + 5.125s (userspace) = 15.950s. 
Sep 11 00:16:34.915658 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:16:35.510367 kubelet[1680]: E0911 00:16:35.510297 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:16:35.515468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:16:35.515716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:16:35.516168 systemd[1]: kubelet.service: Consumed 1.756s CPU time, 265.6M memory peak. Sep 11 00:16:36.325320 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:16:36.327099 systemd[1]: Started sshd@0-10.0.0.70:22-10.0.0.1:42330.service - OpenSSH per-connection server daemon (10.0.0.1:42330). Sep 11 00:16:36.486470 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 42330 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:16:36.488819 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:36.503737 systemd-logind[1564]: New session 1 of user core. Sep 11 00:16:36.505540 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:16:36.507270 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:16:36.546526 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:16:36.549349 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 11 00:16:36.586062 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:16:36.589152 systemd-logind[1564]: New session c1 of user core. Sep 11 00:16:36.775299 systemd[1698]: Queued start job for default target default.target. Sep 11 00:16:36.799830 systemd[1698]: Created slice app.slice - User Application Slice. Sep 11 00:16:36.799862 systemd[1698]: Reached target paths.target - Paths. Sep 11 00:16:36.799907 systemd[1698]: Reached target timers.target - Timers. Sep 11 00:16:36.801685 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:16:36.816202 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:16:36.816418 systemd[1698]: Reached target sockets.target - Sockets. Sep 11 00:16:36.816483 systemd[1698]: Reached target basic.target - Basic System. Sep 11 00:16:36.816552 systemd[1698]: Reached target default.target - Main User Target. Sep 11 00:16:36.816597 systemd[1698]: Startup finished in 218ms. Sep 11 00:16:36.818080 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:16:36.825454 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:16:36.896858 systemd[1]: Started sshd@1-10.0.0.70:22-10.0.0.1:42338.service - OpenSSH per-connection server daemon (10.0.0.1:42338). Sep 11 00:16:36.966871 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 42338 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:16:36.968908 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:36.974060 systemd-logind[1564]: New session 2 of user core. Sep 11 00:16:36.987539 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 11 00:16:37.046251 sshd[1712]: Connection closed by 10.0.0.1 port 42338 Sep 11 00:16:37.046829 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:37.056560 systemd[1]: sshd@1-10.0.0.70:22-10.0.0.1:42338.service: Deactivated successfully. Sep 11 00:16:37.059643 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 00:16:37.060711 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. Sep 11 00:16:37.065342 systemd[1]: Started sshd@2-10.0.0.70:22-10.0.0.1:42340.service - OpenSSH per-connection server daemon (10.0.0.1:42340). Sep 11 00:16:37.066115 systemd-logind[1564]: Removed session 2. Sep 11 00:16:37.123687 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 42340 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:16:37.125815 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:37.132296 systemd-logind[1564]: New session 3 of user core. Sep 11 00:16:37.149530 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:16:37.203454 sshd[1721]: Connection closed by 10.0.0.1 port 42340 Sep 11 00:16:37.204020 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:37.215306 systemd[1]: sshd@2-10.0.0.70:22-10.0.0.1:42340.service: Deactivated successfully. Sep 11 00:16:37.217740 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 00:16:37.218900 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. Sep 11 00:16:37.222311 systemd[1]: Started sshd@3-10.0.0.70:22-10.0.0.1:42356.service - OpenSSH per-connection server daemon (10.0.0.1:42356). Sep 11 00:16:37.223080 systemd-logind[1564]: Removed session 3. 
Sep 11 00:16:37.280135 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 42356 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:16:37.281679 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:37.286318 systemd-logind[1564]: New session 4 of user core. Sep 11 00:16:37.305372 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:16:37.361788 sshd[1730]: Connection closed by 10.0.0.1 port 42356 Sep 11 00:16:37.362178 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:37.372069 systemd[1]: sshd@3-10.0.0.70:22-10.0.0.1:42356.service: Deactivated successfully. Sep 11 00:16:37.374053 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:16:37.374874 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:16:37.377737 systemd[1]: Started sshd@4-10.0.0.70:22-10.0.0.1:42366.service - OpenSSH per-connection server daemon (10.0.0.1:42366). Sep 11 00:16:37.378510 systemd-logind[1564]: Removed session 4. Sep 11 00:16:37.439268 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 42366 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:16:37.441368 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:37.447149 systemd-logind[1564]: New session 5 of user core. Sep 11 00:16:37.456514 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 11 00:16:37.521241 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:16:37.521596 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:16:37.547757 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 11 00:16:37.550038 sshd[1739]: Connection closed by 10.0.0.1 port 42366 Sep 11 00:16:37.550488 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:37.575584 systemd[1]: sshd@4-10.0.0.70:22-10.0.0.1:42366.service: Deactivated successfully. Sep 11 00:16:37.577676 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:16:37.578709 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:16:37.582024 systemd[1]: Started sshd@5-10.0.0.70:22-10.0.0.1:42374.service - OpenSSH per-connection server daemon (10.0.0.1:42374). Sep 11 00:16:37.582835 systemd-logind[1564]: Removed session 5. Sep 11 00:16:37.636414 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 42374 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:16:37.638354 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:37.644103 systemd-logind[1564]: New session 6 of user core. Sep 11 00:16:37.659625 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 11 00:16:37.719558 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:16:37.719962 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:16:37.727286 sudo[1751]: pam_unix(sudo:session): session closed for user root Sep 11 00:16:37.734005 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:16:37.734345 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:16:37.744569 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:16:37.793766 augenrules[1773]: No rules Sep 11 00:16:37.795587 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:16:37.795893 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:16:37.797310 sudo[1750]: pam_unix(sudo:session): session closed for user root Sep 11 00:16:37.798962 sshd[1749]: Connection closed by 10.0.0.1 port 42374 Sep 11 00:16:37.799356 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Sep 11 00:16:37.812988 systemd[1]: sshd@5-10.0.0.70:22-10.0.0.1:42374.service: Deactivated successfully. Sep 11 00:16:37.814973 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:16:37.815907 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. Sep 11 00:16:37.818730 systemd[1]: Started sshd@6-10.0.0.70:22-10.0.0.1:42390.service - OpenSSH per-connection server daemon (10.0.0.1:42390). Sep 11 00:16:37.819542 systemd-logind[1564]: Removed session 6. Sep 11 00:16:37.871256 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 42390 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:16:37.872706 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:16:37.877492 systemd-logind[1564]: New session 7 of user core. 
Sep 11 00:16:37.890369 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:16:37.945376 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:16:37.945713 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:16:38.713307 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:16:38.731985 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:16:39.310489 dockerd[1806]: time="2025-09-11T00:16:39.310355526Z" level=info msg="Starting up" Sep 11 00:16:39.311466 dockerd[1806]: time="2025-09-11T00:16:39.311414604Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:16:39.367500 dockerd[1806]: time="2025-09-11T00:16:39.367435228Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 11 00:16:40.120311 dockerd[1806]: time="2025-09-11T00:16:40.120194828Z" level=info msg="Loading containers: start." Sep 11 00:16:40.132239 kernel: Initializing XFRM netlink socket Sep 11 00:16:40.487117 systemd-networkd[1496]: docker0: Link UP Sep 11 00:16:40.495604 dockerd[1806]: time="2025-09-11T00:16:40.495444110Z" level=info msg="Loading containers: done." 
Sep 11 00:16:40.514708 dockerd[1806]: time="2025-09-11T00:16:40.514616331Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:16:40.514914 dockerd[1806]: time="2025-09-11T00:16:40.514791193Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 11 00:16:40.514964 dockerd[1806]: time="2025-09-11T00:16:40.514929962Z" level=info msg="Initializing buildkit" Sep 11 00:16:40.556138 dockerd[1806]: time="2025-09-11T00:16:40.556050309Z" level=info msg="Completed buildkit initialization" Sep 11 00:16:40.563898 dockerd[1806]: time="2025-09-11T00:16:40.563805613Z" level=info msg="Daemon has completed initialization" Sep 11 00:16:40.564084 dockerd[1806]: time="2025-09-11T00:16:40.563948361Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:16:40.564235 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:16:41.568598 containerd[1588]: time="2025-09-11T00:16:41.568538144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 11 00:16:42.146038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235366943.mount: Deactivated successfully. 
Sep 11 00:16:43.584279 containerd[1588]: time="2025-09-11T00:16:43.584148917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:43.586692 containerd[1588]: time="2025-09-11T00:16:43.586643507Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 11 00:16:43.588732 containerd[1588]: time="2025-09-11T00:16:43.588681250Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:43.592088 containerd[1588]: time="2025-09-11T00:16:43.592024351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:43.593653 containerd[1588]: time="2025-09-11T00:16:43.593588256Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.024987567s" Sep 11 00:16:43.593653 containerd[1588]: time="2025-09-11T00:16:43.593638779Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 11 00:16:43.594698 containerd[1588]: time="2025-09-11T00:16:43.594656705Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 11 00:16:45.741846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Sep 11 00:16:45.744163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:16:46.155443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:46.161012 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:16:46.295610 kubelet[2095]: E0911 00:16:46.295521 2095 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:16:46.302350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:16:46.302559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:16:46.303050 systemd[1]: kubelet.service: Consumed 387ms CPU time, 113.4M memory peak. 
Sep 11 00:16:46.506018 containerd[1588]: time="2025-09-11T00:16:46.505897368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:46.525446 containerd[1588]: time="2025-09-11T00:16:46.525347688Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 11 00:16:46.536508 containerd[1588]: time="2025-09-11T00:16:46.536415288Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:46.547816 containerd[1588]: time="2025-09-11T00:16:46.547686152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:46.550218 containerd[1588]: time="2025-09-11T00:16:46.549577307Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.954886373s" Sep 11 00:16:46.550218 containerd[1588]: time="2025-09-11T00:16:46.549646641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 11 00:16:46.550942 containerd[1588]: time="2025-09-11T00:16:46.550865912Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 11 00:16:48.571894 containerd[1588]: time="2025-09-11T00:16:48.571798254Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:48.573559 containerd[1588]: time="2025-09-11T00:16:48.573487748Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 11 00:16:48.575037 containerd[1588]: time="2025-09-11T00:16:48.574993388Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:48.578578 containerd[1588]: time="2025-09-11T00:16:48.578479589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:48.579709 containerd[1588]: time="2025-09-11T00:16:48.579640069Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.028720346s" Sep 11 00:16:48.579709 containerd[1588]: time="2025-09-11T00:16:48.579691445Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 11 00:16:48.580562 containerd[1588]: time="2025-09-11T00:16:48.580490180Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 11 00:16:51.748375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995208875.mount: Deactivated successfully. 
Sep 11 00:16:53.978807 containerd[1588]: time="2025-09-11T00:16:53.978708607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:53.981601 containerd[1588]: time="2025-09-11T00:16:53.981469373Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 11 00:16:53.983412 containerd[1588]: time="2025-09-11T00:16:53.983327354Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:53.986725 containerd[1588]: time="2025-09-11T00:16:53.986626307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:53.987229 containerd[1588]: time="2025-09-11T00:16:53.987136696Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 5.406606177s" Sep 11 00:16:53.987229 containerd[1588]: time="2025-09-11T00:16:53.987185821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 11 00:16:53.988102 containerd[1588]: time="2025-09-11T00:16:53.988057959Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 11 00:16:54.844636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545647739.mount: Deactivated successfully. 
Sep 11 00:16:56.491804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:16:56.493519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:16:56.733276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:16:56.756997 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:16:57.502337 kubelet[2136]: E0911 00:16:57.502246 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:16:57.506946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:16:57.507194 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:16:57.507882 systemd[1]: kubelet.service: Consumed 398ms CPU time, 110.9M memory peak. 
Sep 11 00:16:59.453007 containerd[1588]: time="2025-09-11T00:16:59.452802016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:59.454692 containerd[1588]: time="2025-09-11T00:16:59.454592672Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 11 00:16:59.457344 containerd[1588]: time="2025-09-11T00:16:59.457257440Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:59.463678 containerd[1588]: time="2025-09-11T00:16:59.463581205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:16:59.464737 containerd[1588]: time="2025-09-11T00:16:59.464685135Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 5.476580863s" Sep 11 00:16:59.464737 containerd[1588]: time="2025-09-11T00:16:59.464722514Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 11 00:16:59.465651 containerd[1588]: time="2025-09-11T00:16:59.465399773Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:17:01.688712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1152030681.mount: Deactivated successfully. 
Sep 11 00:17:01.952497 containerd[1588]: time="2025-09-11T00:17:01.952300673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:17:01.994112 containerd[1588]: time="2025-09-11T00:17:01.993979139Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:17:02.039978 containerd[1588]: time="2025-09-11T00:17:02.039874394Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:17:02.169235 containerd[1588]: time="2025-09-11T00:17:02.169111684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:17:02.169879 containerd[1588]: time="2025-09-11T00:17:02.169829868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.704391336s" Sep 11 00:17:02.169964 containerd[1588]: time="2025-09-11T00:17:02.169879646Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:17:02.170485 containerd[1588]: time="2025-09-11T00:17:02.170432083Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 11 00:17:06.062034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount464678585.mount: Deactivated 
successfully. Sep 11 00:17:07.741837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 11 00:17:07.744074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:17:07.981766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:17:07.987023 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:17:08.033775 kubelet[2248]: E0911 00:17:08.033566 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:17:08.037629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:17:08.037824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:17:08.038570 systemd[1]: kubelet.service: Consumed 237ms CPU time, 110.8M memory peak. 
Sep 11 00:17:10.178557 containerd[1588]: time="2025-09-11T00:17:10.178455597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:17:10.179419 containerd[1588]: time="2025-09-11T00:17:10.179388588Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 11 00:17:10.181194 containerd[1588]: time="2025-09-11T00:17:10.181150448Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:17:10.184449 containerd[1588]: time="2025-09-11T00:17:10.184374481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:17:10.185496 containerd[1588]: time="2025-09-11T00:17:10.185448685Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 8.014972319s" Sep 11 00:17:10.185496 containerd[1588]: time="2025-09-11T00:17:10.185491265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 11 00:17:13.061454 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:17:13.061656 systemd[1]: kubelet.service: Consumed 237ms CPU time, 110.8M memory peak. Sep 11 00:17:13.064147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:17:13.095045 systemd[1]: Reload requested from client PID 2289 ('systemctl') (unit session-7.scope)... 
Sep 11 00:17:13.095065 systemd[1]: Reloading... Sep 11 00:17:13.174310 zram_generator::config[2329]: No configuration found. Sep 11 00:17:13.675854 systemd[1]: Reloading finished in 580 ms. Sep 11 00:17:13.748896 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:17:13.749010 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:17:13.749337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:17:13.749390 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.2M memory peak. Sep 11 00:17:13.751083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:17:13.967283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:17:13.996768 (kubelet)[2381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:17:14.049066 kubelet[2381]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:17:14.049066 kubelet[2381]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:17:14.049066 kubelet[2381]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 11 00:17:14.049535 kubelet[2381]: I0911 00:17:14.049153 2381 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:17:14.643146 kubelet[2381]: I0911 00:17:14.643070 2381 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 11 00:17:14.643146 kubelet[2381]: I0911 00:17:14.643113 2381 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:17:14.643524 kubelet[2381]: I0911 00:17:14.643487 2381 server.go:954] "Client rotation is on, will bootstrap in background" Sep 11 00:17:14.801591 kubelet[2381]: E0911 00:17:14.801526 2381 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:14.802987 kubelet[2381]: I0911 00:17:14.802642 2381 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:17:14.824076 kubelet[2381]: I0911 00:17:14.824033 2381 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:17:14.829743 kubelet[2381]: I0911 00:17:14.829665 2381 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:17:14.830056 kubelet[2381]: I0911 00:17:14.829900 2381 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:17:14.830140 kubelet[2381]: I0911 00:17:14.829946 2381 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:17:14.884149 kubelet[2381]: I0911 00:17:14.884048 2381 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 11 00:17:14.884149 kubelet[2381]: I0911 00:17:14.884082 2381 container_manager_linux.go:304] "Creating device plugin manager" Sep 11 00:17:14.884419 kubelet[2381]: I0911 00:17:14.884281 2381 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:17:14.887621 kubelet[2381]: I0911 00:17:14.887587 2381 kubelet.go:446] "Attempting to sync node with API server" Sep 11 00:17:14.887621 kubelet[2381]: I0911 00:17:14.887618 2381 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:17:14.887691 kubelet[2381]: I0911 00:17:14.887645 2381 kubelet.go:352] "Adding apiserver pod source" Sep 11 00:17:14.887691 kubelet[2381]: I0911 00:17:14.887656 2381 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:17:14.890263 kubelet[2381]: W0911 00:17:14.890110 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:14.890263 kubelet[2381]: W0911 00:17:14.890171 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:14.890263 kubelet[2381]: E0911 00:17:14.890266 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:14.890263 kubelet[2381]: E0911 00:17:14.890227 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:14.894871 kubelet[2381]: I0911 00:17:14.894147 2381 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 11 00:17:14.894871 kubelet[2381]: I0911 00:17:14.894571 2381 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:17:14.894871 kubelet[2381]: W0911 00:17:14.894632 2381 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:17:14.897224 kubelet[2381]: I0911 00:17:14.897189 2381 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:17:14.897290 kubelet[2381]: I0911 00:17:14.897240 2381 server.go:1287] "Started kubelet" Sep 11 00:17:14.897329 kubelet[2381]: I0911 00:17:14.897296 2381 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:17:14.898221 kubelet[2381]: I0911 00:17:14.898173 2381 server.go:479] "Adding debug handlers to kubelet server" Sep 11 00:17:14.898775 kubelet[2381]: I0911 00:17:14.898751 2381 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:17:14.898996 kubelet[2381]: I0911 00:17:14.898947 2381 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:17:14.899161 kubelet[2381]: I0911 00:17:14.899144 2381 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:17:14.899439 kubelet[2381]: I0911 00:17:14.899418 2381 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:17:14.903517 kubelet[2381]: E0911 00:17:14.903295 2381 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:14.903517 kubelet[2381]: I0911 00:17:14.903375 2381 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:17:14.903639 kubelet[2381]: I0911 00:17:14.903543 2381 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:17:14.903639 kubelet[2381]: I0911 00:17:14.903595 2381 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:17:14.904013 kubelet[2381]: W0911 00:17:14.903966 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:14.904088 kubelet[2381]: E0911 00:17:14.904023 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:14.905343 kubelet[2381]: E0911 00:17:14.904534 2381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="200ms" Sep 11 00:17:14.905343 kubelet[2381]: I0911 00:17:14.905014 2381 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:17:14.905713 kubelet[2381]: E0911 00:17:14.905690 2381 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:17:14.906699 kubelet[2381]: I0911 00:17:14.906666 2381 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:17:14.906699 kubelet[2381]: I0911 00:17:14.906685 2381 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:17:14.918098 kubelet[2381]: E0911 00:17:14.916836 2381 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864124350184e02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:17:14.897214978 +0000 UTC m=+0.893677231,LastTimestamp:2025-09-11 00:17:14.897214978 +0000 UTC m=+0.893677231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:17:14.924340 kubelet[2381]: I0911 00:17:14.924277 2381 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:17:14.926406 kubelet[2381]: I0911 00:17:14.926085 2381 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 11 00:17:14.926406 kubelet[2381]: I0911 00:17:14.926111 2381 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 00:17:14.926406 kubelet[2381]: I0911 00:17:14.926139 2381 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 11 00:17:14.926406 kubelet[2381]: I0911 00:17:14.926149 2381 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 00:17:14.926406 kubelet[2381]: E0911 00:17:14.926218 2381 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:17:14.927323 kubelet[2381]: W0911 00:17:14.927261 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:14.927484 kubelet[2381]: E0911 00:17:14.927458 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:14.929837 kubelet[2381]: I0911 00:17:14.929361 2381 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:17:14.929837 kubelet[2381]: I0911 00:17:14.929497 2381 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:17:14.929837 kubelet[2381]: I0911 00:17:14.929517 2381 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:17:15.003708 kubelet[2381]: E0911 00:17:15.003672 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:15.026642 kubelet[2381]: E0911 00:17:15.026611 2381 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:17:15.103854 kubelet[2381]: E0911 00:17:15.103809 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:15.105342 kubelet[2381]: E0911 00:17:15.105302 2381 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="400ms" Sep 11 00:17:15.204750 kubelet[2381]: E0911 00:17:15.204575 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:15.226857 kubelet[2381]: E0911 00:17:15.226770 2381 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:17:15.305560 kubelet[2381]: E0911 00:17:15.305476 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:15.406665 kubelet[2381]: E0911 00:17:15.406575 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:15.506736 kubelet[2381]: E0911 00:17:15.506669 2381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="800ms" Sep 11 00:17:15.506736 kubelet[2381]: E0911 00:17:15.506688 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:15.607472 kubelet[2381]: E0911 00:17:15.607378 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:15.627721 kubelet[2381]: E0911 00:17:15.627610 2381 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:17:15.706845 kubelet[2381]: W0911 00:17:15.706743 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:15.707006 kubelet[2381]: E0911 00:17:15.706845 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:15.707758 kubelet[2381]: E0911 00:17:15.707679 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:15.808778 kubelet[2381]: E0911 00:17:15.808566 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:15.821405 kubelet[2381]: W0911 00:17:15.821336 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:15.821405 kubelet[2381]: E0911 00:17:15.821394 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:15.909571 kubelet[2381]: E0911 00:17:15.909461 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:16.010648 kubelet[2381]: E0911 00:17:16.010485 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:16.111374 kubelet[2381]: E0911 00:17:16.111160 
2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:16.194127 kubelet[2381]: W0911 00:17:16.194025 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:16.194127 kubelet[2381]: E0911 00:17:16.194095 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:16.211821 kubelet[2381]: E0911 00:17:16.211741 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:16.308332 kubelet[2381]: E0911 00:17:16.308248 2381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="1.6s" Sep 11 00:17:16.312333 kubelet[2381]: E0911 00:17:16.312288 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:16.391064 kubelet[2381]: I0911 00:17:16.390875 2381 policy_none.go:49] "None policy: Start" Sep 11 00:17:16.391064 kubelet[2381]: I0911 00:17:16.390942 2381 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:17:16.391064 kubelet[2381]: I0911 00:17:16.390964 2381 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:17:16.410415 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 11 00:17:16.412557 kubelet[2381]: E0911 00:17:16.412491 2381 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:17:16.424303 kubelet[2381]: W0911 00:17:16.424151 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:16.424303 kubelet[2381]: E0911 00:17:16.424242 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:16.426565 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:17:16.428179 kubelet[2381]: E0911 00:17:16.428139 2381 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:17:16.430512 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 11 00:17:16.448934 kubelet[2381]: I0911 00:17:16.448819 2381 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:17:16.449227 kubelet[2381]: I0911 00:17:16.449168 2381 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:17:16.449269 kubelet[2381]: I0911 00:17:16.449188 2381 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:17:16.449509 kubelet[2381]: I0911 00:17:16.449484 2381 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:17:16.450475 kubelet[2381]: E0911 00:17:16.450443 2381 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:17:16.450555 kubelet[2381]: E0911 00:17:16.450513 2381 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:17:16.551805 kubelet[2381]: I0911 00:17:16.551682 2381 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:17:16.552480 kubelet[2381]: E0911 00:17:16.552399 2381 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 11 00:17:16.755017 kubelet[2381]: I0911 00:17:16.754956 2381 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:17:16.755593 kubelet[2381]: E0911 00:17:16.755542 2381 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 11 00:17:16.971480 kubelet[2381]: E0911 00:17:16.971415 2381 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed 
while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:17.157697 kubelet[2381]: I0911 00:17:17.157525 2381 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:17:17.158263 kubelet[2381]: E0911 00:17:17.157964 2381 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 11 00:17:17.530124 kubelet[2381]: W0911 00:17:17.530042 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:17.530124 kubelet[2381]: E0911 00:17:17.530101 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:17.582076 kubelet[2381]: W0911 00:17:17.581994 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:17.582076 kubelet[2381]: E0911 00:17:17.582053 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:17.909337 kubelet[2381]: E0911 00:17:17.909135 2381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="3.2s" Sep 11 00:17:17.959846 kubelet[2381]: I0911 00:17:17.959798 2381 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:17:17.960326 kubelet[2381]: E0911 00:17:17.960284 2381 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 11 00:17:17.961374 update_engine[1568]: I20250911 00:17:17.961282 1568 update_attempter.cc:509] Updating boot flags... Sep 11 00:17:18.104529 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
Sep 11 00:17:18.122590 kubelet[2381]: I0911 00:17:18.122530 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:17:18.122590 kubelet[2381]: I0911 00:17:18.122587 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97bf372134c3c5d28d4c32e8e2823ef2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"97bf372134c3c5d28d4c32e8e2823ef2\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:17:18.122774 kubelet[2381]: I0911 00:17:18.122615 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:17:18.122774 kubelet[2381]: I0911 00:17:18.122641 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:17:18.122774 kubelet[2381]: I0911 00:17:18.122674 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97bf372134c3c5d28d4c32e8e2823ef2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"97bf372134c3c5d28d4c32e8e2823ef2\") " pod="kube-system/kube-apiserver-localhost" Sep 11 
00:17:18.122774 kubelet[2381]: I0911 00:17:18.122716 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97bf372134c3c5d28d4c32e8e2823ef2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"97bf372134c3c5d28d4c32e8e2823ef2\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:17:18.122926 kubelet[2381]: I0911 00:17:18.122801 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:17:18.122926 kubelet[2381]: I0911 00:17:18.122840 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:17:18.122926 kubelet[2381]: I0911 00:17:18.122855 2381 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:17:18.202681 kubelet[2381]: E0911 00:17:18.202552 2381 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:17:18.226612 systemd[1]: Created slice kubepods-burstable-pod97bf372134c3c5d28d4c32e8e2823ef2.slice - libcontainer container 
kubepods-burstable-pod97bf372134c3c5d28d4c32e8e2823ef2.slice. Sep 11 00:17:18.247640 kubelet[2381]: E0911 00:17:18.247579 2381 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:17:18.247993 kubelet[2381]: E0911 00:17:18.247970 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:18.248752 containerd[1588]: time="2025-09-11T00:17:18.248700615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:97bf372134c3c5d28d4c32e8e2823ef2,Namespace:kube-system,Attempt:0,}" Sep 11 00:17:18.251474 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 11 00:17:18.253793 kubelet[2381]: E0911 00:17:18.253758 2381 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:17:18.254133 kubelet[2381]: E0911 00:17:18.254113 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:18.254639 containerd[1588]: time="2025-09-11T00:17:18.254580009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 11 00:17:18.503110 kubelet[2381]: E0911 00:17:18.503083 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:18.503726 containerd[1588]: time="2025-09-11T00:17:18.503673969Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 11 00:17:18.524464 kubelet[2381]: W0911 00:17:18.524430 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:18.524564 kubelet[2381]: E0911 00:17:18.524470 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:19.408176 kubelet[2381]: W0911 00:17:19.408129 2381 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.70:6443: connect: connection refused Sep 11 00:17:19.408176 kubelet[2381]: E0911 00:17:19.408184 2381 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:19.562465 kubelet[2381]: I0911 00:17:19.562404 2381 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:17:19.562964 kubelet[2381]: E0911 00:17:19.562900 2381 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.70:6443/api/v1/nodes\": dial tcp 10.0.0.70:6443: connect: connection refused" node="localhost" Sep 11 00:17:20.440273 kubelet[2381]: E0911 00:17:20.440094 2381 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864124350184e02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:17:14.897214978 +0000 UTC m=+0.893677231,LastTimestamp:2025-09-11 00:17:14.897214978 +0000 UTC m=+0.893677231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:17:21.110229 kubelet[2381]: E0911 00:17:21.110108 2381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.70:6443: connect: connection refused" interval="6.4s" Sep 11 00:17:21.165871 containerd[1588]: time="2025-09-11T00:17:21.165373838Z" level=info msg="connecting to shim f488aaf9a3b8a31059cc988022e4de9001c9f500ccde82bf151844a7f9ef08ee" address="unix:///run/containerd/s/ac029a0b64280505a375ed8bf7ba2f0b0db7ab146f136239f03a8a7269fd478d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:17:21.199299 containerd[1588]: time="2025-09-11T00:17:21.196362917Z" level=info msg="connecting to shim 5502afa055c4203c5adfed2a22d5c644de53940ad93bbf901c45986042900fb9" address="unix:///run/containerd/s/f17da67b05ba0bd7bd4b16b63d114f096e321921fcfdc51bc69850711c1b8d75" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:17:21.212514 containerd[1588]: time="2025-09-11T00:17:21.212417478Z" level=info msg="connecting to shim 101aeab431e17a5f040d74efc301b7b8e8a2a77f2874846d85bbf7e2ec6d922b" 
address="unix:///run/containerd/s/e16995c4bc4b1f5db360077506cb8634599f3b363f8b521c29c2fa2073366882" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:17:21.249492 systemd[1]: Started cri-containerd-101aeab431e17a5f040d74efc301b7b8e8a2a77f2874846d85bbf7e2ec6d922b.scope - libcontainer container 101aeab431e17a5f040d74efc301b7b8e8a2a77f2874846d85bbf7e2ec6d922b. Sep 11 00:17:21.272492 systemd[1]: Started cri-containerd-5502afa055c4203c5adfed2a22d5c644de53940ad93bbf901c45986042900fb9.scope - libcontainer container 5502afa055c4203c5adfed2a22d5c644de53940ad93bbf901c45986042900fb9. Sep 11 00:17:21.280323 systemd[1]: Started cri-containerd-f488aaf9a3b8a31059cc988022e4de9001c9f500ccde82bf151844a7f9ef08ee.scope - libcontainer container f488aaf9a3b8a31059cc988022e4de9001c9f500ccde82bf151844a7f9ef08ee. Sep 11 00:17:21.349502 containerd[1588]: time="2025-09-11T00:17:21.349431440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"101aeab431e17a5f040d74efc301b7b8e8a2a77f2874846d85bbf7e2ec6d922b\"" Sep 11 00:17:21.355776 kubelet[2381]: E0911 00:17:21.355726 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:21.358305 kubelet[2381]: E0911 00:17:21.358187 2381 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.70:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:17:21.362946 containerd[1588]: time="2025-09-11T00:17:21.362823662Z" level=info msg="CreateContainer within sandbox 
\"101aeab431e17a5f040d74efc301b7b8e8a2a77f2874846d85bbf7e2ec6d922b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:17:21.369638 containerd[1588]: time="2025-09-11T00:17:21.369572712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"5502afa055c4203c5adfed2a22d5c644de53940ad93bbf901c45986042900fb9\"" Sep 11 00:17:21.370475 kubelet[2381]: E0911 00:17:21.370315 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:21.371894 containerd[1588]: time="2025-09-11T00:17:21.371862541Z" level=info msg="CreateContainer within sandbox \"5502afa055c4203c5adfed2a22d5c644de53940ad93bbf901c45986042900fb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:17:21.385562 containerd[1588]: time="2025-09-11T00:17:21.385498316Z" level=info msg="Container 6e01041505e80449cf860aa15933a4d45ff065bd1b1dae3375f80c9efdc452e5: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:21.405271 containerd[1588]: time="2025-09-11T00:17:21.405129804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:97bf372134c3c5d28d4c32e8e2823ef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f488aaf9a3b8a31059cc988022e4de9001c9f500ccde82bf151844a7f9ef08ee\"" Sep 11 00:17:21.406408 containerd[1588]: time="2025-09-11T00:17:21.406367097Z" level=info msg="CreateContainer within sandbox \"101aeab431e17a5f040d74efc301b7b8e8a2a77f2874846d85bbf7e2ec6d922b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e01041505e80449cf860aa15933a4d45ff065bd1b1dae3375f80c9efdc452e5\"" Sep 11 00:17:21.407059 kubelet[2381]: E0911 00:17:21.407001 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:21.407154 containerd[1588]: time="2025-09-11T00:17:21.407112916Z" level=info msg="StartContainer for \"6e01041505e80449cf860aa15933a4d45ff065bd1b1dae3375f80c9efdc452e5\"" Sep 11 00:17:21.408785 containerd[1588]: time="2025-09-11T00:17:21.408738341Z" level=info msg="connecting to shim 6e01041505e80449cf860aa15933a4d45ff065bd1b1dae3375f80c9efdc452e5" address="unix:///run/containerd/s/e16995c4bc4b1f5db360077506cb8634599f3b363f8b521c29c2fa2073366882" protocol=ttrpc version=3 Sep 11 00:17:21.408970 containerd[1588]: time="2025-09-11T00:17:21.408913316Z" level=info msg="CreateContainer within sandbox \"f488aaf9a3b8a31059cc988022e4de9001c9f500ccde82bf151844a7f9ef08ee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:17:21.412828 containerd[1588]: time="2025-09-11T00:17:21.412768932Z" level=info msg="Container 4f8435d9e8595c560bcc8dae39f28ed86dd59651bfd9af1fd54e8b6dad0fa4b0: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:21.423400 containerd[1588]: time="2025-09-11T00:17:21.423336438Z" level=info msg="Container 48a7987a0c2185fea2df0affaa0788487b19f317a60e551123e552a7203ca878: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:21.429103 containerd[1588]: time="2025-09-11T00:17:21.429061202Z" level=info msg="CreateContainer within sandbox \"5502afa055c4203c5adfed2a22d5c644de53940ad93bbf901c45986042900fb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4f8435d9e8595c560bcc8dae39f28ed86dd59651bfd9af1fd54e8b6dad0fa4b0\"" Sep 11 00:17:21.429686 containerd[1588]: time="2025-09-11T00:17:21.429646657Z" level=info msg="StartContainer for \"4f8435d9e8595c560bcc8dae39f28ed86dd59651bfd9af1fd54e8b6dad0fa4b0\"" Sep 11 00:17:21.431224 containerd[1588]: time="2025-09-11T00:17:21.431159651Z" level=info msg="connecting to shim 4f8435d9e8595c560bcc8dae39f28ed86dd59651bfd9af1fd54e8b6dad0fa4b0" 
address="unix:///run/containerd/s/f17da67b05ba0bd7bd4b16b63d114f096e321921fcfdc51bc69850711c1b8d75" protocol=ttrpc version=3 Sep 11 00:17:21.432427 systemd[1]: Started cri-containerd-6e01041505e80449cf860aa15933a4d45ff065bd1b1dae3375f80c9efdc452e5.scope - libcontainer container 6e01041505e80449cf860aa15933a4d45ff065bd1b1dae3375f80c9efdc452e5. Sep 11 00:17:21.435239 containerd[1588]: time="2025-09-11T00:17:21.434714803Z" level=info msg="CreateContainer within sandbox \"f488aaf9a3b8a31059cc988022e4de9001c9f500ccde82bf151844a7f9ef08ee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"48a7987a0c2185fea2df0affaa0788487b19f317a60e551123e552a7203ca878\"" Sep 11 00:17:21.435239 containerd[1588]: time="2025-09-11T00:17:21.435233155Z" level=info msg="StartContainer for \"48a7987a0c2185fea2df0affaa0788487b19f317a60e551123e552a7203ca878\"" Sep 11 00:17:21.436304 containerd[1588]: time="2025-09-11T00:17:21.436274387Z" level=info msg="connecting to shim 48a7987a0c2185fea2df0affaa0788487b19f317a60e551123e552a7203ca878" address="unix:///run/containerd/s/ac029a0b64280505a375ed8bf7ba2f0b0db7ab146f136239f03a8a7269fd478d" protocol=ttrpc version=3 Sep 11 00:17:21.462552 systemd[1]: Started cri-containerd-4f8435d9e8595c560bcc8dae39f28ed86dd59651bfd9af1fd54e8b6dad0fa4b0.scope - libcontainer container 4f8435d9e8595c560bcc8dae39f28ed86dd59651bfd9af1fd54e8b6dad0fa4b0. Sep 11 00:17:21.486522 systemd[1]: Started cri-containerd-48a7987a0c2185fea2df0affaa0788487b19f317a60e551123e552a7203ca878.scope - libcontainer container 48a7987a0c2185fea2df0affaa0788487b19f317a60e551123e552a7203ca878. 
Sep 11 00:17:21.537666 containerd[1588]: time="2025-09-11T00:17:21.537617723Z" level=info msg="StartContainer for \"6e01041505e80449cf860aa15933a4d45ff065bd1b1dae3375f80c9efdc452e5\" returns successfully"
Sep 11 00:17:21.558271 containerd[1588]: time="2025-09-11T00:17:21.558080615Z" level=info msg="StartContainer for \"4f8435d9e8595c560bcc8dae39f28ed86dd59651bfd9af1fd54e8b6dad0fa4b0\" returns successfully"
Sep 11 00:17:21.578735 containerd[1588]: time="2025-09-11T00:17:21.578678726Z" level=info msg="StartContainer for \"48a7987a0c2185fea2df0affaa0788487b19f317a60e551123e552a7203ca878\" returns successfully"
Sep 11 00:17:21.947771 kubelet[2381]: E0911 00:17:21.947718 2381 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 11 00:17:21.948222 kubelet[2381]: E0911 00:17:21.947873 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:21.950501 kubelet[2381]: E0911 00:17:21.950370 2381 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 11 00:17:21.950776 kubelet[2381]: E0911 00:17:21.950756 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:21.953756 kubelet[2381]: E0911 00:17:21.953572 2381 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 11 00:17:21.953868 kubelet[2381]: E0911 00:17:21.953853 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:22.765551 kubelet[2381]: I0911 00:17:22.765504 2381 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 11 00:17:22.894145 kubelet[2381]: I0911 00:17:22.894080 2381 apiserver.go:52] "Watching apiserver"
Sep 11 00:17:22.904452 kubelet[2381]: I0911 00:17:22.904354 2381 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 11 00:17:22.934365 kubelet[2381]: I0911 00:17:22.934308 2381 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 11 00:17:22.934642 kubelet[2381]: E0911 00:17:22.934452 2381 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 11 00:17:22.956267 kubelet[2381]: I0911 00:17:22.956214 2381 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 11 00:17:22.957253 kubelet[2381]: I0911 00:17:22.956887 2381 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:17:22.968564 kubelet[2381]: E0911 00:17:22.968482 2381 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:17:22.968744 kubelet[2381]: E0911 00:17:22.968706 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:22.968852 kubelet[2381]: E0911 00:17:22.968825 2381 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 11 00:17:22.969660 kubelet[2381]: E0911 00:17:22.969474 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:23.004674 kubelet[2381]: I0911 00:17:23.004599 2381 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 11 00:17:23.007484 kubelet[2381]: E0911 00:17:23.007445 2381 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 11 00:17:23.007484 kubelet[2381]: I0911 00:17:23.007486 2381 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:17:23.009085 kubelet[2381]: E0911 00:17:23.009041 2381 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:17:23.009085 kubelet[2381]: I0911 00:17:23.009066 2381 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:17:23.020419 kubelet[2381]: E0911 00:17:23.020275 2381 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:17:27.437934 kubelet[2381]: I0911 00:17:27.437867 2381 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:17:27.449231 kubelet[2381]: E0911 00:17:27.449161 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:27.951682 systemd[1]: Reload requested from client PID 2679 ('systemctl') (unit session-7.scope)...
Sep 11 00:17:27.951704 systemd[1]: Reloading...
Sep 11 00:17:27.966786 kubelet[2381]: E0911 00:17:27.966732 2381 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:28.076371 zram_generator::config[2723]: No configuration found.
Sep 11 00:17:28.403842 systemd[1]: Reloading finished in 451 ms.
Sep 11 00:17:28.439047 kubelet[2381]: I0911 00:17:28.438942 2381 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 00:17:28.439155 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:17:28.462290 systemd[1]: kubelet.service: Deactivated successfully.
Sep 11 00:17:28.462720 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:17:28.462797 systemd[1]: kubelet.service: Consumed 1.402s CPU time, 132.8M memory peak.
Sep 11 00:17:28.465404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:17:28.726553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:17:28.746821 (kubelet)[2767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 11 00:17:28.803305 kubelet[2767]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:17:28.803305 kubelet[2767]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 11 00:17:28.803305 kubelet[2767]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:17:28.803845 kubelet[2767]: I0911 00:17:28.803336 2767 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 00:17:28.810827 kubelet[2767]: I0911 00:17:28.810776 2767 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 11 00:17:28.810827 kubelet[2767]: I0911 00:17:28.810808 2767 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 00:17:28.811154 kubelet[2767]: I0911 00:17:28.811125 2767 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 11 00:17:28.812617 kubelet[2767]: I0911 00:17:28.812582 2767 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 11 00:17:28.815252 kubelet[2767]: I0911 00:17:28.815047 2767 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 00:17:28.820791 kubelet[2767]: I0911 00:17:28.820763 2767 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 11 00:17:28.827049 kubelet[2767]: I0911 00:17:28.827020 2767 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 11 00:17:28.827383 kubelet[2767]: I0911 00:17:28.827338 2767 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 00:17:28.827615 kubelet[2767]: I0911 00:17:28.827376 2767 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 11 00:17:28.827773 kubelet[2767]: I0911 00:17:28.827630 2767 topology_manager.go:138] "Creating topology manager with none policy"
Sep 11 00:17:28.827773 kubelet[2767]: I0911 00:17:28.827644 2767 container_manager_linux.go:304] "Creating device plugin manager"
Sep 11 00:17:28.827773 kubelet[2767]: I0911 00:17:28.827713 2767 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:17:28.827935 kubelet[2767]: I0911 00:17:28.827916 2767 kubelet.go:446] "Attempting to sync node with API server"
Sep 11 00:17:28.827981 kubelet[2767]: I0911 00:17:28.827945 2767 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 11 00:17:28.827981 kubelet[2767]: I0911 00:17:28.827979 2767 kubelet.go:352] "Adding apiserver pod source"
Sep 11 00:17:28.828060 kubelet[2767]: I0911 00:17:28.827994 2767 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 00:17:28.830233 kubelet[2767]: I0911 00:17:28.830186 2767 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 11 00:17:28.830661 kubelet[2767]: I0911 00:17:28.830642 2767 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 11 00:17:28.831339 kubelet[2767]: I0911 00:17:28.831319 2767 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 11 00:17:28.831401 kubelet[2767]: I0911 00:17:28.831360 2767 server.go:1287] "Started kubelet"
Sep 11 00:17:28.832698 kubelet[2767]: I0911 00:17:28.832631 2767 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 11 00:17:28.832968 kubelet[2767]: I0911 00:17:28.832950 2767 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 11 00:17:28.833047 kubelet[2767]: I0911 00:17:28.833003 2767 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 11 00:17:28.834225 kubelet[2767]: I0911 00:17:28.834189 2767 server.go:479] "Adding debug handlers to kubelet server"
Sep 11 00:17:28.834811 kubelet[2767]: I0911 00:17:28.834779 2767 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 11 00:17:28.834878 kubelet[2767]: I0911 00:17:28.834820 2767 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 11 00:17:28.838470 kubelet[2767]: E0911 00:17:28.838427 2767 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 11 00:17:28.839300 kubelet[2767]: I0911 00:17:28.839260 2767 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 11 00:17:28.839461 kubelet[2767]: I0911 00:17:28.839441 2767 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 11 00:17:28.839710 kubelet[2767]: I0911 00:17:28.839681 2767 reconciler.go:26] "Reconciler: start to sync state"
Sep 11 00:17:28.842788 kubelet[2767]: I0911 00:17:28.842736 2767 factory.go:221] Registration of the systemd container factory successfully
Sep 11 00:17:28.843020 kubelet[2767]: I0911 00:17:28.842959 2767 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 11 00:17:28.847179 kubelet[2767]: E0911 00:17:28.847135 2767 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 11 00:17:28.847739 kubelet[2767]: I0911 00:17:28.847699 2767 factory.go:221] Registration of the containerd container factory successfully
Sep 11 00:17:28.853903 kubelet[2767]: I0911 00:17:28.853816 2767 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 11 00:17:28.855428 kubelet[2767]: I0911 00:17:28.855398 2767 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 11 00:17:28.855498 kubelet[2767]: I0911 00:17:28.855441 2767 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 11 00:17:28.855498 kubelet[2767]: I0911 00:17:28.855466 2767 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 11 00:17:28.855498 kubelet[2767]: I0911 00:17:28.855473 2767 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 11 00:17:28.855603 kubelet[2767]: E0911 00:17:28.855544 2767 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 11 00:17:28.881598 kubelet[2767]: I0911 00:17:28.881558 2767 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 11 00:17:28.881598 kubelet[2767]: I0911 00:17:28.881584 2767 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 11 00:17:28.881598 kubelet[2767]: I0911 00:17:28.881606 2767 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:17:28.882129 kubelet[2767]: I0911 00:17:28.882101 2767 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 11 00:17:28.882129 kubelet[2767]: I0911 00:17:28.882118 2767 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 11 00:17:28.882190 kubelet[2767]: I0911 00:17:28.882138 2767 policy_none.go:49] "None policy: Start"
Sep 11 00:17:28.882190 kubelet[2767]: I0911 00:17:28.882147 2767 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 11 00:17:28.882190 kubelet[2767]: I0911 00:17:28.882158 2767 state_mem.go:35] "Initializing new in-memory state store"
Sep 11 00:17:28.882281 kubelet[2767]: I0911 00:17:28.882274 2767 state_mem.go:75] "Updated machine memory state"
Sep 11 00:17:28.886437 kubelet[2767]: I0911 00:17:28.886398 2767 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 11 00:17:28.886647 kubelet[2767]: I0911 00:17:28.886624 2767 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 11 00:17:28.886684 kubelet[2767]: I0911 00:17:28.886642 2767 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 11 00:17:28.887173 kubelet[2767]: I0911 00:17:28.887150 2767 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 11 00:17:28.888191 kubelet[2767]: E0911 00:17:28.888143 2767 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 11 00:17:28.956452 kubelet[2767]: I0911 00:17:28.956399 2767 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:17:28.956452 kubelet[2767]: I0911 00:17:28.956448 2767 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 11 00:17:28.957052 kubelet[2767]: I0911 00:17:28.956399 2767 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:17:28.992221 kubelet[2767]: I0911 00:17:28.992169 2767 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 11 00:17:29.141238 kubelet[2767]: I0911 00:17:29.141104 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:17:29.141238 kubelet[2767]: I0911 00:17:29.141188 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 11 00:17:29.141500 kubelet[2767]: I0911 00:17:29.141266 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97bf372134c3c5d28d4c32e8e2823ef2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"97bf372134c3c5d28d4c32e8e2823ef2\") " pod="kube-system/kube-apiserver-localhost"
Sep 11 00:17:29.141500 kubelet[2767]: I0911 00:17:29.141293 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97bf372134c3c5d28d4c32e8e2823ef2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"97bf372134c3c5d28d4c32e8e2823ef2\") " pod="kube-system/kube-apiserver-localhost"
Sep 11 00:17:29.141500 kubelet[2767]: I0911 00:17:29.141318 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:17:29.141500 kubelet[2767]: I0911 00:17:29.141343 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:17:29.141500 kubelet[2767]: I0911 00:17:29.141419 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97bf372134c3c5d28d4c32e8e2823ef2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"97bf372134c3c5d28d4c32e8e2823ef2\") " pod="kube-system/kube-apiserver-localhost"
Sep 11 00:17:29.141661 kubelet[2767]: I0911 00:17:29.141484 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:17:29.141661 kubelet[2767]: I0911 00:17:29.141512 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:17:29.458644 kubelet[2767]: E0911 00:17:29.458481 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:29.458644 kubelet[2767]: E0911 00:17:29.458501 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:29.828689 kubelet[2767]: I0911 00:17:29.828574 2767 apiserver.go:52] "Watching apiserver"
Sep 11 00:17:29.840125 kubelet[2767]: I0911 00:17:29.840044 2767 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 11 00:17:29.842292 kubelet[2767]: E0911 00:17:29.842057 2767 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:17:29.842414 kubelet[2767]: E0911 00:17:29.842301 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:29.869943 kubelet[2767]: E0911 00:17:29.869890 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:29.870104 kubelet[2767]: E0911 00:17:29.870016 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:29.870143 kubelet[2767]: E0911 00:17:29.870112 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:30.197434 kubelet[2767]: I0911 00:17:30.196756 2767 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 11 00:17:30.197434 kubelet[2767]: I0911 00:17:30.196914 2767 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 11 00:17:30.256507 kubelet[2767]: I0911 00:17:30.256425 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.256400323 podStartE2EDuration="3.256400323s" podCreationTimestamp="2025-09-11 00:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:17:30.252749445 +0000 UTC m=+1.500995020" watchObservedRunningTime="2025-09-11 00:17:30.256400323 +0000 UTC m=+1.504645908"
Sep 11 00:17:30.672291 kubelet[2767]: I0911 00:17:30.671026 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.671000459 podStartE2EDuration="2.671000459s" podCreationTimestamp="2025-09-11 00:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:17:30.597491207 +0000 UTC m=+1.845736792" watchObservedRunningTime="2025-09-11 00:17:30.671000459 +0000 UTC m=+1.919246044"
Sep 11 00:17:30.871977 kubelet[2767]: E0911 00:17:30.871935 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:30.872583 kubelet[2767]: E0911 00:17:30.872074 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:31.253164 kubelet[2767]: I0911 00:17:31.252916 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.252891859 podStartE2EDuration="3.252891859s" podCreationTimestamp="2025-09-11 00:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:17:30.671142731 +0000 UTC m=+1.919388326" watchObservedRunningTime="2025-09-11 00:17:31.252891859 +0000 UTC m=+2.501137444"
Sep 11 00:17:32.614771 kubelet[2767]: E0911 00:17:32.614571 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:32.876938 kubelet[2767]: E0911 00:17:32.875945 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:33.930413 kubelet[2767]: E0911 00:17:33.930354 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:34.818312 kubelet[2767]: I0911 00:17:34.818242 2767 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 11 00:17:34.818594 containerd[1588]: time="2025-09-11T00:17:34.818554444Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 11 00:17:34.818938 kubelet[2767]: I0911 00:17:34.818767 2767 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 11 00:17:34.880187 kubelet[2767]: E0911 00:17:34.880132 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:34.974867 systemd[1]: Created slice kubepods-besteffort-pode737b146_5e61_43f5_a2bf_7245477e7595.slice - libcontainer container kubepods-besteffort-pode737b146_5e61_43f5_a2bf_7245477e7595.slice.
Sep 11 00:17:34.978890 kubelet[2767]: I0911 00:17:34.978828 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e737b146-5e61-43f5-a2bf-7245477e7595-kube-proxy\") pod \"kube-proxy-5x8z7\" (UID: \"e737b146-5e61-43f5-a2bf-7245477e7595\") " pod="kube-system/kube-proxy-5x8z7"
Sep 11 00:17:34.979500 kubelet[2767]: I0911 00:17:34.979465 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e737b146-5e61-43f5-a2bf-7245477e7595-lib-modules\") pod \"kube-proxy-5x8z7\" (UID: \"e737b146-5e61-43f5-a2bf-7245477e7595\") " pod="kube-system/kube-proxy-5x8z7"
Sep 11 00:17:34.979554 kubelet[2767]: I0911 00:17:34.979506 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e737b146-5e61-43f5-a2bf-7245477e7595-xtables-lock\") pod \"kube-proxy-5x8z7\" (UID: \"e737b146-5e61-43f5-a2bf-7245477e7595\") " pod="kube-system/kube-proxy-5x8z7"
Sep 11 00:17:34.979554 kubelet[2767]: I0911 00:17:34.979535 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtkvq\" (UniqueName: \"kubernetes.io/projected/e737b146-5e61-43f5-a2bf-7245477e7595-kube-api-access-mtkvq\") pod \"kube-proxy-5x8z7\" (UID: \"e737b146-5e61-43f5-a2bf-7245477e7595\") " pod="kube-system/kube-proxy-5x8z7"
Sep 11 00:17:35.099705 kubelet[2767]: E0911 00:17:35.099154 2767 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 11 00:17:35.099705 kubelet[2767]: E0911 00:17:35.099215 2767 projected.go:194] Error preparing data for projected volume kube-api-access-mtkvq for pod kube-system/kube-proxy-5x8z7: configmap "kube-root-ca.crt" not found
Sep 11 00:17:35.100864 kubelet[2767]: E0911 00:17:35.100134 2767 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e737b146-5e61-43f5-a2bf-7245477e7595-kube-api-access-mtkvq podName:e737b146-5e61-43f5-a2bf-7245477e7595 nodeName:}" failed. No retries permitted until 2025-09-11 00:17:35.600085914 +0000 UTC m=+6.848331499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mtkvq" (UniqueName: "kubernetes.io/projected/e737b146-5e61-43f5-a2bf-7245477e7595-kube-api-access-mtkvq") pod "kube-proxy-5x8z7" (UID: "e737b146-5e61-43f5-a2bf-7245477e7595") : configmap "kube-root-ca.crt" not found
Sep 11 00:17:35.882252 kubelet[2767]: E0911 00:17:35.882012 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:35.895417 kubelet[2767]: E0911 00:17:35.895378 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:35.896098 containerd[1588]: time="2025-09-11T00:17:35.896042057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5x8z7,Uid:e737b146-5e61-43f5-a2bf-7245477e7595,Namespace:kube-system,Attempt:0,}"
Sep 11 00:17:36.536932 containerd[1588]: time="2025-09-11T00:17:36.536865385Z" level=info msg="connecting to shim 0d964baa36bde2e21c150a485cd4ea7ea69cdb79ca9f2d67b56ff29fa2098898" address="unix:///run/containerd/s/7c90e221f2eeb3da2d81622f9db0d349fc2c0ba9267b5226cb9119936a980b55" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:17:36.568364 systemd[1]: Started cri-containerd-0d964baa36bde2e21c150a485cd4ea7ea69cdb79ca9f2d67b56ff29fa2098898.scope - libcontainer container 0d964baa36bde2e21c150a485cd4ea7ea69cdb79ca9f2d67b56ff29fa2098898.
Sep 11 00:17:36.624288 containerd[1588]: time="2025-09-11T00:17:36.624182990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5x8z7,Uid:e737b146-5e61-43f5-a2bf-7245477e7595,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d964baa36bde2e21c150a485cd4ea7ea69cdb79ca9f2d67b56ff29fa2098898\""
Sep 11 00:17:36.625262 kubelet[2767]: E0911 00:17:36.625188 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:36.628215 containerd[1588]: time="2025-09-11T00:17:36.628154074Z" level=info msg="CreateContainer within sandbox \"0d964baa36bde2e21c150a485cd4ea7ea69cdb79ca9f2d67b56ff29fa2098898\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 11 00:17:36.690702 systemd[1]: Created slice kubepods-besteffort-pod2c7b5eae_72c5_4471_9ceb_95397005d172.slice - libcontainer container kubepods-besteffort-pod2c7b5eae_72c5_4471_9ceb_95397005d172.slice.
Sep 11 00:17:36.691826 kubelet[2767]: I0911 00:17:36.691637 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c7b5eae-72c5-4471-9ceb-95397005d172-var-lib-calico\") pod \"tigera-operator-755d956888-l2vsl\" (UID: \"2c7b5eae-72c5-4471-9ceb-95397005d172\") " pod="tigera-operator/tigera-operator-755d956888-l2vsl"
Sep 11 00:17:36.691826 kubelet[2767]: I0911 00:17:36.691666 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crbm\" (UniqueName: \"kubernetes.io/projected/2c7b5eae-72c5-4471-9ceb-95397005d172-kube-api-access-7crbm\") pod \"tigera-operator-755d956888-l2vsl\" (UID: \"2c7b5eae-72c5-4471-9ceb-95397005d172\") " pod="tigera-operator/tigera-operator-755d956888-l2vsl"
Sep 11 00:17:36.797445 containerd[1588]: time="2025-09-11T00:17:36.797118595Z" level=info msg="Container f6abc8421560f3d806f992fa7726c51043bdbb4e026fd3c8706988841330c8ec: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:17:36.801700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3764080581.mount: Deactivated successfully.
Sep 11 00:17:36.994781 containerd[1588]: time="2025-09-11T00:17:36.994722952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-l2vsl,Uid:2c7b5eae-72c5-4471-9ceb-95397005d172,Namespace:tigera-operator,Attempt:0,}"
Sep 11 00:17:37.528757 containerd[1588]: time="2025-09-11T00:17:37.528707755Z" level=info msg="CreateContainer within sandbox \"0d964baa36bde2e21c150a485cd4ea7ea69cdb79ca9f2d67b56ff29fa2098898\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f6abc8421560f3d806f992fa7726c51043bdbb4e026fd3c8706988841330c8ec\""
Sep 11 00:17:37.529475 containerd[1588]: time="2025-09-11T00:17:37.529450772Z" level=info msg="StartContainer for \"f6abc8421560f3d806f992fa7726c51043bdbb4e026fd3c8706988841330c8ec\""
Sep 11 00:17:37.530909 containerd[1588]: time="2025-09-11T00:17:37.530865374Z" level=info msg="connecting to shim f6abc8421560f3d806f992fa7726c51043bdbb4e026fd3c8706988841330c8ec" address="unix:///run/containerd/s/7c90e221f2eeb3da2d81622f9db0d349fc2c0ba9267b5226cb9119936a980b55" protocol=ttrpc version=3
Sep 11 00:17:37.554416 systemd[1]: Started cri-containerd-f6abc8421560f3d806f992fa7726c51043bdbb4e026fd3c8706988841330c8ec.scope - libcontainer container f6abc8421560f3d806f992fa7726c51043bdbb4e026fd3c8706988841330c8ec.
Sep 11 00:17:37.720659 containerd[1588]: time="2025-09-11T00:17:37.720593390Z" level=info msg="StartContainer for \"f6abc8421560f3d806f992fa7726c51043bdbb4e026fd3c8706988841330c8ec\" returns successfully"
Sep 11 00:17:37.743952 kubelet[2767]: E0911 00:17:37.743905 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:37.888727 kubelet[2767]: E0911 00:17:37.888579 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:37.888727 kubelet[2767]: E0911 00:17:37.888633 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:38.036119 containerd[1588]: time="2025-09-11T00:17:38.036047913Z" level=info msg="connecting to shim 0f3c022b153b20dbf7d8f88fc0370b5a19d596923e39b7b13ea2b3174b2175c7" address="unix:///run/containerd/s/3c473698ca256cbcf31d07b47b023136194fb84581b878fe34119ef3053fe605" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:17:38.063728 systemd[1]: Started cri-containerd-0f3c022b153b20dbf7d8f88fc0370b5a19d596923e39b7b13ea2b3174b2175c7.scope - libcontainer container 0f3c022b153b20dbf7d8f88fc0370b5a19d596923e39b7b13ea2b3174b2175c7.
Sep 11 00:17:38.141383 kubelet[2767]: I0911 00:17:38.140857 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5x8z7" podStartSLOduration=4.140830275 podStartE2EDuration="4.140830275s" podCreationTimestamp="2025-09-11 00:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:17:38.140820766 +0000 UTC m=+9.389066351" watchObservedRunningTime="2025-09-11 00:17:38.140830275 +0000 UTC m=+9.389075860" Sep 11 00:17:38.190431 containerd[1588]: time="2025-09-11T00:17:38.190362730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-l2vsl,Uid:2c7b5eae-72c5-4471-9ceb-95397005d172,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0f3c022b153b20dbf7d8f88fc0370b5a19d596923e39b7b13ea2b3174b2175c7\"" Sep 11 00:17:38.192299 containerd[1588]: time="2025-09-11T00:17:38.192246210Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 11 00:17:38.891019 kubelet[2767]: E0911 00:17:38.890982 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:43.176727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount117088844.mount: Deactivated successfully. 
Sep 11 00:17:43.590436 containerd[1588]: time="2025-09-11T00:17:43.590357140Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:17:43.592355 containerd[1588]: time="2025-09-11T00:17:43.592310133Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 11 00:17:43.593934 containerd[1588]: time="2025-09-11T00:17:43.593881657Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:17:43.597369 containerd[1588]: time="2025-09-11T00:17:43.597321743Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:17:43.598233 containerd[1588]: time="2025-09-11T00:17:43.598094565Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 5.405799767s" Sep 11 00:17:43.598233 containerd[1588]: time="2025-09-11T00:17:43.598148843Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 11 00:17:43.602658 containerd[1588]: time="2025-09-11T00:17:43.602613161Z" level=info msg="CreateContainer within sandbox \"0f3c022b153b20dbf7d8f88fc0370b5a19d596923e39b7b13ea2b3174b2175c7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 11 00:17:43.625041 containerd[1588]: time="2025-09-11T00:17:43.624979507Z" level=info msg="Container 
8f1cbadde076e91a17e1906b61f378c1d72337aeeb70e7f6a5c2cb9eb1d896dd: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:17:43.633927 containerd[1588]: time="2025-09-11T00:17:43.633844019Z" level=info msg="CreateContainer within sandbox \"0f3c022b153b20dbf7d8f88fc0370b5a19d596923e39b7b13ea2b3174b2175c7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8f1cbadde076e91a17e1906b61f378c1d72337aeeb70e7f6a5c2cb9eb1d896dd\"" Sep 11 00:17:43.634511 containerd[1588]: time="2025-09-11T00:17:43.634470147Z" level=info msg="StartContainer for \"8f1cbadde076e91a17e1906b61f378c1d72337aeeb70e7f6a5c2cb9eb1d896dd\"" Sep 11 00:17:43.635444 containerd[1588]: time="2025-09-11T00:17:43.635416944Z" level=info msg="connecting to shim 8f1cbadde076e91a17e1906b61f378c1d72337aeeb70e7f6a5c2cb9eb1d896dd" address="unix:///run/containerd/s/3c473698ca256cbcf31d07b47b023136194fb84581b878fe34119ef3053fe605" protocol=ttrpc version=3 Sep 11 00:17:43.702507 systemd[1]: Started cri-containerd-8f1cbadde076e91a17e1906b61f378c1d72337aeeb70e7f6a5c2cb9eb1d896dd.scope - libcontainer container 8f1cbadde076e91a17e1906b61f378c1d72337aeeb70e7f6a5c2cb9eb1d896dd. Sep 11 00:17:43.745994 containerd[1588]: time="2025-09-11T00:17:43.745954794Z" level=info msg="StartContainer for \"8f1cbadde076e91a17e1906b61f378c1d72337aeeb70e7f6a5c2cb9eb1d896dd\" returns successfully" Sep 11 00:17:51.512311 sudo[1786]: pam_unix(sudo:session): session closed for user root Sep 11 00:17:51.514502 sshd[1785]: Connection closed by 10.0.0.1 port 42390 Sep 11 00:17:51.516111 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Sep 11 00:17:51.521930 systemd[1]: sshd@6-10.0.0.70:22-10.0.0.1:42390.service: Deactivated successfully. Sep 11 00:17:51.526778 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:17:51.527224 systemd[1]: session-7.scope: Consumed 5.850s CPU time, 229.4M memory peak. Sep 11 00:17:51.529974 systemd-logind[1564]: Session 7 logged out. 
Waiting for processes to exit. Sep 11 00:17:51.532595 systemd-logind[1564]: Removed session 7. Sep 11 00:17:55.471739 kubelet[2767]: I0911 00:17:55.471643 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-l2vsl" podStartSLOduration=14.064292371 podStartE2EDuration="19.471621308s" podCreationTimestamp="2025-09-11 00:17:36 +0000 UTC" firstStartedPulling="2025-09-11 00:17:38.191687775 +0000 UTC m=+9.439933360" lastFinishedPulling="2025-09-11 00:17:43.599016712 +0000 UTC m=+14.847262297" observedRunningTime="2025-09-11 00:17:43.92171567 +0000 UTC m=+15.169961255" watchObservedRunningTime="2025-09-11 00:17:55.471621308 +0000 UTC m=+26.719866893" Sep 11 00:17:55.489552 systemd[1]: Created slice kubepods-besteffort-poda3dd40b0_451a_4998_ae1c_ae43d5850464.slice - libcontainer container kubepods-besteffort-poda3dd40b0_451a_4998_ae1c_ae43d5850464.slice. Sep 11 00:17:55.512081 kubelet[2767]: I0911 00:17:55.512013 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a3dd40b0-451a-4998-ae1c-ae43d5850464-typha-certs\") pod \"calico-typha-58dcfbc5f9-lpnrj\" (UID: \"a3dd40b0-451a-4998-ae1c-ae43d5850464\") " pod="calico-system/calico-typha-58dcfbc5f9-lpnrj" Sep 11 00:17:55.512081 kubelet[2767]: I0911 00:17:55.512067 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp28f\" (UniqueName: \"kubernetes.io/projected/a3dd40b0-451a-4998-ae1c-ae43d5850464-kube-api-access-pp28f\") pod \"calico-typha-58dcfbc5f9-lpnrj\" (UID: \"a3dd40b0-451a-4998-ae1c-ae43d5850464\") " pod="calico-system/calico-typha-58dcfbc5f9-lpnrj" Sep 11 00:17:55.512081 kubelet[2767]: I0911 00:17:55.512091 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a3dd40b0-451a-4998-ae1c-ae43d5850464-tigera-ca-bundle\") pod \"calico-typha-58dcfbc5f9-lpnrj\" (UID: \"a3dd40b0-451a-4998-ae1c-ae43d5850464\") " pod="calico-system/calico-typha-58dcfbc5f9-lpnrj" Sep 11 00:17:55.584436 systemd[1]: Created slice kubepods-besteffort-pod44cf6ecf_a2ab_4edc_ad94_73870a568d82.slice - libcontainer container kubepods-besteffort-pod44cf6ecf_a2ab_4edc_ad94_73870a568d82.slice. Sep 11 00:17:55.613431 kubelet[2767]: I0911 00:17:55.613356 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf8pv\" (UniqueName: \"kubernetes.io/projected/44cf6ecf-a2ab-4edc-ad94-73870a568d82-kube-api-access-hf8pv\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.613431 kubelet[2767]: I0911 00:17:55.613432 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44cf6ecf-a2ab-4edc-ad94-73870a568d82-lib-modules\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.613713 kubelet[2767]: I0911 00:17:55.613551 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/44cf6ecf-a2ab-4edc-ad94-73870a568d82-cni-log-dir\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.613713 kubelet[2767]: I0911 00:17:55.613615 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/44cf6ecf-a2ab-4edc-ad94-73870a568d82-flexvol-driver-host\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 
00:17:55.613713 kubelet[2767]: I0911 00:17:55.613646 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44cf6ecf-a2ab-4edc-ad94-73870a568d82-tigera-ca-bundle\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.613713 kubelet[2767]: I0911 00:17:55.613668 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/44cf6ecf-a2ab-4edc-ad94-73870a568d82-var-lib-calico\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.613967 kubelet[2767]: I0911 00:17:55.613723 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/44cf6ecf-a2ab-4edc-ad94-73870a568d82-cni-bin-dir\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.613967 kubelet[2767]: I0911 00:17:55.613749 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/44cf6ecf-a2ab-4edc-ad94-73870a568d82-node-certs\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.613967 kubelet[2767]: I0911 00:17:55.613778 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44cf6ecf-a2ab-4edc-ad94-73870a568d82-xtables-lock\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.613967 kubelet[2767]: I0911 00:17:55.613803 2767 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/44cf6ecf-a2ab-4edc-ad94-73870a568d82-var-run-calico\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.613967 kubelet[2767]: I0911 00:17:55.613860 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/44cf6ecf-a2ab-4edc-ad94-73870a568d82-policysync\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.614128 kubelet[2767]: I0911 00:17:55.613882 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/44cf6ecf-a2ab-4edc-ad94-73870a568d82-cni-net-dir\") pod \"calico-node-gzm44\" (UID: \"44cf6ecf-a2ab-4edc-ad94-73870a568d82\") " pod="calico-system/calico-node-gzm44" Sep 11 00:17:55.693073 kubelet[2767]: E0911 00:17:55.693000 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f" Sep 11 00:17:55.714957 kubelet[2767]: I0911 00:17:55.714807 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc809b78-c1d4-448d-9695-d5c095a31b8f-kubelet-dir\") pod \"csi-node-driver-l8vjl\" (UID: \"cc809b78-c1d4-448d-9695-d5c095a31b8f\") " pod="calico-system/csi-node-driver-l8vjl" Sep 11 00:17:55.714957 kubelet[2767]: I0911 00:17:55.714878 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cc809b78-c1d4-448d-9695-d5c095a31b8f-socket-dir\") pod \"csi-node-driver-l8vjl\" (UID: \"cc809b78-c1d4-448d-9695-d5c095a31b8f\") " pod="calico-system/csi-node-driver-l8vjl" Sep 11 00:17:55.714957 kubelet[2767]: I0911 00:17:55.714899 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbkb\" (UniqueName: \"kubernetes.io/projected/cc809b78-c1d4-448d-9695-d5c095a31b8f-kube-api-access-zkbkb\") pod \"csi-node-driver-l8vjl\" (UID: \"cc809b78-c1d4-448d-9695-d5c095a31b8f\") " pod="calico-system/csi-node-driver-l8vjl" Sep 11 00:17:55.715692 kubelet[2767]: I0911 00:17:55.715614 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cc809b78-c1d4-448d-9695-d5c095a31b8f-registration-dir\") pod \"csi-node-driver-l8vjl\" (UID: \"cc809b78-c1d4-448d-9695-d5c095a31b8f\") " pod="calico-system/csi-node-driver-l8vjl" Sep 11 00:17:55.715840 kubelet[2767]: I0911 00:17:55.715817 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cc809b78-c1d4-448d-9695-d5c095a31b8f-varrun\") pod \"csi-node-driver-l8vjl\" (UID: \"cc809b78-c1d4-448d-9695-d5c095a31b8f\") " pod="calico-system/csi-node-driver-l8vjl" Sep 11 00:17:55.719941 kubelet[2767]: E0911 00:17:55.719892 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:17:55.719941 kubelet[2767]: W0911 00:17:55.719917 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:17:55.719941 kubelet[2767]: E0911 00:17:55.719954 2767 plugins.go:695] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:17:55.727449 kubelet[2767]: E0911 00:17:55.727340 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:17:55.727449 kubelet[2767]: W0911 00:17:55.727360 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:17:55.727449 kubelet[2767]: E0911 00:17:55.727380 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:17:55.794907 kubelet[2767]: E0911 00:17:55.794849 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:17:55.795710 containerd[1588]: time="2025-09-11T00:17:55.795635654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58dcfbc5f9-lpnrj,Uid:a3dd40b0-451a-4998-ae1c-ae43d5850464,Namespace:calico-system,Attempt:0,}" Sep 11 00:17:55.817418 kubelet[2767]: E0911 00:17:55.817371 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:17:55.817418 kubelet[2767]: W0911 00:17:55.817402 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:17:55.817418 kubelet[2767]: E0911 00:17:55.817425 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:17:55.858833 containerd[1588]: time="2025-09-11T00:17:55.858575001Z" level=info msg="connecting to shim 44d96462f20b6194cbfad9744519a94788e6d9338f42a06f98444206ded18004" address="unix:///run/containerd/s/fe3f0e89f90fd11cf560a95b7497421bd348f81cd87f53977470d83780321e00" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:17:55.891224 containerd[1588]: time="2025-09-11T00:17:55.891122500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gzm44,Uid:44cf6ecf-a2ab-4edc-ad94-73870a568d82,Namespace:calico-system,Attempt:0,}" Sep 11 00:17:55.894581 systemd[1]: Started cri-containerd-44d96462f20b6194cbfad9744519a94788e6d9338f42a06f98444206ded18004.scope - libcontainer container 44d96462f20b6194cbfad9744519a94788e6d9338f42a06f98444206ded18004.
Sep 11 00:17:55.931882 containerd[1588]: time="2025-09-11T00:17:55.931807609Z" level=info msg="connecting to shim 619560ef22101692b0756eeef5adea3bba6a3b34cabe8a349f3062574e0536ae" address="unix:///run/containerd/s/9bc551c33c828fe127470dbd8406362da4bc12c97b7f400493a5434b8501b439" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:17:55.970847 systemd[1]: Started cri-containerd-619560ef22101692b0756eeef5adea3bba6a3b34cabe8a349f3062574e0536ae.scope - libcontainer container 619560ef22101692b0756eeef5adea3bba6a3b34cabe8a349f3062574e0536ae.
Sep 11 00:17:55.975308 containerd[1588]: time="2025-09-11T00:17:55.975232000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58dcfbc5f9-lpnrj,Uid:a3dd40b0-451a-4998-ae1c-ae43d5850464,Namespace:calico-system,Attempt:0,} returns sandbox id \"44d96462f20b6194cbfad9744519a94788e6d9338f42a06f98444206ded18004\""
Sep 11 00:17:55.976617 kubelet[2767]: E0911 00:17:55.976317 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:17:55.977836 containerd[1588]: time="2025-09-11T00:17:55.977720837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 11 00:17:56.014610 containerd[1588]: time="2025-09-11T00:17:56.014555062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gzm44,Uid:44cf6ecf-a2ab-4edc-ad94-73870a568d82,Namespace:calico-system,Attempt:0,} returns sandbox id \"619560ef22101692b0756eeef5adea3bba6a3b34cabe8a349f3062574e0536ae\""
Sep 11 00:17:57.856432 kubelet[2767]: E0911 00:17:57.856352 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f"
Sep 11 00:17:59.438452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222695201.mount: Deactivated successfully.
Sep 11 00:17:59.855953 kubelet[2767]: E0911 00:17:59.855863 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f"
Sep 11 00:18:01.025809 containerd[1588]: time="2025-09-11T00:18:01.025546161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:18:01.049652 containerd[1588]: time="2025-09-11T00:18:01.049515061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 11 00:18:01.075322 containerd[1588]: time="2025-09-11T00:18:01.075251256Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:18:01.091234 containerd[1588]: time="2025-09-11T00:18:01.091160345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:18:01.091890 containerd[1588]: time="2025-09-11T00:18:01.091855939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 5.11409872s"
Sep 11 00:18:01.091959 containerd[1588]: time="2025-09-11T00:18:01.091893722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 11 00:18:01.093005 containerd[1588]: time="2025-09-11T00:18:01.092976956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 11 00:18:01.138935 containerd[1588]: time="2025-09-11T00:18:01.138080720Z" level=info msg="CreateContainer within sandbox \"44d96462f20b6194cbfad9744519a94788e6d9338f42a06f98444206ded18004\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 11 00:18:01.252250 containerd[1588]: time="2025-09-11T00:18:01.252147372Z" level=info msg="Container a7867666a682f05504c4f81012050cc77b523c50261f39c740d2f5432d5ef5ec: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:18:01.274333 containerd[1588]: time="2025-09-11T00:18:01.274268499Z" level=info msg="CreateContainer within sandbox \"44d96462f20b6194cbfad9744519a94788e6d9338f42a06f98444206ded18004\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a7867666a682f05504c4f81012050cc77b523c50261f39c740d2f5432d5ef5ec\""
Sep 11 00:18:01.275417 containerd[1588]: time="2025-09-11T00:18:01.275362985Z" level=info msg="StartContainer for \"a7867666a682f05504c4f81012050cc77b523c50261f39c740d2f5432d5ef5ec\""
Sep 11 00:18:01.277120 containerd[1588]: time="2025-09-11T00:18:01.276959965Z" level=info msg="connecting to shim a7867666a682f05504c4f81012050cc77b523c50261f39c740d2f5432d5ef5ec" address="unix:///run/containerd/s/fe3f0e89f90fd11cf560a95b7497421bd348f81cd87f53977470d83780321e00" protocol=ttrpc version=3
Sep 11 00:18:01.307441 systemd[1]: Started cri-containerd-a7867666a682f05504c4f81012050cc77b523c50261f39c740d2f5432d5ef5ec.scope - libcontainer container a7867666a682f05504c4f81012050cc77b523c50261f39c740d2f5432d5ef5ec.
Sep 11 00:18:01.378887 containerd[1588]: time="2025-09-11T00:18:01.378807453Z" level=info msg="StartContainer for \"a7867666a682f05504c4f81012050cc77b523c50261f39c740d2f5432d5ef5ec\" returns successfully"
Sep 11 00:18:01.856169 kubelet[2767]: E0911 00:18:01.856097 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f"
Sep 11 00:18:01.948979 kubelet[2767]: E0911 00:18:01.948926 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:02.041947 kubelet[2767]: E0911 00:18:02.041631 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.041947 kubelet[2767]: W0911 00:18:02.041725 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.041947 kubelet[2767]: E0911 00:18:02.041760 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.042349 kubelet[2767]: E0911 00:18:02.042027 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.042349 kubelet[2767]: W0911 00:18:02.042040 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.042349 kubelet[2767]: E0911 00:18:02.042052 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.042700 kubelet[2767]: E0911 00:18:02.042663 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.042700 kubelet[2767]: W0911 00:18:02.042679 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.042700 kubelet[2767]: E0911 00:18:02.042692 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.042980 kubelet[2767]: E0911 00:18:02.042962 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.042980 kubelet[2767]: W0911 00:18:02.042976 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.043081 kubelet[2767]: E0911 00:18:02.042987 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.043581 kubelet[2767]: E0911 00:18:02.043563 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.043581 kubelet[2767]: W0911 00:18:02.043579 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.043702 kubelet[2767]: E0911 00:18:02.043592 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.044086 kubelet[2767]: E0911 00:18:02.044061 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.044086 kubelet[2767]: W0911 00:18:02.044077 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.044194 kubelet[2767]: E0911 00:18:02.044089 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.044696 kubelet[2767]: E0911 00:18:02.044675 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.044696 kubelet[2767]: W0911 00:18:02.044694 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.044782 kubelet[2767]: E0911 00:18:02.044709 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.045646 kubelet[2767]: E0911 00:18:02.045582 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.045646 kubelet[2767]: W0911 00:18:02.045597 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.045646 kubelet[2767]: E0911 00:18:02.045609 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.046367 kubelet[2767]: E0911 00:18:02.045836 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.046367 kubelet[2767]: W0911 00:18:02.045860 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.046367 kubelet[2767]: E0911 00:18:02.045872 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.046367 kubelet[2767]: E0911 00:18:02.046114 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.046367 kubelet[2767]: W0911 00:18:02.046127 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.046367 kubelet[2767]: E0911 00:18:02.046139 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.046560 kubelet[2767]: E0911 00:18:02.046405 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.046560 kubelet[2767]: W0911 00:18:02.046417 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.046560 kubelet[2767]: E0911 00:18:02.046429 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.047480 kubelet[2767]: E0911 00:18:02.047456 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.047568 kubelet[2767]: W0911 00:18:02.047502 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.047568 kubelet[2767]: E0911 00:18:02.047519 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.047783 kubelet[2767]: E0911 00:18:02.047753 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.047783 kubelet[2767]: W0911 00:18:02.047774 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.047895 kubelet[2767]: E0911 00:18:02.047791 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.048066 kubelet[2767]: E0911 00:18:02.048044 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.048066 kubelet[2767]: W0911 00:18:02.048058 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.048159 kubelet[2767]: E0911 00:18:02.048069 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.048376 kubelet[2767]: E0911 00:18:02.048294 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.048376 kubelet[2767]: W0911 00:18:02.048311 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.048376 kubelet[2767]: E0911 00:18:02.048322 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.062490 kubelet[2767]: E0911 00:18:02.062429 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.062490 kubelet[2767]: W0911 00:18:02.062468 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.062755 kubelet[2767]: E0911 00:18:02.062525 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.063090 kubelet[2767]: E0911 00:18:02.062980 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.063090 kubelet[2767]: W0911 00:18:02.063005 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.063090 kubelet[2767]: E0911 00:18:02.063022 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.063585 kubelet[2767]: E0911 00:18:02.063480 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.063585 kubelet[2767]: W0911 00:18:02.063502 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.063585 kubelet[2767]: E0911 00:18:02.063525 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.064174 kubelet[2767]: E0911 00:18:02.064051 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.064174 kubelet[2767]: W0911 00:18:02.064077 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.064174 kubelet[2767]: E0911 00:18:02.064095 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.064466 kubelet[2767]: E0911 00:18:02.064370 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.064466 kubelet[2767]: W0911 00:18:02.064391 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.064466 kubelet[2767]: E0911 00:18:02.064416 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.064675 kubelet[2767]: E0911 00:18:02.064652 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.064675 kubelet[2767]: W0911 00:18:02.064670 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.064745 kubelet[2767]: E0911 00:18:02.064690 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.064945 kubelet[2767]: E0911 00:18:02.064922 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.064945 kubelet[2767]: W0911 00:18:02.064938 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.065052 kubelet[2767]: E0911 00:18:02.065012 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.065192 kubelet[2767]: E0911 00:18:02.065174 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.065192 kubelet[2767]: W0911 00:18:02.065188 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.065281 kubelet[2767]: E0911 00:18:02.065240 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.065653 kubelet[2767]: E0911 00:18:02.065404 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.065653 kubelet[2767]: W0911 00:18:02.065426 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.065653 kubelet[2767]: E0911 00:18:02.065470 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.065807 kubelet[2767]: E0911 00:18:02.065778 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.065807 kubelet[2767]: W0911 00:18:02.065795 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.065902 kubelet[2767]: E0911 00:18:02.065810 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.066143 kubelet[2767]: E0911 00:18:02.066086 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.066143 kubelet[2767]: W0911 00:18:02.066102 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.066143 kubelet[2767]: E0911 00:18:02.066124 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.066712 kubelet[2767]: E0911 00:18:02.066689 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.066712 kubelet[2767]: W0911 00:18:02.066705 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.066804 kubelet[2767]: E0911 00:18:02.066725 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.066997 kubelet[2767]: E0911 00:18:02.066979 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.066997 kubelet[2767]: W0911 00:18:02.066993 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.067099 kubelet[2767]: E0911 00:18:02.067072 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.067269 kubelet[2767]: E0911 00:18:02.067232 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.067269 kubelet[2767]: W0911 00:18:02.067257 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.068015 kubelet[2767]: E0911 00:18:02.067521 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.068015 kubelet[2767]: W0911 00:18:02.067536 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.068015 kubelet[2767]: E0911 00:18:02.067547 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.068015 kubelet[2767]: E0911 00:18:02.067576 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.068015 kubelet[2767]: E0911 00:18:02.067764 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.068015 kubelet[2767]: W0911 00:18:02.067772 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.068015 kubelet[2767]: E0911 00:18:02.067799 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.068015 kubelet[2767]: E0911 00:18:02.068015 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.068340 kubelet[2767]: W0911 00:18:02.068029 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.068340 kubelet[2767]: E0911 00:18:02.068041 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.068340 kubelet[2767]: E0911 00:18:02.068260 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.068340 kubelet[2767]: W0911 00:18:02.068271 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.068340 kubelet[2767]: E0911 00:18:02.068282 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.699150 containerd[1588]: time="2025-09-11T00:18:02.699062092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:18:02.733310 containerd[1588]: time="2025-09-11T00:18:02.733220355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 11 00:18:02.768076 containerd[1588]: time="2025-09-11T00:18:02.767995327Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:18:02.839161 containerd[1588]: time="2025-09-11T00:18:02.839060951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:18:02.840174 containerd[1588]: time="2025-09-11T00:18:02.840125176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.747117611s"
Sep 11 00:18:02.840174 containerd[1588]: time="2025-09-11T00:18:02.840175766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 11 00:18:02.843188 containerd[1588]: time="2025-09-11T00:18:02.843125054Z" level=info msg="CreateContainer within sandbox \"619560ef22101692b0756eeef5adea3bba6a3b34cabe8a349f3062574e0536ae\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 11 00:18:02.951566 kubelet[2767]: I0911 00:18:02.951187 2767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:18:02.952159 kubelet[2767]: E0911 00:18:02.951731 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:18:02.955080 kubelet[2767]: E0911 00:18:02.955046 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.955080 kubelet[2767]: W0911 00:18:02.955074 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.955800 kubelet[2767]: E0911 00:18:02.955100 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.956061 kubelet[2767]: E0911 00:18:02.956037 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.956061 kubelet[2767]: W0911 00:18:02.956053 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.957009 kubelet[2767]: E0911 00:18:02.956068 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:18:02.957009 kubelet[2767]: E0911 00:18:02.956302 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:18:02.957009 kubelet[2767]: W0911 00:18:02.956313 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:18:02.957009 kubelet[2767]: E0911 00:18:02.956323 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.957009 kubelet[2767]: E0911 00:18:02.956551 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.957009 kubelet[2767]: W0911 00:18:02.956563 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.957009 kubelet[2767]: E0911 00:18:02.956577 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.957009 kubelet[2767]: E0911 00:18:02.956801 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.957009 kubelet[2767]: W0911 00:18:02.956812 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.957009 kubelet[2767]: E0911 00:18:02.956827 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.957359 kubelet[2767]: E0911 00:18:02.957074 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.957359 kubelet[2767]: W0911 00:18:02.957085 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.957359 kubelet[2767]: E0911 00:18:02.957095 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.957451 kubelet[2767]: E0911 00:18:02.957393 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.957451 kubelet[2767]: W0911 00:18:02.957404 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.957451 kubelet[2767]: E0911 00:18:02.957415 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.957787 kubelet[2767]: E0911 00:18:02.957753 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.957787 kubelet[2767]: W0911 00:18:02.957770 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.957787 kubelet[2767]: E0911 00:18:02.957782 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.958478 kubelet[2767]: E0911 00:18:02.958441 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.958478 kubelet[2767]: W0911 00:18:02.958470 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.959690 kubelet[2767]: E0911 00:18:02.958497 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.959690 kubelet[2767]: E0911 00:18:02.958729 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.959690 kubelet[2767]: W0911 00:18:02.958738 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.959690 kubelet[2767]: E0911 00:18:02.958747 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.959690 kubelet[2767]: E0911 00:18:02.958940 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.959690 kubelet[2767]: W0911 00:18:02.958951 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.959690 kubelet[2767]: E0911 00:18:02.958961 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.959690 kubelet[2767]: E0911 00:18:02.959186 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.959690 kubelet[2767]: W0911 00:18:02.959221 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.959690 kubelet[2767]: E0911 00:18:02.959233 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.960170 kubelet[2767]: E0911 00:18:02.959453 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.960170 kubelet[2767]: W0911 00:18:02.959462 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.960170 kubelet[2767]: E0911 00:18:02.959470 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.960170 kubelet[2767]: E0911 00:18:02.959757 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.960170 kubelet[2767]: W0911 00:18:02.959770 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.960170 kubelet[2767]: E0911 00:18:02.959782 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.960170 kubelet[2767]: E0911 00:18:02.960053 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.960170 kubelet[2767]: W0911 00:18:02.960065 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.960170 kubelet[2767]: E0911 00:18:02.960077 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.970477 kubelet[2767]: E0911 00:18:02.970411 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.970477 kubelet[2767]: W0911 00:18:02.970447 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.970477 kubelet[2767]: E0911 00:18:02.970475 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.970849 kubelet[2767]: E0911 00:18:02.970817 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.970849 kubelet[2767]: W0911 00:18:02.970829 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.970940 kubelet[2767]: E0911 00:18:02.970849 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.971174 kubelet[2767]: E0911 00:18:02.971143 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.971174 kubelet[2767]: W0911 00:18:02.971159 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.971274 kubelet[2767]: E0911 00:18:02.971182 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.971480 kubelet[2767]: E0911 00:18:02.971461 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.971480 kubelet[2767]: W0911 00:18:02.971476 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.971557 kubelet[2767]: E0911 00:18:02.971497 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.972013 kubelet[2767]: E0911 00:18:02.971950 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.972059 kubelet[2767]: W0911 00:18:02.972015 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.972260 kubelet[2767]: E0911 00:18:02.972185 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.972766 kubelet[2767]: E0911 00:18:02.972537 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.972766 kubelet[2767]: W0911 00:18:02.972560 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.972766 kubelet[2767]: E0911 00:18:02.972736 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.973102 kubelet[2767]: E0911 00:18:02.972937 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.973102 kubelet[2767]: W0911 00:18:02.972948 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.973102 kubelet[2767]: E0911 00:18:02.973002 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.973193 kubelet[2767]: E0911 00:18:02.973168 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.973193 kubelet[2767]: W0911 00:18:02.973178 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.973285 kubelet[2767]: E0911 00:18:02.973236 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.973589 kubelet[2767]: E0911 00:18:02.973503 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.973589 kubelet[2767]: W0911 00:18:02.973523 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.973589 kubelet[2767]: E0911 00:18:02.973539 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.974678 kubelet[2767]: E0911 00:18:02.974020 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.974678 kubelet[2767]: W0911 00:18:02.974448 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.974973 kubelet[2767]: E0911 00:18:02.974809 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.975368 kubelet[2767]: E0911 00:18:02.975164 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.975368 kubelet[2767]: W0911 00:18:02.975185 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.975368 kubelet[2767]: E0911 00:18:02.975270 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.976661 kubelet[2767]: E0911 00:18:02.975579 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.976661 kubelet[2767]: W0911 00:18:02.976154 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.977100 kubelet[2767]: E0911 00:18:02.976888 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.977100 kubelet[2767]: E0911 00:18:02.977021 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.977100 kubelet[2767]: W0911 00:18:02.977031 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.978292 kubelet[2767]: E0911 00:18:02.977820 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.978292 kubelet[2767]: E0911 00:18:02.977996 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.978292 kubelet[2767]: W0911 00:18:02.978007 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.978292 kubelet[2767]: E0911 00:18:02.978133 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.979409 kubelet[2767]: E0911 00:18:02.979282 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.979409 kubelet[2767]: W0911 00:18:02.979301 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.979409 kubelet[2767]: E0911 00:18:02.979329 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.979877 kubelet[2767]: E0911 00:18:02.979840 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.979877 kubelet[2767]: W0911 00:18:02.979863 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.980108 kubelet[2767]: E0911 00:18:02.980061 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:02.980870 kubelet[2767]: E0911 00:18:02.980850 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.980870 kubelet[2767]: W0911 00:18:02.980865 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.980964 kubelet[2767]: E0911 00:18:02.980886 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:18:02.982081 kubelet[2767]: E0911 00:18:02.982053 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:18:02.982081 kubelet[2767]: W0911 00:18:02.982074 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:18:02.982181 kubelet[2767]: E0911 00:18:02.982087 2767 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:18:03.055389 containerd[1588]: time="2025-09-11T00:18:03.055307159Z" level=info msg="Container 6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:03.446124 containerd[1588]: time="2025-09-11T00:18:03.446026414Z" level=info msg="CreateContainer within sandbox \"619560ef22101692b0756eeef5adea3bba6a3b34cabe8a349f3062574e0536ae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741\"" Sep 11 00:18:03.450180 containerd[1588]: time="2025-09-11T00:18:03.448676645Z" level=info msg="StartContainer for \"6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741\"" Sep 11 00:18:03.450596 containerd[1588]: time="2025-09-11T00:18:03.450559854Z" level=info msg="connecting to shim 6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741" address="unix:///run/containerd/s/9bc551c33c828fe127470dbd8406362da4bc12c97b7f400493a5434b8501b439" protocol=ttrpc version=3 Sep 11 00:18:03.473772 systemd[1]: Started cri-containerd-6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741.scope - libcontainer container 6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741. Sep 11 00:18:03.536349 systemd[1]: cri-containerd-6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741.scope: Deactivated successfully. Sep 11 00:18:03.536780 systemd[1]: cri-containerd-6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741.scope: Consumed 45ms CPU time, 6.5M memory peak, 2.7M written to disk. 
Sep 11 00:18:03.541159 containerd[1588]: time="2025-09-11T00:18:03.541105800Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741\" id:\"6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741\" pid:3437 exited_at:{seconds:1757549883 nanos:540388945}" Sep 11 00:18:03.611830 containerd[1588]: time="2025-09-11T00:18:03.611741058Z" level=info msg="received exit event container_id:\"6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741\" id:\"6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741\" pid:3437 exited_at:{seconds:1757549883 nanos:540388945}" Sep 11 00:18:03.625104 containerd[1588]: time="2025-09-11T00:18:03.625028864Z" level=info msg="StartContainer for \"6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741\" returns successfully" Sep 11 00:18:03.640921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d85b7b03c8f453acb77096a4b03f8c5034fd6afe02c3284f72cb66167fea741-rootfs.mount: Deactivated successfully. 
Sep 11 00:18:03.856938 kubelet[2767]: E0911 00:18:03.856846 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f" Sep 11 00:18:04.273441 kubelet[2767]: I0911 00:18:04.273349 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58dcfbc5f9-lpnrj" podStartSLOduration=4.157814154 podStartE2EDuration="9.273323713s" podCreationTimestamp="2025-09-11 00:17:55 +0000 UTC" firstStartedPulling="2025-09-11 00:17:55.977242034 +0000 UTC m=+27.225487620" lastFinishedPulling="2025-09-11 00:18:01.092751594 +0000 UTC m=+32.340997179" observedRunningTime="2025-09-11 00:18:01.968762119 +0000 UTC m=+33.217007724" watchObservedRunningTime="2025-09-11 00:18:04.273323713 +0000 UTC m=+35.521569308" Sep 11 00:18:04.960670 containerd[1588]: time="2025-09-11T00:18:04.960608413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 00:18:05.183443 kubelet[2767]: I0911 00:18:05.183359 2767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:18:05.183926 kubelet[2767]: E0911 00:18:05.183883 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:05.856265 kubelet[2767]: E0911 00:18:05.856161 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f" Sep 11 00:18:05.962768 kubelet[2767]: E0911 00:18:05.962661 2767 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:07.856226 kubelet[2767]: E0911 00:18:07.856104 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f" Sep 11 00:18:09.856146 kubelet[2767]: E0911 00:18:09.856052 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f" Sep 11 00:18:10.747676 containerd[1588]: time="2025-09-11T00:18:10.747587953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:10.749311 containerd[1588]: time="2025-09-11T00:18:10.749254653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 11 00:18:10.751032 containerd[1588]: time="2025-09-11T00:18:10.750969242Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:10.754451 containerd[1588]: time="2025-09-11T00:18:10.754358156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:10.755149 containerd[1588]: time="2025-09-11T00:18:10.755054052Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.794402204s" Sep 11 00:18:10.755149 containerd[1588]: time="2025-09-11T00:18:10.755104245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 11 00:18:10.757680 containerd[1588]: time="2025-09-11T00:18:10.757649300Z" level=info msg="CreateContainer within sandbox \"619560ef22101692b0756eeef5adea3bba6a3b34cabe8a349f3062574e0536ae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 00:18:10.780190 containerd[1588]: time="2025-09-11T00:18:10.780111907Z" level=info msg="Container ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:10.800881 containerd[1588]: time="2025-09-11T00:18:10.800800176Z" level=info msg="CreateContainer within sandbox \"619560ef22101692b0756eeef5adea3bba6a3b34cabe8a349f3062574e0536ae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e\"" Sep 11 00:18:10.801714 containerd[1588]: time="2025-09-11T00:18:10.801645648Z" level=info msg="StartContainer for \"ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e\"" Sep 11 00:18:10.803793 containerd[1588]: time="2025-09-11T00:18:10.803761750Z" level=info msg="connecting to shim ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e" address="unix:///run/containerd/s/9bc551c33c828fe127470dbd8406362da4bc12c97b7f400493a5434b8501b439" protocol=ttrpc version=3 Sep 11 00:18:10.839739 systemd[1]: Started 
cri-containerd-ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e.scope - libcontainer container ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e. Sep 11 00:18:10.908655 containerd[1588]: time="2025-09-11T00:18:10.908518263Z" level=info msg="StartContainer for \"ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e\" returns successfully" Sep 11 00:18:11.856546 kubelet[2767]: E0911 00:18:11.856460 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f" Sep 11 00:18:13.395417 containerd[1588]: time="2025-09-11T00:18:13.395356594Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:18:13.399024 systemd[1]: cri-containerd-ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e.scope: Deactivated successfully. Sep 11 00:18:13.399481 systemd[1]: cri-containerd-ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e.scope: Consumed 671ms CPU time, 179.9M memory peak, 3.3M read from disk, 171.3M written to disk. 
Sep 11 00:18:13.400146 containerd[1588]: time="2025-09-11T00:18:13.400093145Z" level=info msg="received exit event container_id:\"ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e\" id:\"ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e\" pid:3502 exited_at:{seconds:1757549893 nanos:399870341}" Sep 11 00:18:13.400315 containerd[1588]: time="2025-09-11T00:18:13.400247532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e\" id:\"ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e\" pid:3502 exited_at:{seconds:1757549893 nanos:399870341}" Sep 11 00:18:13.425182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba33bc427e86bb7cb2800a64847b38bffee068a8e09aa2d4ccaa39fe92f1e96e-rootfs.mount: Deactivated successfully. Sep 11 00:18:13.441572 kubelet[2767]: I0911 00:18:13.441490 2767 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 00:18:13.842928 systemd[1]: Created slice kubepods-besteffort-pod5e337bb8_db98_459c_b699_c0285320a54b.slice - libcontainer container kubepods-besteffort-pod5e337bb8_db98_459c_b699_c0285320a54b.slice. 
Sep 11 00:18:13.844941 kubelet[2767]: I0911 00:18:13.844902 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9w46\" (UniqueName: \"kubernetes.io/projected/5e337bb8-db98-459c-b699-c0285320a54b-kube-api-access-w9w46\") pod \"calico-kube-controllers-67bb4d5dcc-mw62c\" (UID: \"5e337bb8-db98-459c-b699-c0285320a54b\") " pod="calico-system/calico-kube-controllers-67bb4d5dcc-mw62c" Sep 11 00:18:13.845053 kubelet[2767]: I0911 00:18:13.844964 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmxlt\" (UniqueName: \"kubernetes.io/projected/8f7192d3-605e-4e3c-a2f9-5b90023f4ae4-kube-api-access-nmxlt\") pod \"calico-apiserver-5b8f7cbc4f-5sb9t\" (UID: \"8f7192d3-605e-4e3c-a2f9-5b90023f4ae4\") " pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-5sb9t" Sep 11 00:18:13.845053 kubelet[2767]: I0911 00:18:13.845008 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e337bb8-db98-459c-b699-c0285320a54b-tigera-ca-bundle\") pod \"calico-kube-controllers-67bb4d5dcc-mw62c\" (UID: \"5e337bb8-db98-459c-b699-c0285320a54b\") " pod="calico-system/calico-kube-controllers-67bb4d5dcc-mw62c" Sep 11 00:18:13.845053 kubelet[2767]: I0911 00:18:13.845037 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jhp\" (UniqueName: \"kubernetes.io/projected/22399693-301e-43a3-8c1f-eeee2d55855d-kube-api-access-d5jhp\") pod \"coredns-668d6bf9bc-b9mtm\" (UID: \"22399693-301e-43a3-8c1f-eeee2d55855d\") " pod="kube-system/coredns-668d6bf9bc-b9mtm" Sep 11 00:18:13.845179 kubelet[2767]: I0911 00:18:13.845064 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/8f7192d3-605e-4e3c-a2f9-5b90023f4ae4-calico-apiserver-certs\") pod \"calico-apiserver-5b8f7cbc4f-5sb9t\" (UID: \"8f7192d3-605e-4e3c-a2f9-5b90023f4ae4\") " pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-5sb9t" Sep 11 00:18:13.845179 kubelet[2767]: I0911 00:18:13.845091 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22399693-301e-43a3-8c1f-eeee2d55855d-config-volume\") pod \"coredns-668d6bf9bc-b9mtm\" (UID: \"22399693-301e-43a3-8c1f-eeee2d55855d\") " pod="kube-system/coredns-668d6bf9bc-b9mtm" Sep 11 00:18:13.853787 systemd[1]: Created slice kubepods-burstable-pod22399693_301e_43a3_8c1f_eeee2d55855d.slice - libcontainer container kubepods-burstable-pod22399693_301e_43a3_8c1f_eeee2d55855d.slice. Sep 11 00:18:13.860021 systemd[1]: Created slice kubepods-besteffort-pod8f7192d3_605e_4e3c_a2f9_5b90023f4ae4.slice - libcontainer container kubepods-besteffort-pod8f7192d3_605e_4e3c_a2f9_5b90023f4ae4.slice. Sep 11 00:18:13.865310 systemd[1]: Created slice kubepods-besteffort-pod07f1b42d_4dc6_49ad_a6fd_da6e1c525c50.slice - libcontainer container kubepods-besteffort-pod07f1b42d_4dc6_49ad_a6fd_da6e1c525c50.slice. Sep 11 00:18:13.871217 systemd[1]: Created slice kubepods-burstable-pod6aca8619_dda4_4f4b_a436_b0814a53e402.slice - libcontainer container kubepods-burstable-pod6aca8619_dda4_4f4b_a436_b0814a53e402.slice. Sep 11 00:18:13.903071 systemd[1]: Created slice kubepods-besteffort-poda958eb4a_d79a_4536_a408_4a04f34cc149.slice - libcontainer container kubepods-besteffort-poda958eb4a_d79a_4536_a408_4a04f34cc149.slice. Sep 11 00:18:13.908905 systemd[1]: Created slice kubepods-besteffort-podca2242db_da46_444a_bbbe_7b328e153a3e.slice - libcontainer container kubepods-besteffort-podca2242db_da46_444a_bbbe_7b328e153a3e.slice. 
Sep 11 00:18:13.913733 systemd[1]: Created slice kubepods-besteffort-podcc809b78_c1d4_448d_9695_d5c095a31b8f.slice - libcontainer container kubepods-besteffort-podcc809b78_c1d4_448d_9695_d5c095a31b8f.slice. Sep 11 00:18:13.916187 containerd[1588]: time="2025-09-11T00:18:13.916136462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l8vjl,Uid:cc809b78-c1d4-448d-9695-d5c095a31b8f,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:13.946457 kubelet[2767]: I0911 00:18:13.946371 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6aca8619-dda4-4f4b-a436-b0814a53e402-config-volume\") pod \"coredns-668d6bf9bc-7vw9f\" (UID: \"6aca8619-dda4-4f4b-a436-b0814a53e402\") " pod="kube-system/coredns-668d6bf9bc-7vw9f" Sep 11 00:18:13.946457 kubelet[2767]: I0911 00:18:13.946441 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdk8j\" (UniqueName: \"kubernetes.io/projected/07f1b42d-4dc6-49ad-a6fd-da6e1c525c50-kube-api-access-fdk8j\") pod \"calico-apiserver-5b8f7cbc4f-pnfv8\" (UID: \"07f1b42d-4dc6-49ad-a6fd-da6e1c525c50\") " pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-pnfv8" Sep 11 00:18:13.946675 kubelet[2767]: I0911 00:18:13.946480 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a958eb4a-d79a-4536-a408-4a04f34cc149-goldmane-key-pair\") pod \"goldmane-54d579b49d-2sl89\" (UID: \"a958eb4a-d79a-4536-a408-4a04f34cc149\") " pod="calico-system/goldmane-54d579b49d-2sl89" Sep 11 00:18:13.946675 kubelet[2767]: I0911 00:18:13.946510 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ca2242db-da46-444a-bbbe-7b328e153a3e-whisker-backend-key-pair\") pod 
\"whisker-766fc79f98-2hsfv\" (UID: \"ca2242db-da46-444a-bbbe-7b328e153a3e\") " pod="calico-system/whisker-766fc79f98-2hsfv" Sep 11 00:18:13.946675 kubelet[2767]: I0911 00:18:13.946536 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4w86\" (UniqueName: \"kubernetes.io/projected/ca2242db-da46-444a-bbbe-7b328e153a3e-kube-api-access-p4w86\") pod \"whisker-766fc79f98-2hsfv\" (UID: \"ca2242db-da46-444a-bbbe-7b328e153a3e\") " pod="calico-system/whisker-766fc79f98-2hsfv" Sep 11 00:18:13.946675 kubelet[2767]: I0911 00:18:13.946557 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca2242db-da46-444a-bbbe-7b328e153a3e-whisker-ca-bundle\") pod \"whisker-766fc79f98-2hsfv\" (UID: \"ca2242db-da46-444a-bbbe-7b328e153a3e\") " pod="calico-system/whisker-766fc79f98-2hsfv" Sep 11 00:18:13.946675 kubelet[2767]: I0911 00:18:13.946634 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a958eb4a-d79a-4536-a408-4a04f34cc149-config\") pod \"goldmane-54d579b49d-2sl89\" (UID: \"a958eb4a-d79a-4536-a408-4a04f34cc149\") " pod="calico-system/goldmane-54d579b49d-2sl89" Sep 11 00:18:13.946793 kubelet[2767]: I0911 00:18:13.946659 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a958eb4a-d79a-4536-a408-4a04f34cc149-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-2sl89\" (UID: \"a958eb4a-d79a-4536-a408-4a04f34cc149\") " pod="calico-system/goldmane-54d579b49d-2sl89" Sep 11 00:18:13.946793 kubelet[2767]: I0911 00:18:13.946683 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vglsk\" (UniqueName: 
\"kubernetes.io/projected/6aca8619-dda4-4f4b-a436-b0814a53e402-kube-api-access-vglsk\") pod \"coredns-668d6bf9bc-7vw9f\" (UID: \"6aca8619-dda4-4f4b-a436-b0814a53e402\") " pod="kube-system/coredns-668d6bf9bc-7vw9f" Sep 11 00:18:13.946793 kubelet[2767]: I0911 00:18:13.946706 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/07f1b42d-4dc6-49ad-a6fd-da6e1c525c50-calico-apiserver-certs\") pod \"calico-apiserver-5b8f7cbc4f-pnfv8\" (UID: \"07f1b42d-4dc6-49ad-a6fd-da6e1c525c50\") " pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-pnfv8" Sep 11 00:18:13.946863 kubelet[2767]: I0911 00:18:13.946825 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqzkz\" (UniqueName: \"kubernetes.io/projected/a958eb4a-d79a-4536-a408-4a04f34cc149-kube-api-access-cqzkz\") pod \"goldmane-54d579b49d-2sl89\" (UID: \"a958eb4a-d79a-4536-a408-4a04f34cc149\") " pod="calico-system/goldmane-54d579b49d-2sl89" Sep 11 00:18:14.450417 containerd[1588]: time="2025-09-11T00:18:14.450363869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67bb4d5dcc-mw62c,Uid:5e337bb8-db98-459c-b699-c0285320a54b,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:14.457639 kubelet[2767]: E0911 00:18:14.457609 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:14.458109 containerd[1588]: time="2025-09-11T00:18:14.458046468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b9mtm,Uid:22399693-301e-43a3-8c1f-eeee2d55855d,Namespace:kube-system,Attempt:0,}" Sep 11 00:18:14.462788 containerd[1588]: time="2025-09-11T00:18:14.462754909Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-5sb9t,Uid:8f7192d3-605e-4e3c-a2f9-5b90023f4ae4,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:18:14.468702 containerd[1588]: time="2025-09-11T00:18:14.468639075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-pnfv8,Uid:07f1b42d-4dc6-49ad-a6fd-da6e1c525c50,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:18:14.497286 kubelet[2767]: E0911 00:18:14.497188 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:14.497863 containerd[1588]: time="2025-09-11T00:18:14.497805853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vw9f,Uid:6aca8619-dda4-4f4b-a436-b0814a53e402,Namespace:kube-system,Attempt:0,}" Sep 11 00:18:14.506944 containerd[1588]: time="2025-09-11T00:18:14.506874900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-2sl89,Uid:a958eb4a-d79a-4536-a408-4a04f34cc149,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:14.512707 containerd[1588]: time="2025-09-11T00:18:14.512513008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-766fc79f98-2hsfv,Uid:ca2242db-da46-444a-bbbe-7b328e153a3e,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:14.819563 containerd[1588]: time="2025-09-11T00:18:14.819475002Z" level=error msg="Failed to destroy network for sandbox \"7be54a9ec323fb5749038f535d426e7226946f70c9690ffe2199354c04dc8c59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.896047 containerd[1588]: time="2025-09-11T00:18:14.895957812Z" level=error msg="Failed to destroy network for sandbox \"03ef023e2eee22336b1706f6cdab15b84dc62ecbd2866050f8cadc30d8fcdf9a\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.944802 containerd[1588]: time="2025-09-11T00:18:14.944642035Z" level=error msg="Failed to destroy network for sandbox \"62bb9906e3f0128e70368fd8eb5c72d3d6c2a33fc6aabf8e2b22e66c818018d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.944802 containerd[1588]: time="2025-09-11T00:18:14.944730019Z" level=error msg="Failed to destroy network for sandbox \"02293e038c8db9969bf58464dca2956156a066d979990864fa44998be3da6145\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.945498 containerd[1588]: time="2025-09-11T00:18:14.945474001Z" level=error msg="Failed to destroy network for sandbox \"31514ded8d9ddc1fcb6e0c8c693455bcfb6d069e4df9fffc63a83376530a4e3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.948446 containerd[1588]: time="2025-09-11T00:18:14.948300594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67bb4d5dcc-mw62c,Uid:5e337bb8-db98-459c-b699-c0285320a54b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ef023e2eee22336b1706f6cdab15b84dc62ecbd2866050f8cadc30d8fcdf9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.948446 containerd[1588]: time="2025-09-11T00:18:14.948324990Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l8vjl,Uid:cc809b78-c1d4-448d-9695-d5c095a31b8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be54a9ec323fb5749038f535d426e7226946f70c9690ffe2199354c04dc8c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.952984 containerd[1588]: time="2025-09-11T00:18:14.952904762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-pnfv8,Uid:07f1b42d-4dc6-49ad-a6fd-da6e1c525c50,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02293e038c8db9969bf58464dca2956156a066d979990864fa44998be3da6145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.959483 kubelet[2767]: E0911 00:18:14.959379 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02293e038c8db9969bf58464dca2956156a066d979990864fa44998be3da6145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.959483 kubelet[2767]: E0911 00:18:14.959465 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02293e038c8db9969bf58464dca2956156a066d979990864fa44998be3da6145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-pnfv8" Sep 11 00:18:14.959483 kubelet[2767]: E0911 00:18:14.959486 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02293e038c8db9969bf58464dca2956156a066d979990864fa44998be3da6145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-pnfv8" Sep 11 00:18:14.959805 kubelet[2767]: E0911 00:18:14.959527 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b8f7cbc4f-pnfv8_calico-apiserver(07f1b42d-4dc6-49ad-a6fd-da6e1c525c50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b8f7cbc4f-pnfv8_calico-apiserver(07f1b42d-4dc6-49ad-a6fd-da6e1c525c50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02293e038c8db9969bf58464dca2956156a066d979990864fa44998be3da6145\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-pnfv8" podUID="07f1b42d-4dc6-49ad-a6fd-da6e1c525c50" Sep 11 00:18:14.960140 containerd[1588]: time="2025-09-11T00:18:14.960078224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-5sb9t,Uid:8f7192d3-605e-4e3c-a2f9-5b90023f4ae4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62bb9906e3f0128e70368fd8eb5c72d3d6c2a33fc6aabf8e2b22e66c818018d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 
00:18:14.960738 kubelet[2767]: E0911 00:18:14.960710 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62bb9906e3f0128e70368fd8eb5c72d3d6c2a33fc6aabf8e2b22e66c818018d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.960814 kubelet[2767]: E0911 00:18:14.960746 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62bb9906e3f0128e70368fd8eb5c72d3d6c2a33fc6aabf8e2b22e66c818018d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-5sb9t" Sep 11 00:18:14.960814 kubelet[2767]: E0911 00:18:14.960764 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62bb9906e3f0128e70368fd8eb5c72d3d6c2a33fc6aabf8e2b22e66c818018d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-5sb9t" Sep 11 00:18:14.960814 kubelet[2767]: E0911 00:18:14.960789 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b8f7cbc4f-5sb9t_calico-apiserver(8f7192d3-605e-4e3c-a2f9-5b90023f4ae4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b8f7cbc4f-5sb9t_calico-apiserver(8f7192d3-605e-4e3c-a2f9-5b90023f4ae4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62bb9906e3f0128e70368fd8eb5c72d3d6c2a33fc6aabf8e2b22e66c818018d7\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-5sb9t" podUID="8f7192d3-605e-4e3c-a2f9-5b90023f4ae4" Sep 11 00:18:14.960983 kubelet[2767]: E0911 00:18:14.960826 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ef023e2eee22336b1706f6cdab15b84dc62ecbd2866050f8cadc30d8fcdf9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.960983 kubelet[2767]: E0911 00:18:14.960842 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ef023e2eee22336b1706f6cdab15b84dc62ecbd2866050f8cadc30d8fcdf9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67bb4d5dcc-mw62c" Sep 11 00:18:14.960983 kubelet[2767]: E0911 00:18:14.960855 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ef023e2eee22336b1706f6cdab15b84dc62ecbd2866050f8cadc30d8fcdf9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67bb4d5dcc-mw62c" Sep 11 00:18:14.961160 kubelet[2767]: E0911 00:18:14.960936 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67bb4d5dcc-mw62c_calico-system(5e337bb8-db98-459c-b699-c0285320a54b)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67bb4d5dcc-mw62c_calico-system(5e337bb8-db98-459c-b699-c0285320a54b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03ef023e2eee22336b1706f6cdab15b84dc62ecbd2866050f8cadc30d8fcdf9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67bb4d5dcc-mw62c" podUID="5e337bb8-db98-459c-b699-c0285320a54b" Sep 11 00:18:14.961160 kubelet[2767]: E0911 00:18:14.961084 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be54a9ec323fb5749038f535d426e7226946f70c9690ffe2199354c04dc8c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.961160 kubelet[2767]: E0911 00:18:14.961109 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be54a9ec323fb5749038f535d426e7226946f70c9690ffe2199354c04dc8c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l8vjl" Sep 11 00:18:14.961386 kubelet[2767]: E0911 00:18:14.961121 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7be54a9ec323fb5749038f535d426e7226946f70c9690ffe2199354c04dc8c59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-l8vjl" Sep 11 00:18:14.961386 kubelet[2767]: E0911 00:18:14.961168 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l8vjl_calico-system(cc809b78-c1d4-448d-9695-d5c095a31b8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l8vjl_calico-system(cc809b78-c1d4-448d-9695-d5c095a31b8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7be54a9ec323fb5749038f535d426e7226946f70c9690ffe2199354c04dc8c59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f" Sep 11 00:18:14.964397 containerd[1588]: time="2025-09-11T00:18:14.964324155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b9mtm,Uid:22399693-301e-43a3-8c1f-eeee2d55855d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31514ded8d9ddc1fcb6e0c8c693455bcfb6d069e4df9fffc63a83376530a4e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.967978 kubelet[2767]: E0911 00:18:14.967927 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31514ded8d9ddc1fcb6e0c8c693455bcfb6d069e4df9fffc63a83376530a4e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.968122 kubelet[2767]: E0911 00:18:14.968001 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"31514ded8d9ddc1fcb6e0c8c693455bcfb6d069e4df9fffc63a83376530a4e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b9mtm" Sep 11 00:18:14.968122 kubelet[2767]: E0911 00:18:14.968022 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31514ded8d9ddc1fcb6e0c8c693455bcfb6d069e4df9fffc63a83376530a4e3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b9mtm" Sep 11 00:18:14.968122 kubelet[2767]: E0911 00:18:14.968082 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-b9mtm_kube-system(22399693-301e-43a3-8c1f-eeee2d55855d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-b9mtm_kube-system(22399693-301e-43a3-8c1f-eeee2d55855d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31514ded8d9ddc1fcb6e0c8c693455bcfb6d069e4df9fffc63a83376530a4e3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-b9mtm" podUID="22399693-301e-43a3-8c1f-eeee2d55855d" Sep 11 00:18:14.970132 containerd[1588]: time="2025-09-11T00:18:14.970085313Z" level=error msg="Failed to destroy network for sandbox \"fca394b9ff9ec796ea9a50a1cdec6bc87646889c56a5256243a6ae369e60d9b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 11 00:18:14.977156 containerd[1588]: time="2025-09-11T00:18:14.977086896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vw9f,Uid:6aca8619-dda4-4f4b-a436-b0814a53e402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca394b9ff9ec796ea9a50a1cdec6bc87646889c56a5256243a6ae369e60d9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.977829 kubelet[2767]: E0911 00:18:14.977778 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca394b9ff9ec796ea9a50a1cdec6bc87646889c56a5256243a6ae369e60d9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.977897 kubelet[2767]: E0911 00:18:14.977860 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca394b9ff9ec796ea9a50a1cdec6bc87646889c56a5256243a6ae369e60d9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7vw9f" Sep 11 00:18:14.977986 kubelet[2767]: E0911 00:18:14.977902 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca394b9ff9ec796ea9a50a1cdec6bc87646889c56a5256243a6ae369e60d9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-7vw9f" Sep 11 00:18:14.978041 kubelet[2767]: E0911 00:18:14.977971 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7vw9f_kube-system(6aca8619-dda4-4f4b-a436-b0814a53e402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7vw9f_kube-system(6aca8619-dda4-4f4b-a436-b0814a53e402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fca394b9ff9ec796ea9a50a1cdec6bc87646889c56a5256243a6ae369e60d9b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7vw9f" podUID="6aca8619-dda4-4f4b-a436-b0814a53e402" Sep 11 00:18:14.985272 containerd[1588]: time="2025-09-11T00:18:14.985187021Z" level=error msg="Failed to destroy network for sandbox \"444ee848f759e0c28cbc9aab5915b6bd3053bc8659d10a647a931fd838ef36e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.989638 containerd[1588]: time="2025-09-11T00:18:14.989184682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-2sl89,Uid:a958eb4a-d79a-4536-a408-4a04f34cc149,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"444ee848f759e0c28cbc9aab5915b6bd3053bc8659d10a647a931fd838ef36e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.989855 kubelet[2767]: E0911 00:18:14.989411 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"444ee848f759e0c28cbc9aab5915b6bd3053bc8659d10a647a931fd838ef36e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:14.989855 kubelet[2767]: E0911 00:18:14.989485 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"444ee848f759e0c28cbc9aab5915b6bd3053bc8659d10a647a931fd838ef36e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-2sl89" Sep 11 00:18:14.989855 kubelet[2767]: E0911 00:18:14.989513 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"444ee848f759e0c28cbc9aab5915b6bd3053bc8659d10a647a931fd838ef36e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-2sl89" Sep 11 00:18:14.989999 kubelet[2767]: E0911 00:18:14.989571 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-2sl89_calico-system(a958eb4a-d79a-4536-a408-4a04f34cc149)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-2sl89_calico-system(a958eb4a-d79a-4536-a408-4a04f34cc149)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"444ee848f759e0c28cbc9aab5915b6bd3053bc8659d10a647a931fd838ef36e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-54d579b49d-2sl89" podUID="a958eb4a-d79a-4536-a408-4a04f34cc149" Sep 11 00:18:14.995774 containerd[1588]: time="2025-09-11T00:18:14.995630391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 00:18:15.001143 containerd[1588]: time="2025-09-11T00:18:15.001092162Z" level=error msg="Failed to destroy network for sandbox \"9075414ed70e5f9360fa8c2d982efd5d96c05b857a4d378d6229f0b4e7ba5a2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:15.039455 containerd[1588]: time="2025-09-11T00:18:15.039354105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-766fc79f98-2hsfv,Uid:ca2242db-da46-444a-bbbe-7b328e153a3e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9075414ed70e5f9360fa8c2d982efd5d96c05b857a4d378d6229f0b4e7ba5a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:15.039792 kubelet[2767]: E0911 00:18:15.039716 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9075414ed70e5f9360fa8c2d982efd5d96c05b857a4d378d6229f0b4e7ba5a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:15.039846 kubelet[2767]: E0911 00:18:15.039804 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9075414ed70e5f9360fa8c2d982efd5d96c05b857a4d378d6229f0b4e7ba5a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-766fc79f98-2hsfv" Sep 11 00:18:15.039846 kubelet[2767]: E0911 00:18:15.039834 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9075414ed70e5f9360fa8c2d982efd5d96c05b857a4d378d6229f0b4e7ba5a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-766fc79f98-2hsfv" Sep 11 00:18:15.039953 kubelet[2767]: E0911 00:18:15.039895 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-766fc79f98-2hsfv_calico-system(ca2242db-da46-444a-bbbe-7b328e153a3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-766fc79f98-2hsfv_calico-system(ca2242db-da46-444a-bbbe-7b328e153a3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9075414ed70e5f9360fa8c2d982efd5d96c05b857a4d378d6229f0b4e7ba5a2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-766fc79f98-2hsfv" podUID="ca2242db-da46-444a-bbbe-7b328e153a3e" Sep 11 00:18:15.426383 systemd[1]: run-netns-cni\x2dc8de00f8\x2d343d\x2dee5f\x2df925\x2d0e19ab6f468b.mount: Deactivated successfully. Sep 11 00:18:15.426535 systemd[1]: run-netns-cni\x2d55b268c6\x2de060\x2d7c76\x2d5b7c\x2da5b29cc07fb5.mount: Deactivated successfully. Sep 11 00:18:15.426628 systemd[1]: run-netns-cni\x2d33e7ffbc\x2db637\x2d39d0\x2dbc26\x2d9f6691f78910.mount: Deactivated successfully. Sep 11 00:18:15.426721 systemd[1]: run-netns-cni\x2d4955f40c\x2d2d20\x2de349\x2d5e64\x2d0da1f2c48180.mount: Deactivated successfully. 
Sep 11 00:18:15.426821 systemd[1]: run-netns-cni\x2de1d2a23e\x2d7aa6\x2d20e3\x2de91d\x2d44f978d44437.mount: Deactivated successfully. Sep 11 00:18:25.414148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355048344.mount: Deactivated successfully. Sep 11 00:18:28.313227 kubelet[2767]: E0911 00:18:28.313110 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:28.332529 containerd[1588]: time="2025-09-11T00:18:28.314548333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vw9f,Uid:6aca8619-dda4-4f4b-a436-b0814a53e402,Namespace:kube-system,Attempt:0,}" Sep 11 00:18:28.332529 containerd[1588]: time="2025-09-11T00:18:28.315372558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-5sb9t,Uid:8f7192d3-605e-4e3c-a2f9-5b90023f4ae4,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:18:28.332529 containerd[1588]: time="2025-09-11T00:18:28.315503094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67bb4d5dcc-mw62c,Uid:5e337bb8-db98-459c-b699-c0285320a54b,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:28.344601 containerd[1588]: time="2025-09-11T00:18:28.344139565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-2sl89,Uid:a958eb4a-d79a-4536-a408-4a04f34cc149,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:28.345242 kubelet[2767]: E0911 00:18:28.344947 2767 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.489s" Sep 11 00:18:28.346312 containerd[1588]: time="2025-09-11T00:18:28.346128916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l8vjl,Uid:cc809b78-c1d4-448d-9695-d5c095a31b8f,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:28.346630 kubelet[2767]: E0911 00:18:28.346598 2767 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:28.347232 containerd[1588]: time="2025-09-11T00:18:28.346960735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b9mtm,Uid:22399693-301e-43a3-8c1f-eeee2d55855d,Namespace:kube-system,Attempt:0,}" Sep 11 00:18:28.409652 systemd[1]: Started sshd@7-10.0.0.70:22-10.0.0.1:46760.service - OpenSSH per-connection server daemon (10.0.0.1:46760). Sep 11 00:18:28.475054 containerd[1588]: time="2025-09-11T00:18:28.474659586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:28.566340 containerd[1588]: time="2025-09-11T00:18:28.565338785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:18:28.567409 containerd[1588]: time="2025-09-11T00:18:28.567064509Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:28.573044 sshd[3828]: Accepted publickey for core from 10.0.0.1 port 46760 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:18:28.576656 sshd-session[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:28.587668 systemd-logind[1564]: New session 8 of user core. Sep 11 00:18:28.595643 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 11 00:18:28.669225 containerd[1588]: time="2025-09-11T00:18:28.669089975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:28.670019 containerd[1588]: time="2025-09-11T00:18:28.669934307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 13.674213276s" Sep 11 00:18:28.670019 containerd[1588]: time="2025-09-11T00:18:28.670018246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:18:28.681097 containerd[1588]: time="2025-09-11T00:18:28.681037145Z" level=error msg="Failed to destroy network for sandbox \"961301986ec8834ebedfc4d499f9cc0a2c68f4de75cea4c944930f026e3f07a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.685116 containerd[1588]: time="2025-09-11T00:18:28.685062515Z" level=error msg="Failed to destroy network for sandbox \"4b1df9970edc0c89acdbb89bb7eac8ac9eed34b55091420dbc1303b675c709c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.693069 containerd[1588]: time="2025-09-11T00:18:28.693006275Z" level=error msg="Failed to destroy network for sandbox \"e6d15234f9f260ec91d10faf711a75e7ab9d46c1ceb543a3e106644374616e13\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.705600 containerd[1588]: time="2025-09-11T00:18:28.705530403Z" level=error msg="Failed to destroy network for sandbox \"eb6f4d5ddf354aa9dfac81de71579af7059bd785b05965bd462ef1dd19eabe6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.707561 containerd[1588]: time="2025-09-11T00:18:28.707493164Z" level=error msg="Failed to destroy network for sandbox \"1a078a0f3673cf5654e666942ffed20dd3f3f5f6ffba2b4f175dc692bfee1301\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.716557 containerd[1588]: time="2025-09-11T00:18:28.716498196Z" level=error msg="Failed to destroy network for sandbox \"89278743be35949f1054f4c6655ef46300fd6b2fd374fe96180a0975f8dcf92c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.752003 containerd[1588]: time="2025-09-11T00:18:28.751897360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b9mtm,Uid:22399693-301e-43a3-8c1f-eeee2d55855d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"961301986ec8834ebedfc4d499f9cc0a2c68f4de75cea4c944930f026e3f07a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.752261 kubelet[2767]: E0911 00:18:28.752182 2767 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961301986ec8834ebedfc4d499f9cc0a2c68f4de75cea4c944930f026e3f07a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.752353 kubelet[2767]: E0911 00:18:28.752288 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961301986ec8834ebedfc4d499f9cc0a2c68f4de75cea4c944930f026e3f07a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b9mtm" Sep 11 00:18:28.752353 kubelet[2767]: E0911 00:18:28.752313 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961301986ec8834ebedfc4d499f9cc0a2c68f4de75cea4c944930f026e3f07a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-b9mtm" Sep 11 00:18:28.752436 kubelet[2767]: E0911 00:18:28.752362 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-b9mtm_kube-system(22399693-301e-43a3-8c1f-eeee2d55855d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-b9mtm_kube-system(22399693-301e-43a3-8c1f-eeee2d55855d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"961301986ec8834ebedfc4d499f9cc0a2c68f4de75cea4c944930f026e3f07a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-b9mtm" podUID="22399693-301e-43a3-8c1f-eeee2d55855d" Sep 11 00:18:28.766049 containerd[1588]: time="2025-09-11T00:18:28.765991348Z" level=info msg="CreateContainer within sandbox \"619560ef22101692b0756eeef5adea3bba6a3b34cabe8a349f3062574e0536ae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:18:28.796338 containerd[1588]: time="2025-09-11T00:18:28.796265819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l8vjl,Uid:cc809b78-c1d4-448d-9695-d5c095a31b8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1df9970edc0c89acdbb89bb7eac8ac9eed34b55091420dbc1303b675c709c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.796702 kubelet[2767]: E0911 00:18:28.796638 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1df9970edc0c89acdbb89bb7eac8ac9eed34b55091420dbc1303b675c709c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.796702 kubelet[2767]: E0911 00:18:28.796720 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1df9970edc0c89acdbb89bb7eac8ac9eed34b55091420dbc1303b675c709c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l8vjl" Sep 11 00:18:28.796911 kubelet[2767]: E0911 00:18:28.796741 2767 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1df9970edc0c89acdbb89bb7eac8ac9eed34b55091420dbc1303b675c709c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l8vjl" Sep 11 00:18:28.796911 kubelet[2767]: E0911 00:18:28.796797 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l8vjl_calico-system(cc809b78-c1d4-448d-9695-d5c095a31b8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l8vjl_calico-system(cc809b78-c1d4-448d-9695-d5c095a31b8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b1df9970edc0c89acdbb89bb7eac8ac9eed34b55091420dbc1303b675c709c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l8vjl" podUID="cc809b78-c1d4-448d-9695-d5c095a31b8f" Sep 11 00:18:28.801051 containerd[1588]: time="2025-09-11T00:18:28.800929844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vw9f,Uid:6aca8619-dda4-4f4b-a436-b0814a53e402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6d15234f9f260ec91d10faf711a75e7ab9d46c1ceb543a3e106644374616e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.801596 kubelet[2767]: E0911 00:18:28.801501 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e6d15234f9f260ec91d10faf711a75e7ab9d46c1ceb543a3e106644374616e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.801596 kubelet[2767]: E0911 00:18:28.801562 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6d15234f9f260ec91d10faf711a75e7ab9d46c1ceb543a3e106644374616e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7vw9f" Sep 11 00:18:28.801819 kubelet[2767]: E0911 00:18:28.801640 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6d15234f9f260ec91d10faf711a75e7ab9d46c1ceb543a3e106644374616e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7vw9f" Sep 11 00:18:28.801819 kubelet[2767]: E0911 00:18:28.801717 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7vw9f_kube-system(6aca8619-dda4-4f4b-a436-b0814a53e402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7vw9f_kube-system(6aca8619-dda4-4f4b-a436-b0814a53e402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6d15234f9f260ec91d10faf711a75e7ab9d46c1ceb543a3e106644374616e13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7vw9f" 
podUID="6aca8619-dda4-4f4b-a436-b0814a53e402" Sep 11 00:18:28.805716 containerd[1588]: time="2025-09-11T00:18:28.805648853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-2sl89,Uid:a958eb4a-d79a-4536-a408-4a04f34cc149,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb6f4d5ddf354aa9dfac81de71579af7059bd785b05965bd462ef1dd19eabe6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.806179 kubelet[2767]: E0911 00:18:28.806011 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb6f4d5ddf354aa9dfac81de71579af7059bd785b05965bd462ef1dd19eabe6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.806527 kubelet[2767]: E0911 00:18:28.806253 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb6f4d5ddf354aa9dfac81de71579af7059bd785b05965bd462ef1dd19eabe6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-2sl89" Sep 11 00:18:28.806527 kubelet[2767]: E0911 00:18:28.806290 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb6f4d5ddf354aa9dfac81de71579af7059bd785b05965bd462ef1dd19eabe6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-54d579b49d-2sl89" Sep 11 00:18:28.806722 kubelet[2767]: E0911 00:18:28.806451 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-2sl89_calico-system(a958eb4a-d79a-4536-a408-4a04f34cc149)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-2sl89_calico-system(a958eb4a-d79a-4536-a408-4a04f34cc149)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb6f4d5ddf354aa9dfac81de71579af7059bd785b05965bd462ef1dd19eabe6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-2sl89" podUID="a958eb4a-d79a-4536-a408-4a04f34cc149" Sep 11 00:18:28.823387 containerd[1588]: time="2025-09-11T00:18:28.823211051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-5sb9t,Uid:8f7192d3-605e-4e3c-a2f9-5b90023f4ae4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a078a0f3673cf5654e666942ffed20dd3f3f5f6ffba2b4f175dc692bfee1301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.823581 kubelet[2767]: E0911 00:18:28.823542 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a078a0f3673cf5654e666942ffed20dd3f3f5f6ffba2b4f175dc692bfee1301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.823657 kubelet[2767]: E0911 00:18:28.823625 2767 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a078a0f3673cf5654e666942ffed20dd3f3f5f6ffba2b4f175dc692bfee1301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-5sb9t" Sep 11 00:18:28.823687 kubelet[2767]: E0911 00:18:28.823658 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a078a0f3673cf5654e666942ffed20dd3f3f5f6ffba2b4f175dc692bfee1301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-5sb9t" Sep 11 00:18:28.823764 kubelet[2767]: E0911 00:18:28.823715 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b8f7cbc4f-5sb9t_calico-apiserver(8f7192d3-605e-4e3c-a2f9-5b90023f4ae4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b8f7cbc4f-5sb9t_calico-apiserver(8f7192d3-605e-4e3c-a2f9-5b90023f4ae4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a078a0f3673cf5654e666942ffed20dd3f3f5f6ffba2b4f175dc692bfee1301\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-5sb9t" podUID="8f7192d3-605e-4e3c-a2f9-5b90023f4ae4" Sep 11 00:18:28.833342 containerd[1588]: time="2025-09-11T00:18:28.833163028Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-67bb4d5dcc-mw62c,Uid:5e337bb8-db98-459c-b699-c0285320a54b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89278743be35949f1054f4c6655ef46300fd6b2fd374fe96180a0975f8dcf92c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.833647 kubelet[2767]: E0911 00:18:28.833568 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89278743be35949f1054f4c6655ef46300fd6b2fd374fe96180a0975f8dcf92c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.833875 kubelet[2767]: E0911 00:18:28.833838 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89278743be35949f1054f4c6655ef46300fd6b2fd374fe96180a0975f8dcf92c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67bb4d5dcc-mw62c" Sep 11 00:18:28.833875 kubelet[2767]: E0911 00:18:28.833875 2767 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89278743be35949f1054f4c6655ef46300fd6b2fd374fe96180a0975f8dcf92c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67bb4d5dcc-mw62c" Sep 11 00:18:28.834021 kubelet[2767]: E0911 
00:18:28.833943 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67bb4d5dcc-mw62c_calico-system(5e337bb8-db98-459c-b699-c0285320a54b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67bb4d5dcc-mw62c_calico-system(5e337bb8-db98-459c-b699-c0285320a54b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89278743be35949f1054f4c6655ef46300fd6b2fd374fe96180a0975f8dcf92c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67bb4d5dcc-mw62c" podUID="5e337bb8-db98-459c-b699-c0285320a54b" Sep 11 00:18:28.858453 containerd[1588]: time="2025-09-11T00:18:28.858386263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-pnfv8,Uid:07f1b42d-4dc6-49ad-a6fd-da6e1c525c50,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:18:28.864424 containerd[1588]: time="2025-09-11T00:18:28.864236485Z" level=info msg="Container 361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:28.879349 sshd[3916]: Connection closed by 10.0.0.1 port 46760 Sep 11 00:18:28.877298 sshd-session[3828]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:28.884745 systemd[1]: sshd@7-10.0.0.70:22-10.0.0.1:46760.service: Deactivated successfully. Sep 11 00:18:28.888690 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:18:28.892492 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:18:28.896421 systemd-logind[1564]: Removed session 8. 
Sep 11 00:18:28.901435 containerd[1588]: time="2025-09-11T00:18:28.901400699Z" level=info msg="CreateContainer within sandbox \"619560ef22101692b0756eeef5adea3bba6a3b34cabe8a349f3062574e0536ae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a\"" Sep 11 00:18:28.904505 containerd[1588]: time="2025-09-11T00:18:28.904333068Z" level=info msg="StartContainer for \"361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a\"" Sep 11 00:18:28.908911 containerd[1588]: time="2025-09-11T00:18:28.908881435Z" level=info msg="connecting to shim 361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a" address="unix:///run/containerd/s/9bc551c33c828fe127470dbd8406362da4bc12c97b7f400493a5434b8501b439" protocol=ttrpc version=3 Sep 11 00:18:28.946414 systemd[1]: Started cri-containerd-361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a.scope - libcontainer container 361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a. 
Sep 11 00:18:28.951557 containerd[1588]: time="2025-09-11T00:18:28.951485116Z" level=error msg="Failed to destroy network for sandbox \"76eb2f3b6a0738d8e4f2e9cea85065f4becdcce52f569489f541e367f59a0085\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.957443 containerd[1588]: time="2025-09-11T00:18:28.957301955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-pnfv8,Uid:07f1b42d-4dc6-49ad-a6fd-da6e1c525c50,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76eb2f3b6a0738d8e4f2e9cea85065f4becdcce52f569489f541e367f59a0085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.958223 kubelet[2767]: E0911 00:18:28.958063 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76eb2f3b6a0738d8e4f2e9cea85065f4becdcce52f569489f541e367f59a0085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:18:28.958223 kubelet[2767]: E0911 00:18:28.958152 2767 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76eb2f3b6a0738d8e4f2e9cea85065f4becdcce52f569489f541e367f59a0085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-pnfv8" Sep 11 00:18:28.958467 kubelet[2767]: E0911 00:18:28.958394 2767 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76eb2f3b6a0738d8e4f2e9cea85065f4becdcce52f569489f541e367f59a0085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-pnfv8" Sep 11 00:18:28.958631 kubelet[2767]: E0911 00:18:28.958568 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b8f7cbc4f-pnfv8_calico-apiserver(07f1b42d-4dc6-49ad-a6fd-da6e1c525c50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b8f7cbc4f-pnfv8_calico-apiserver(07f1b42d-4dc6-49ad-a6fd-da6e1c525c50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76eb2f3b6a0738d8e4f2e9cea85065f4becdcce52f569489f541e367f59a0085\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-pnfv8" podUID="07f1b42d-4dc6-49ad-a6fd-da6e1c525c50" Sep 11 00:18:29.023527 containerd[1588]: time="2025-09-11T00:18:29.023458956Z" level=info msg="StartContainer for \"361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a\" returns successfully" Sep 11 00:18:29.120231 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:18:29.120379 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 11 00:18:29.368161 systemd[1]: run-netns-cni\x2d9be8fb1b\x2d6c52\x2d50b6\x2dd5c3\x2d1092df77bc25.mount: Deactivated successfully. Sep 11 00:18:29.368320 systemd[1]: run-netns-cni\x2dd1ca6f5c\x2de404\x2dccf9\x2da90c\x2df06bebd0da51.mount: Deactivated successfully. 
Sep 11 00:18:29.368420 systemd[1]: run-netns-cni\x2d5d4be316\x2d1624\x2d147f\x2d704c\x2d8f82dd27bc28.mount: Deactivated successfully. Sep 11 00:18:29.368524 systemd[1]: run-netns-cni\x2d14f1d275\x2dab30\x2d5998\x2d0946\x2d64a86177069c.mount: Deactivated successfully. Sep 11 00:18:29.368621 systemd[1]: run-netns-cni\x2d03aa835b\x2d7756\x2d0c1a\x2d5ca1\x2da31062d05871.mount: Deactivated successfully. Sep 11 00:18:29.368722 systemd[1]: run-netns-cni\x2dabc74d24\x2d1ceb\x2d93ba\x2d6ac4\x2debfbb6e34c53.mount: Deactivated successfully. Sep 11 00:18:29.376409 kubelet[2767]: I0911 00:18:29.374244 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gzm44" podStartSLOduration=1.710963634 podStartE2EDuration="34.374196917s" podCreationTimestamp="2025-09-11 00:17:55 +0000 UTC" firstStartedPulling="2025-09-11 00:17:56.015837252 +0000 UTC m=+27.264082838" lastFinishedPulling="2025-09-11 00:18:28.679070536 +0000 UTC m=+59.927316121" observedRunningTime="2025-09-11 00:18:29.372766207 +0000 UTC m=+60.621011812" watchObservedRunningTime="2025-09-11 00:18:29.374196917 +0000 UTC m=+60.622442513" Sep 11 00:18:29.422241 kubelet[2767]: I0911 00:18:29.422159 2767 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ca2242db-da46-444a-bbbe-7b328e153a3e-whisker-backend-key-pair\") pod \"ca2242db-da46-444a-bbbe-7b328e153a3e\" (UID: \"ca2242db-da46-444a-bbbe-7b328e153a3e\") " Sep 11 00:18:29.422438 kubelet[2767]: I0911 00:18:29.422350 2767 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca2242db-da46-444a-bbbe-7b328e153a3e-whisker-ca-bundle\") pod \"ca2242db-da46-444a-bbbe-7b328e153a3e\" (UID: \"ca2242db-da46-444a-bbbe-7b328e153a3e\") " Sep 11 00:18:29.422438 kubelet[2767]: I0911 00:18:29.422388 2767 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-p4w86\" (UniqueName: \"kubernetes.io/projected/ca2242db-da46-444a-bbbe-7b328e153a3e-kube-api-access-p4w86\") pod \"ca2242db-da46-444a-bbbe-7b328e153a3e\" (UID: \"ca2242db-da46-444a-bbbe-7b328e153a3e\") " Sep 11 00:18:29.423321 kubelet[2767]: I0911 00:18:29.423287 2767 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2242db-da46-444a-bbbe-7b328e153a3e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ca2242db-da46-444a-bbbe-7b328e153a3e" (UID: "ca2242db-da46-444a-bbbe-7b328e153a3e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:18:29.428249 kubelet[2767]: I0911 00:18:29.428070 2767 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2242db-da46-444a-bbbe-7b328e153a3e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ca2242db-da46-444a-bbbe-7b328e153a3e" (UID: "ca2242db-da46-444a-bbbe-7b328e153a3e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:18:29.429356 kubelet[2767]: I0911 00:18:29.429113 2767 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2242db-da46-444a-bbbe-7b328e153a3e-kube-api-access-p4w86" (OuterVolumeSpecName: "kube-api-access-p4w86") pod "ca2242db-da46-444a-bbbe-7b328e153a3e" (UID: "ca2242db-da46-444a-bbbe-7b328e153a3e"). InnerVolumeSpecName "kube-api-access-p4w86". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:18:29.431870 systemd[1]: var-lib-kubelet-pods-ca2242db\x2dda46\x2d444a\x2dbbbe\x2d7b328e153a3e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 11 00:18:29.436393 systemd[1]: var-lib-kubelet-pods-ca2242db\x2dda46\x2d444a\x2dbbbe\x2d7b328e153a3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4w86.mount: Deactivated successfully. Sep 11 00:18:29.522797 kubelet[2767]: I0911 00:18:29.522715 2767 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ca2242db-da46-444a-bbbe-7b328e153a3e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:29.522797 kubelet[2767]: I0911 00:18:29.522757 2767 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca2242db-da46-444a-bbbe-7b328e153a3e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:29.522797 kubelet[2767]: I0911 00:18:29.522768 2767 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p4w86\" (UniqueName: \"kubernetes.io/projected/ca2242db-da46-444a-bbbe-7b328e153a3e-kube-api-access-p4w86\") on node \"localhost\" DevicePath \"\"" Sep 11 00:18:29.571949 containerd[1588]: time="2025-09-11T00:18:29.571903976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a\" id:\"1bad33a9f85e8347d877f592650edacea523667a257d1ce203c5f093b8cae9d0\" pid:4146 exit_status:1 exited_at:{seconds:1757549909 nanos:571556941}" Sep 11 00:18:29.667469 systemd[1]: Removed slice kubepods-besteffort-podca2242db_da46_444a_bbbe_7b328e153a3e.slice - libcontainer container kubepods-besteffort-podca2242db_da46_444a_bbbe_7b328e153a3e.slice. Sep 11 00:18:29.749242 systemd[1]: Created slice kubepods-besteffort-podf455ab67_4dd1_40a7_99e7_1290b62cef60.slice - libcontainer container kubepods-besteffort-podf455ab67_4dd1_40a7_99e7_1290b62cef60.slice. 
Sep 11 00:18:29.826001 kubelet[2767]: I0911 00:18:29.825934 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f455ab67-4dd1-40a7-99e7-1290b62cef60-whisker-backend-key-pair\") pod \"whisker-6598d677c7-52gx5\" (UID: \"f455ab67-4dd1-40a7-99e7-1290b62cef60\") " pod="calico-system/whisker-6598d677c7-52gx5" Sep 11 00:18:29.826001 kubelet[2767]: I0911 00:18:29.825974 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f455ab67-4dd1-40a7-99e7-1290b62cef60-whisker-ca-bundle\") pod \"whisker-6598d677c7-52gx5\" (UID: \"f455ab67-4dd1-40a7-99e7-1290b62cef60\") " pod="calico-system/whisker-6598d677c7-52gx5" Sep 11 00:18:29.826001 kubelet[2767]: I0911 00:18:29.825993 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjsz6\" (UniqueName: \"kubernetes.io/projected/f455ab67-4dd1-40a7-99e7-1290b62cef60-kube-api-access-sjsz6\") pod \"whisker-6598d677c7-52gx5\" (UID: \"f455ab67-4dd1-40a7-99e7-1290b62cef60\") " pod="calico-system/whisker-6598d677c7-52gx5" Sep 11 00:18:30.354834 containerd[1588]: time="2025-09-11T00:18:30.354771089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6598d677c7-52gx5,Uid:f455ab67-4dd1-40a7-99e7-1290b62cef60,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:30.445879 containerd[1588]: time="2025-09-11T00:18:30.445816681Z" level=info msg="TaskExit event in podsandbox handler container_id:\"361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a\" id:\"51469ee3a0ac6aef6d3b2b57a08ab06bd2dd5706778a596631de0bb484140b84\" pid:4173 exit_status:1 exited_at:{seconds:1757549910 nanos:445459807}" Sep 11 00:18:30.859159 kubelet[2767]: I0911 00:18:30.859101 2767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ca2242db-da46-444a-bbbe-7b328e153a3e" path="/var/lib/kubelet/pods/ca2242db-da46-444a-bbbe-7b328e153a3e/volumes" Sep 11 00:18:31.083831 systemd-networkd[1496]: vxlan.calico: Link UP Sep 11 00:18:31.083841 systemd-networkd[1496]: vxlan.calico: Gained carrier Sep 11 00:18:32.133977 systemd-networkd[1496]: vxlan.calico: Gained IPv6LL Sep 11 00:18:33.294294 systemd-networkd[1496]: cali1745c3b0579: Link UP Sep 11 00:18:33.295050 systemd-networkd[1496]: cali1745c3b0579: Gained carrier Sep 11 00:18:33.515026 containerd[1588]: 2025-09-11 00:18:30.501 [INFO][4187] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:18:33.515026 containerd[1588]: 2025-09-11 00:18:30.760 [INFO][4187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6598d677c7--52gx5-eth0 whisker-6598d677c7- calico-system f455ab67-4dd1-40a7-99e7-1290b62cef60 983 0 2025-09-11 00:18:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6598d677c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6598d677c7-52gx5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1745c3b0579 [] [] }} ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Namespace="calico-system" Pod="whisker-6598d677c7-52gx5" WorkloadEndpoint="localhost-k8s-whisker--6598d677c7--52gx5-" Sep 11 00:18:33.515026 containerd[1588]: 2025-09-11 00:18:30.760 [INFO][4187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Namespace="calico-system" Pod="whisker-6598d677c7-52gx5" WorkloadEndpoint="localhost-k8s-whisker--6598d677c7--52gx5-eth0" Sep 11 00:18:33.515026 containerd[1588]: 2025-09-11 00:18:32.376 [INFO][4330] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" HandleID="k8s-pod-network.663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Workload="localhost-k8s-whisker--6598d677c7--52gx5-eth0" Sep 11 00:18:33.515770 containerd[1588]: 2025-09-11 00:18:32.380 [INFO][4330] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" HandleID="k8s-pod-network.663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Workload="localhost-k8s-whisker--6598d677c7--52gx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000192550), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6598d677c7-52gx5", "timestamp":"2025-09-11 00:18:32.376968131 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:18:33.515770 containerd[1588]: 2025-09-11 00:18:32.380 [INFO][4330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:18:33.515770 containerd[1588]: 2025-09-11 00:18:32.380 [INFO][4330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:18:33.515770 containerd[1588]: 2025-09-11 00:18:32.380 [INFO][4330] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:18:33.515770 containerd[1588]: 2025-09-11 00:18:32.681 [INFO][4330] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" host="localhost" Sep 11 00:18:33.515770 containerd[1588]: 2025-09-11 00:18:33.088 [INFO][4330] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:18:33.515770 containerd[1588]: 2025-09-11 00:18:33.092 [INFO][4330] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:18:33.515770 containerd[1588]: 2025-09-11 00:18:33.094 [INFO][4330] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:33.515770 containerd[1588]: 2025-09-11 00:18:33.096 [INFO][4330] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:33.515770 containerd[1588]: 2025-09-11 00:18:33.096 [INFO][4330] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" host="localhost" Sep 11 00:18:33.516120 containerd[1588]: 2025-09-11 00:18:33.098 [INFO][4330] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b Sep 11 00:18:33.516120 containerd[1588]: 2025-09-11 00:18:33.158 [INFO][4330] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" host="localhost" Sep 11 00:18:33.516120 containerd[1588]: 2025-09-11 00:18:33.260 [INFO][4330] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" host="localhost" Sep 11 00:18:33.516120 containerd[1588]: 2025-09-11 00:18:33.260 [INFO][4330] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" host="localhost" Sep 11 00:18:33.516120 containerd[1588]: 2025-09-11 00:18:33.260 [INFO][4330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:18:33.516120 containerd[1588]: 2025-09-11 00:18:33.260 [INFO][4330] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" HandleID="k8s-pod-network.663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Workload="localhost-k8s-whisker--6598d677c7--52gx5-eth0" Sep 11 00:18:33.516520 containerd[1588]: 2025-09-11 00:18:33.263 [INFO][4187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Namespace="calico-system" Pod="whisker-6598d677c7-52gx5" WorkloadEndpoint="localhost-k8s-whisker--6598d677c7--52gx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6598d677c7--52gx5-eth0", GenerateName:"whisker-6598d677c7-", Namespace:"calico-system", SelfLink:"", UID:"f455ab67-4dd1-40a7-99e7-1290b62cef60", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6598d677c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6598d677c7-52gx5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1745c3b0579", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:33.516520 containerd[1588]: 2025-09-11 00:18:33.264 [INFO][4187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Namespace="calico-system" Pod="whisker-6598d677c7-52gx5" WorkloadEndpoint="localhost-k8s-whisker--6598d677c7--52gx5-eth0" Sep 11 00:18:33.516682 containerd[1588]: 2025-09-11 00:18:33.264 [INFO][4187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1745c3b0579 ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Namespace="calico-system" Pod="whisker-6598d677c7-52gx5" WorkloadEndpoint="localhost-k8s-whisker--6598d677c7--52gx5-eth0" Sep 11 00:18:33.516682 containerd[1588]: 2025-09-11 00:18:33.296 [INFO][4187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Namespace="calico-system" Pod="whisker-6598d677c7-52gx5" WorkloadEndpoint="localhost-k8s-whisker--6598d677c7--52gx5-eth0" Sep 11 00:18:33.516784 containerd[1588]: 2025-09-11 00:18:33.296 [INFO][4187] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Namespace="calico-system" Pod="whisker-6598d677c7-52gx5" 
WorkloadEndpoint="localhost-k8s-whisker--6598d677c7--52gx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6598d677c7--52gx5-eth0", GenerateName:"whisker-6598d677c7-", Namespace:"calico-system", SelfLink:"", UID:"f455ab67-4dd1-40a7-99e7-1290b62cef60", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 18, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6598d677c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b", Pod:"whisker-6598d677c7-52gx5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1745c3b0579", MAC:"2a:2f:a4:d3:14:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:33.516898 containerd[1588]: 2025-09-11 00:18:33.511 [INFO][4187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" Namespace="calico-system" Pod="whisker-6598d677c7-52gx5" WorkloadEndpoint="localhost-k8s-whisker--6598d677c7--52gx5-eth0" Sep 11 00:18:33.890616 systemd[1]: Started sshd@8-10.0.0.70:22-10.0.0.1:51840.service - OpenSSH per-connection server daemon 
(10.0.0.1:51840). Sep 11 00:18:34.079761 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 51840 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:18:34.081814 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:34.087193 systemd-logind[1564]: New session 9 of user core. Sep 11 00:18:34.094400 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:18:34.281635 sshd[4426]: Connection closed by 10.0.0.1 port 51840 Sep 11 00:18:34.282054 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:34.288892 systemd[1]: sshd@8-10.0.0.70:22-10.0.0.1:51840.service: Deactivated successfully. Sep 11 00:18:34.291338 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:18:34.292346 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:18:34.295067 systemd-logind[1564]: Removed session 9. Sep 11 00:18:34.348748 containerd[1588]: time="2025-09-11T00:18:34.348689384Z" level=info msg="connecting to shim 663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b" address="unix:///run/containerd/s/5cb668875fee9f611fb9e6c967ebbab8d2737e5598577d12110dfb0169f0e914" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:18:34.451546 systemd[1]: Started cri-containerd-663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b.scope - libcontainer container 663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b. 
Sep 11 00:18:34.466734 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:18:34.507194 containerd[1588]: time="2025-09-11T00:18:34.507130562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6598d677c7-52gx5,Uid:f455ab67-4dd1-40a7-99e7-1290b62cef60,Namespace:calico-system,Attempt:0,} returns sandbox id \"663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b\"" Sep 11 00:18:34.515385 containerd[1588]: time="2025-09-11T00:18:34.515090459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:18:34.852503 systemd-networkd[1496]: cali1745c3b0579: Gained IPv6LL Sep 11 00:18:37.214182 containerd[1588]: time="2025-09-11T00:18:37.214070025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:37.218032 containerd[1588]: time="2025-09-11T00:18:37.217916346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 11 00:18:37.219537 containerd[1588]: time="2025-09-11T00:18:37.219464554Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:37.222627 containerd[1588]: time="2025-09-11T00:18:37.222586418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:37.223332 containerd[1588]: time="2025-09-11T00:18:37.223267951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.708140482s" Sep 11 00:18:37.223332 containerd[1588]: time="2025-09-11T00:18:37.223319200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 11 00:18:37.228747 containerd[1588]: time="2025-09-11T00:18:37.228699280Z" level=info msg="CreateContainer within sandbox \"663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:18:37.239295 containerd[1588]: time="2025-09-11T00:18:37.239231137Z" level=info msg="Container db05e94514d2b35d0399e66df938033e3ed67660c148ac958cf4f15ec1e6db77: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:37.251285 containerd[1588]: time="2025-09-11T00:18:37.251028938Z" level=info msg="CreateContainer within sandbox \"663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"db05e94514d2b35d0399e66df938033e3ed67660c148ac958cf4f15ec1e6db77\"" Sep 11 00:18:37.253707 containerd[1588]: time="2025-09-11T00:18:37.253662947Z" level=info msg="StartContainer for \"db05e94514d2b35d0399e66df938033e3ed67660c148ac958cf4f15ec1e6db77\"" Sep 11 00:18:37.255638 containerd[1588]: time="2025-09-11T00:18:37.255593190Z" level=info msg="connecting to shim db05e94514d2b35d0399e66df938033e3ed67660c148ac958cf4f15ec1e6db77" address="unix:///run/containerd/s/5cb668875fee9f611fb9e6c967ebbab8d2737e5598577d12110dfb0169f0e914" protocol=ttrpc version=3 Sep 11 00:18:37.279531 systemd[1]: Started cri-containerd-db05e94514d2b35d0399e66df938033e3ed67660c148ac958cf4f15ec1e6db77.scope - libcontainer container db05e94514d2b35d0399e66df938033e3ed67660c148ac958cf4f15ec1e6db77. 
Sep 11 00:18:37.469045 containerd[1588]: time="2025-09-11T00:18:37.468600723Z" level=info msg="StartContainer for \"db05e94514d2b35d0399e66df938033e3ed67660c148ac958cf4f15ec1e6db77\" returns successfully" Sep 11 00:18:37.471584 containerd[1588]: time="2025-09-11T00:18:37.471521918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:18:39.296293 systemd[1]: Started sshd@9-10.0.0.70:22-10.0.0.1:51844.service - OpenSSH per-connection server daemon (10.0.0.1:51844). Sep 11 00:18:39.405943 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 51844 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:18:39.409033 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:39.415827 systemd-logind[1564]: New session 10 of user core. Sep 11 00:18:39.423553 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 11 00:18:39.651579 sshd[4543]: Connection closed by 10.0.0.1 port 51844 Sep 11 00:18:39.651887 sshd-session[4535]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:39.658665 systemd[1]: sshd@9-10.0.0.70:22-10.0.0.1:51844.service: Deactivated successfully. Sep 11 00:18:39.661858 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 00:18:39.663548 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit. Sep 11 00:18:39.666309 systemd-logind[1564]: Removed session 10. Sep 11 00:18:39.765937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730238980.mount: Deactivated successfully. 
Sep 11 00:18:39.875458 containerd[1588]: time="2025-09-11T00:18:39.875392121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-5sb9t,Uid:8f7192d3-605e-4e3c-a2f9-5b90023f4ae4,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:18:40.170338 containerd[1588]: time="2025-09-11T00:18:40.170237041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:40.171985 containerd[1588]: time="2025-09-11T00:18:40.171933236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 11 00:18:40.175697 containerd[1588]: time="2025-09-11T00:18:40.175631316Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:40.184687 containerd[1588]: time="2025-09-11T00:18:40.184538049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:40.185099 containerd[1588]: time="2025-09-11T00:18:40.185011128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.713431622s" Sep 11 00:18:40.185099 containerd[1588]: time="2025-09-11T00:18:40.185069861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 11 00:18:40.188930 
containerd[1588]: time="2025-09-11T00:18:40.188888690Z" level=info msg="CreateContainer within sandbox \"663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:18:40.190457 systemd-networkd[1496]: calic8fe1d0d8e9: Link UP Sep 11 00:18:40.191725 systemd-networkd[1496]: calic8fe1d0d8e9: Gained carrier Sep 11 00:18:40.213055 containerd[1588]: time="2025-09-11T00:18:40.212533463Z" level=info msg="Container 41e9511f076023e7174856cfb25f334a52620ab7371ff3af022efde009514509: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:40.218044 containerd[1588]: 2025-09-11 00:18:40.044 [INFO][4564] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0 calico-apiserver-5b8f7cbc4f- calico-apiserver 8f7192d3-605e-4e3c-a2f9-5b90023f4ae4 862 0 2025-09-11 00:17:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b8f7cbc4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b8f7cbc4f-5sb9t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic8fe1d0d8e9 [] [] }} ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-5sb9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-" Sep 11 00:18:40.218044 containerd[1588]: 2025-09-11 00:18:40.044 [INFO][4564] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-5sb9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0" Sep 11 00:18:40.218044 
containerd[1588]: 2025-09-11 00:18:40.097 [INFO][4579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" HandleID="k8s-pod-network.5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Workload="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0" Sep 11 00:18:40.218470 containerd[1588]: 2025-09-11 00:18:40.097 [INFO][4579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" HandleID="k8s-pod-network.5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Workload="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b8f7cbc4f-5sb9t", "timestamp":"2025-09-11 00:18:40.097478236 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:18:40.218470 containerd[1588]: 2025-09-11 00:18:40.097 [INFO][4579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:18:40.218470 containerd[1588]: 2025-09-11 00:18:40.097 [INFO][4579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:18:40.218470 containerd[1588]: 2025-09-11 00:18:40.097 [INFO][4579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:18:40.218470 containerd[1588]: 2025-09-11 00:18:40.138 [INFO][4579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" host="localhost" Sep 11 00:18:40.218470 containerd[1588]: 2025-09-11 00:18:40.148 [INFO][4579] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:18:40.218470 containerd[1588]: 2025-09-11 00:18:40.154 [INFO][4579] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:18:40.218470 containerd[1588]: 2025-09-11 00:18:40.156 [INFO][4579] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:40.218470 containerd[1588]: 2025-09-11 00:18:40.159 [INFO][4579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:40.218470 containerd[1588]: 2025-09-11 00:18:40.159 [INFO][4579] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" host="localhost" Sep 11 00:18:40.218798 containerd[1588]: 2025-09-11 00:18:40.161 [INFO][4579] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d Sep 11 00:18:40.218798 containerd[1588]: 2025-09-11 00:18:40.169 [INFO][4579] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" host="localhost" Sep 11 00:18:40.218798 containerd[1588]: 2025-09-11 00:18:40.179 [INFO][4579] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" host="localhost" Sep 11 00:18:40.218798 containerd[1588]: 2025-09-11 00:18:40.179 [INFO][4579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" host="localhost" Sep 11 00:18:40.218798 containerd[1588]: 2025-09-11 00:18:40.179 [INFO][4579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:18:40.218798 containerd[1588]: 2025-09-11 00:18:40.179 [INFO][4579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" HandleID="k8s-pod-network.5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Workload="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0" Sep 11 00:18:40.219164 containerd[1588]: 2025-09-11 00:18:40.184 [INFO][4564] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-5sb9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0", GenerateName:"calico-apiserver-5b8f7cbc4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f7192d3-605e-4e3c-a2f9-5b90023f4ae4", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b8f7cbc4f", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b8f7cbc4f-5sb9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic8fe1d0d8e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:40.219340 containerd[1588]: 2025-09-11 00:18:40.184 [INFO][4564] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-5sb9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0" Sep 11 00:18:40.219340 containerd[1588]: 2025-09-11 00:18:40.184 [INFO][4564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8fe1d0d8e9 ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-5sb9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0" Sep 11 00:18:40.219340 containerd[1588]: 2025-09-11 00:18:40.194 [INFO][4564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-5sb9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0" Sep 11 00:18:40.219451 containerd[1588]: 2025-09-11 00:18:40.194 [INFO][4564] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-5sb9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0", GenerateName:"calico-apiserver-5b8f7cbc4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f7192d3-605e-4e3c-a2f9-5b90023f4ae4", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b8f7cbc4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d", Pod:"calico-apiserver-5b8f7cbc4f-5sb9t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic8fe1d0d8e9", MAC:"3e:44:9b:c8:3b:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:40.219538 containerd[1588]: 2025-09-11 00:18:40.209 [INFO][4564] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-5sb9t" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--5sb9t-eth0" Sep 11 00:18:40.230084 containerd[1588]: time="2025-09-11T00:18:40.230018615Z" level=info msg="CreateContainer within sandbox \"663626ce226483357b7a67689529424c5ee0c4a324cef65b2364b8acd19ae96b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"41e9511f076023e7174856cfb25f334a52620ab7371ff3af022efde009514509\"" Sep 11 00:18:40.230715 containerd[1588]: time="2025-09-11T00:18:40.230672217Z" level=info msg="StartContainer for \"41e9511f076023e7174856cfb25f334a52620ab7371ff3af022efde009514509\"" Sep 11 00:18:40.232121 containerd[1588]: time="2025-09-11T00:18:40.232091055Z" level=info msg="connecting to shim 41e9511f076023e7174856cfb25f334a52620ab7371ff3af022efde009514509" address="unix:///run/containerd/s/5cb668875fee9f611fb9e6c967ebbab8d2737e5598577d12110dfb0169f0e914" protocol=ttrpc version=3 Sep 11 00:18:40.262493 systemd[1]: Started cri-containerd-41e9511f076023e7174856cfb25f334a52620ab7371ff3af022efde009514509.scope - libcontainer container 41e9511f076023e7174856cfb25f334a52620ab7371ff3af022efde009514509. Sep 11 00:18:40.269996 containerd[1588]: time="2025-09-11T00:18:40.269928118Z" level=info msg="connecting to shim 5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d" address="unix:///run/containerd/s/3924818ad34127e3076e9183841d74a87e1f928678b1ff81cfc14a04ac481cbd" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:18:40.305542 systemd[1]: Started cri-containerd-5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d.scope - libcontainer container 5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d. 
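The containerd lines throughout this excerpt are logfmt-style: space-separated `key=value` pairs where values with spaces are double-quoted (`time="..." level=info msg="connecting to shim ..." protocol=ttrpc version=3`). A hypothetical helper for pulling those fields out of a line — this simple regex handles quoted values with spaces and escaped quotes, nothing fancier:

```python
# Extract key=value fields from a containerd logfmt-style line (sketch).
import re

FIELD_RE = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_fields(line: str) -> dict:
    out = {}
    for key, val in FIELD_RE.findall(line):
        if val.startswith('"') and val.endswith('"'):
            val = val[1:-1].replace('\\"', '"')
        out[key] = val
    return out

fields = parse_fields(
    'time="2025-09-11T00:18:40.232091055Z" level=info '
    'msg="connecting to shim 41e9511f" protocol=ttrpc version=3'
)
```

This is enough to filter a dump like this one by `level` or group events by `msg` prefix.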
Sep 11 00:18:40.325347 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:18:40.338945 containerd[1588]: time="2025-09-11T00:18:40.338882211Z" level=info msg="StartContainer for \"41e9511f076023e7174856cfb25f334a52620ab7371ff3af022efde009514509\" returns successfully" Sep 11 00:18:40.378759 containerd[1588]: time="2025-09-11T00:18:40.378679452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-5sb9t,Uid:8f7192d3-605e-4e3c-a2f9-5b90023f4ae4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d\"" Sep 11 00:18:40.383283 containerd[1588]: time="2025-09-11T00:18:40.382361662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:18:40.534289 kubelet[2767]: I0911 00:18:40.534162 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6598d677c7-52gx5" podStartSLOduration=5.862267399 podStartE2EDuration="11.534141038s" podCreationTimestamp="2025-09-11 00:18:29 +0000 UTC" firstStartedPulling="2025-09-11 00:18:34.514680794 +0000 UTC m=+65.762926379" lastFinishedPulling="2025-09-11 00:18:40.186554433 +0000 UTC m=+71.434800018" observedRunningTime="2025-09-11 00:18:40.533811541 +0000 UTC m=+71.782057146" watchObservedRunningTime="2025-09-11 00:18:40.534141038 +0000 UTC m=+71.782386623" Sep 11 00:18:40.857348 kubelet[2767]: E0911 00:18:40.857029 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:40.858086 containerd[1588]: time="2025-09-11T00:18:40.858028466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b9mtm,Uid:22399693-301e-43a3-8c1f-eeee2d55855d,Namespace:kube-system,Attempt:0,}" Sep 11 00:18:40.858924 containerd[1588]: 
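The kubelet `pod_startup_latency_tracker` line above reports both `podStartSLOduration=5.862267399` and `podStartE2EDuration=11.534141038s` for the whisker pod. The SLO figure excludes time spent pulling images, and the four timestamps in the same line let that be verified directly: end-to-end duration minus the pulling window reproduces the SLO duration. A sketch of that arithmetic, with the timestamps truncated to microseconds for Python's `datetime`:

```python
# Reconcile kubelet's pod startup SLO duration with the E2E duration:
# SLO duration = (observed running - creation) - (last pull end - first pull start)
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

created        = datetime.strptime("2025-09-11 00:18:29.000000", FMT)
first_pull     = datetime.strptime("2025-09-11 00:18:34.514680", FMT)
last_pull      = datetime.strptime("2025-09-11 00:18:40.186554", FMT)
observed_ready = datetime.strptime("2025-09-11 00:18:40.534141", FMT)

e2e = (observed_ready - created).total_seconds()   # podStartE2EDuration
pulling = (last_pull - first_pull).total_seconds() # image pull window
slo = e2e - pulling                                # podStartSLOduration
```

At microsecond precision this lands exactly on the logged 5.862267s, confirming how the two metrics relate in this entry.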
time="2025-09-11T00:18:40.858879946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-pnfv8,Uid:07f1b42d-4dc6-49ad-a6fd-da6e1c525c50,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:18:40.997395 systemd-networkd[1496]: cali6b755f3e85b: Link UP Sep 11 00:18:40.998703 systemd-networkd[1496]: cali6b755f3e85b: Gained carrier Sep 11 00:18:41.019309 containerd[1588]: 2025-09-11 00:18:40.903 [INFO][4683] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0 coredns-668d6bf9bc- kube-system 22399693-301e-43a3-8c1f-eeee2d55855d 861 0 2025-09-11 00:17:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-b9mtm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b755f3e85b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-b9mtm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b9mtm-" Sep 11 00:18:41.019309 containerd[1588]: 2025-09-11 00:18:40.903 [INFO][4683] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-b9mtm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0" Sep 11 00:18:41.019309 containerd[1588]: 2025-09-11 00:18:40.944 [INFO][4710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" HandleID="k8s-pod-network.c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Workload="localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0" Sep 11 00:18:41.019940 
containerd[1588]: 2025-09-11 00:18:40.945 [INFO][4710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" HandleID="k8s-pod-network.c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Workload="localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d92a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-b9mtm", "timestamp":"2025-09-11 00:18:40.944984563 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:18:41.019940 containerd[1588]: 2025-09-11 00:18:40.945 [INFO][4710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:18:41.019940 containerd[1588]: 2025-09-11 00:18:40.945 [INFO][4710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:18:41.019940 containerd[1588]: 2025-09-11 00:18:40.945 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:18:41.019940 containerd[1588]: 2025-09-11 00:18:40.965 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" host="localhost" Sep 11 00:18:41.019940 containerd[1588]: 2025-09-11 00:18:40.970 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:18:41.019940 containerd[1588]: 2025-09-11 00:18:40.975 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:18:41.019940 containerd[1588]: 2025-09-11 00:18:40.977 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:41.019940 containerd[1588]: 2025-09-11 00:18:40.979 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:41.019940 containerd[1588]: 2025-09-11 00:18:40.979 [INFO][4710] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" host="localhost" Sep 11 00:18:41.020339 containerd[1588]: 2025-09-11 00:18:40.981 [INFO][4710] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0 Sep 11 00:18:41.020339 containerd[1588]: 2025-09-11 00:18:40.984 [INFO][4710] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" host="localhost" Sep 11 00:18:41.020339 containerd[1588]: 2025-09-11 00:18:40.991 [INFO][4710] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" host="localhost" Sep 11 00:18:41.020339 containerd[1588]: 2025-09-11 00:18:40.991 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" host="localhost" Sep 11 00:18:41.020339 containerd[1588]: 2025-09-11 00:18:40.991 [INFO][4710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:18:41.020339 containerd[1588]: 2025-09-11 00:18:40.991 [INFO][4710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" HandleID="k8s-pod-network.c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Workload="localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0" Sep 11 00:18:41.020541 containerd[1588]: 2025-09-11 00:18:40.994 [INFO][4683] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-b9mtm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"22399693-301e-43a3-8c1f-eeee2d55855d", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
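The IPAM sequence logged above repeats for each pod: acquire the host-wide lock, confirm the host's affinity for block 192.168.88.128/26, load the block, claim the next address, write the block back, release the lock — yielding .130, .131 and .132 in turn for the three pods in this excerpt. A simplified model of the visible allocation step (Calico's real allocator tracks claims in its datastore with handles and a per-block bitmap; this sketch only mirrors the first-free-address result):

```python
# Simplified model of picking the next free address in an affine IPAM block.
import ipaddress

def next_free(block: str, allocated: set) -> ipaddress.IPv4Address:
    net = ipaddress.ip_network(block)
    taken = {ipaddress.ip_address(a) for a in allocated}
    for ip in net.hosts():  # .129 .. .190 for a /26; skips network/broadcast
        if ip not in taken:
            return ip
    raise RuntimeError(f"block {block} exhausted")

# .129 and .130 were already claimed by earlier endpoints in this block
ip = next_free("192.168.88.128/26", {"192.168.88.129", "192.168.88.130"})
```

Note the host-wide lock in the log exists precisely because two CNI invocations (here [4579] and [4710]) can race on the same block; the second one visibly waits at "About to acquire host-wide IPAM lock" until the first releases it.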
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-b9mtm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b755f3e85b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:41.020651 containerd[1588]: 2025-09-11 00:18:40.994 [INFO][4683] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-b9mtm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0" Sep 11 00:18:41.020651 containerd[1588]: 2025-09-11 00:18:40.994 [INFO][4683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b755f3e85b ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-b9mtm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0" Sep 11 00:18:41.020651 containerd[1588]: 2025-09-11 00:18:40.999 [INFO][4683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-b9mtm" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0" Sep 11 00:18:41.020773 containerd[1588]: 2025-09-11 00:18:40.999 [INFO][4683] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-b9mtm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"22399693-301e-43a3-8c1f-eeee2d55855d", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0", Pod:"coredns-668d6bf9bc-b9mtm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b755f3e85b", MAC:"aa:c7:9c:b8:d2:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:41.020773 containerd[1588]: 2025-09-11 00:18:41.011 [INFO][4683] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-b9mtm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--b9mtm-eth0" Sep 11 00:18:41.130138 containerd[1588]: time="2025-09-11T00:18:41.129987314Z" level=info msg="connecting to shim c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0" address="unix:///run/containerd/s/cd740e7de723b3e92486c99144b6ff870afeb9c311019ed1578e8f6883cfbf88" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:18:41.160582 systemd[1]: Started cri-containerd-c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0.scope - libcontainer container c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0. 
Sep 11 00:18:41.175920 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:18:41.218385 containerd[1588]: time="2025-09-11T00:18:41.218309891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b9mtm,Uid:22399693-301e-43a3-8c1f-eeee2d55855d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0\"" Sep 11 00:18:41.219475 kubelet[2767]: E0911 00:18:41.219424 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:41.223066 containerd[1588]: time="2025-09-11T00:18:41.223018548Z" level=info msg="CreateContainer within sandbox \"c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:18:41.261768 systemd-networkd[1496]: calic712032882c: Link UP Sep 11 00:18:41.262086 systemd-networkd[1496]: calic712032882c: Gained carrier Sep 11 00:18:41.316447 systemd-networkd[1496]: calic8fe1d0d8e9: Gained IPv6LL Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:40.912 [INFO][4694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0 calico-apiserver-5b8f7cbc4f- calico-apiserver 07f1b42d-4dc6-49ad-a6fd-da6e1c525c50 863 0 2025-09-11 00:17:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b8f7cbc4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b8f7cbc4f-pnfv8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic712032882c [] [] }} 
ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-pnfv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:40.912 [INFO][4694] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-pnfv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:40.949 [INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" HandleID="k8s-pod-network.8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Workload="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:40.949 [INFO][4718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" HandleID="k8s-pod-network.8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Workload="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000514a20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b8f7cbc4f-pnfv8", "timestamp":"2025-09-11 00:18:40.949576643 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:40.949 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:40.991 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:40.991 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.203 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" host="localhost" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.217 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.225 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.228 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.232 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.232 [INFO][4718] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" host="localhost" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.234 [INFO][4718] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594 Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.241 [INFO][4718] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" host="localhost" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.252 [INFO][4718] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" host="localhost" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.252 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" host="localhost" Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.252 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:18:41.391190 containerd[1588]: 2025-09-11 00:18:41.252 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" HandleID="k8s-pod-network.8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Workload="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0" Sep 11 00:18:41.392069 containerd[1588]: 2025-09-11 00:18:41.256 [INFO][4694] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-pnfv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0", GenerateName:"calico-apiserver-5b8f7cbc4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"07f1b42d-4dc6-49ad-a6fd-da6e1c525c50", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"5b8f7cbc4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b8f7cbc4f-pnfv8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic712032882c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:41.392069 containerd[1588]: 2025-09-11 00:18:41.256 [INFO][4694] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-pnfv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0" Sep 11 00:18:41.392069 containerd[1588]: 2025-09-11 00:18:41.256 [INFO][4694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic712032882c ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-pnfv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0" Sep 11 00:18:41.392069 containerd[1588]: 2025-09-11 00:18:41.262 [INFO][4694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-pnfv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0" Sep 11 00:18:41.392069 
containerd[1588]: 2025-09-11 00:18:41.263 [INFO][4694] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-pnfv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0", GenerateName:"calico-apiserver-5b8f7cbc4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"07f1b42d-4dc6-49ad-a6fd-da6e1c525c50", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b8f7cbc4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594", Pod:"calico-apiserver-5b8f7cbc4f-pnfv8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic712032882c", MAC:"12:f0:f2:b4:05:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:41.392069 containerd[1588]: 2025-09-11 
00:18:41.386 [INFO][4694] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" Namespace="calico-apiserver" Pod="calico-apiserver-5b8f7cbc4f-pnfv8" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b8f7cbc4f--pnfv8-eth0" Sep 11 00:18:41.625649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923667290.mount: Deactivated successfully. Sep 11 00:18:41.628603 containerd[1588]: time="2025-09-11T00:18:41.628554022Z" level=info msg="Container 8d2cd09beab4c6fa000646b63236f69c7f2c3e5e0c29f03a52f227a2063f0a76: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:41.632276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305970206.mount: Deactivated successfully. Sep 11 00:18:41.856342 kubelet[2767]: E0911 00:18:41.856283 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:41.856965 kubelet[2767]: E0911 00:18:41.856405 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:41.857005 containerd[1588]: time="2025-09-11T00:18:41.856857058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vw9f,Uid:6aca8619-dda4-4f4b-a436-b0814a53e402,Namespace:kube-system,Attempt:0,}" Sep 11 00:18:42.228108 containerd[1588]: time="2025-09-11T00:18:42.227955803Z" level=info msg="CreateContainer within sandbox \"c0e2a06ac91e9cf0429ee7fed1e763f5ec4ffda1f254b328b4abb1ca8fbc25b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d2cd09beab4c6fa000646b63236f69c7f2c3e5e0c29f03a52f227a2063f0a76\"" Sep 11 00:18:42.228731 containerd[1588]: time="2025-09-11T00:18:42.228604338Z" level=info msg="StartContainer for 
\"8d2cd09beab4c6fa000646b63236f69c7f2c3e5e0c29f03a52f227a2063f0a76\"" Sep 11 00:18:42.229703 containerd[1588]: time="2025-09-11T00:18:42.229639088Z" level=info msg="connecting to shim 8d2cd09beab4c6fa000646b63236f69c7f2c3e5e0c29f03a52f227a2063f0a76" address="unix:///run/containerd/s/cd740e7de723b3e92486c99144b6ff870afeb9c311019ed1578e8f6883cfbf88" protocol=ttrpc version=3 Sep 11 00:18:42.254611 systemd[1]: Started cri-containerd-8d2cd09beab4c6fa000646b63236f69c7f2c3e5e0c29f03a52f227a2063f0a76.scope - libcontainer container 8d2cd09beab4c6fa000646b63236f69c7f2c3e5e0c29f03a52f227a2063f0a76. Sep 11 00:18:42.446250 containerd[1588]: time="2025-09-11T00:18:42.446183593Z" level=info msg="StartContainer for \"8d2cd09beab4c6fa000646b63236f69c7f2c3e5e0c29f03a52f227a2063f0a76\" returns successfully" Sep 11 00:18:42.527515 kubelet[2767]: E0911 00:18:42.527468 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:42.542914 kubelet[2767]: I0911 00:18:42.542846 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-b9mtm" podStartSLOduration=66.542823919 podStartE2EDuration="1m6.542823919s" podCreationTimestamp="2025-09-11 00:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:18:42.541693828 +0000 UTC m=+73.789939413" watchObservedRunningTime="2025-09-11 00:18:42.542823919 +0000 UTC m=+73.791069504" Sep 11 00:18:42.565230 containerd[1588]: time="2025-09-11T00:18:42.565130783Z" level=info msg="connecting to shim 8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594" address="unix:///run/containerd/s/29bf0daa30abcf9de87541a63fc2548c822cd87fa28dc36212905d9e039dcfd1" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:18:42.604558 systemd[1]: Started 
cri-containerd-8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594.scope - libcontainer container 8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594. Sep 11 00:18:42.636182 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:18:42.668018 systemd-networkd[1496]: calib1f3daa6c69: Link UP Sep 11 00:18:42.668921 systemd-networkd[1496]: calib1f3daa6c69: Gained carrier Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.558 [INFO][4822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0 coredns-668d6bf9bc- kube-system 6aca8619-dda4-4f4b-a436-b0814a53e402 854 0 2025-09-11 00:17:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7vw9f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1f3daa6c69 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vw9f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7vw9f-" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.558 [INFO][4822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vw9f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.603 [INFO][4857] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" 
HandleID="k8s-pod-network.f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Workload="localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.604 [INFO][4857] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" HandleID="k8s-pod-network.f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Workload="localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013b720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7vw9f", "timestamp":"2025-09-11 00:18:42.603900669 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.604 [INFO][4857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.604 [INFO][4857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.604 [INFO][4857] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.614 [INFO][4857] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" host="localhost" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.622 [INFO][4857] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.630 [INFO][4857] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.633 [INFO][4857] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.636 [INFO][4857] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.636 [INFO][4857] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" host="localhost" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.639 [INFO][4857] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.647 [INFO][4857] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" host="localhost" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.654 [INFO][4857] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" host="localhost" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.655 [INFO][4857] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" host="localhost" Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.655 [INFO][4857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:18:42.696174 containerd[1588]: 2025-09-11 00:18:42.655 [INFO][4857] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" HandleID="k8s-pod-network.f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Workload="localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0" Sep 11 00:18:42.697101 containerd[1588]: 2025-09-11 00:18:42.662 [INFO][4822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vw9f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6aca8619-dda4-4f4b-a436-b0814a53e402", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7vw9f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1f3daa6c69", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:42.697101 containerd[1588]: 2025-09-11 00:18:42.662 [INFO][4822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vw9f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0" Sep 11 00:18:42.697101 containerd[1588]: 2025-09-11 00:18:42.662 [INFO][4822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1f3daa6c69 ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vw9f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0" Sep 11 00:18:42.697101 containerd[1588]: 2025-09-11 00:18:42.669 [INFO][4822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vw9f" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0" Sep 11 00:18:42.697101 containerd[1588]: 2025-09-11 00:18:42.671 [INFO][4822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vw9f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6aca8619-dda4-4f4b-a436-b0814a53e402", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb", Pod:"coredns-668d6bf9bc-7vw9f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1f3daa6c69", MAC:"6a:4e:85:77:9f:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:42.697101 containerd[1588]: 2025-09-11 00:18:42.691 [INFO][4822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vw9f" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7vw9f-eth0" Sep 11 00:18:42.857553 containerd[1588]: time="2025-09-11T00:18:42.857255602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l8vjl,Uid:cc809b78-c1d4-448d-9695-d5c095a31b8f,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:42.858749 containerd[1588]: time="2025-09-11T00:18:42.858701795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67bb4d5dcc-mw62c,Uid:5e337bb8-db98-459c-b699-c0285320a54b,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:43.044445 systemd-networkd[1496]: cali6b755f3e85b: Gained IPv6LL Sep 11 00:18:43.045350 systemd-networkd[1496]: calic712032882c: Gained IPv6LL Sep 11 00:18:43.082841 containerd[1588]: time="2025-09-11T00:18:43.082755482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b8f7cbc4f-pnfv8,Uid:07f1b42d-4dc6-49ad-a6fd-da6e1c525c50,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594\"" Sep 11 00:18:43.530536 kubelet[2767]: E0911 00:18:43.530493 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:43.679103 containerd[1588]: time="2025-09-11T00:18:43.678542790Z" level=info msg="connecting to shim 
f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb" address="unix:///run/containerd/s/3f99ba6858da2f27705435f0564c28210ec61e00d1f666ef6f2482f06db65632" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:18:43.734565 systemd[1]: Started cri-containerd-f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb.scope - libcontainer container f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb. Sep 11 00:18:43.757036 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:18:43.817398 systemd-networkd[1496]: cali2fbeaeb3a30: Link UP Sep 11 00:18:43.819045 systemd-networkd[1496]: cali2fbeaeb3a30: Gained carrier Sep 11 00:18:43.823569 containerd[1588]: time="2025-09-11T00:18:43.823489461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vw9f,Uid:6aca8619-dda4-4f4b-a436-b0814a53e402,Namespace:kube-system,Attempt:0,} returns sandbox id \"f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb\"" Sep 11 00:18:43.826460 kubelet[2767]: E0911 00:18:43.826425 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:43.832007 containerd[1588]: time="2025-09-11T00:18:43.831964730Z" level=info msg="CreateContainer within sandbox \"f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.654 [INFO][4908] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--l8vjl-eth0 csi-node-driver- calico-system cc809b78-c1d4-448d-9695-d5c095a31b8f 718 0 2025-09-11 00:17:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-l8vjl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2fbeaeb3a30 [] [] }} ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Namespace="calico-system" Pod="csi-node-driver-l8vjl" WorkloadEndpoint="localhost-k8s-csi--node--driver--l8vjl-" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.655 [INFO][4908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Namespace="calico-system" Pod="csi-node-driver-l8vjl" WorkloadEndpoint="localhost-k8s-csi--node--driver--l8vjl-eth0" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.728 [INFO][4950] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" HandleID="k8s-pod-network.c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Workload="localhost-k8s-csi--node--driver--l8vjl-eth0" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.729 [INFO][4950] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" HandleID="k8s-pod-network.c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Workload="localhost-k8s-csi--node--driver--l8vjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325340), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-l8vjl", "timestamp":"2025-09-11 00:18:43.728838861 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.729 [INFO][4950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.729 [INFO][4950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.729 [INFO][4950] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.754 [INFO][4950] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" host="localhost" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.767 [INFO][4950] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.776 [INFO][4950] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.779 [INFO][4950] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.783 [INFO][4950] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.783 [INFO][4950] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" host="localhost" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.786 [INFO][4950] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861 Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.792 [INFO][4950] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" host="localhost" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.808 [INFO][4950] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" host="localhost" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.808 [INFO][4950] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" host="localhost" Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.808 [INFO][4950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:18:43.850300 containerd[1588]: 2025-09-11 00:18:43.808 [INFO][4950] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" HandleID="k8s-pod-network.c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Workload="localhost-k8s-csi--node--driver--l8vjl-eth0" Sep 11 00:18:43.851100 containerd[1588]: 2025-09-11 00:18:43.812 [INFO][4908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Namespace="calico-system" Pod="csi-node-driver-l8vjl" WorkloadEndpoint="localhost-k8s-csi--node--driver--l8vjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l8vjl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc809b78-c1d4-448d-9695-d5c095a31b8f", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-l8vjl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fbeaeb3a30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:43.851100 containerd[1588]: 2025-09-11 00:18:43.812 [INFO][4908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Namespace="calico-system" Pod="csi-node-driver-l8vjl" WorkloadEndpoint="localhost-k8s-csi--node--driver--l8vjl-eth0" Sep 11 00:18:43.851100 containerd[1588]: 2025-09-11 00:18:43.812 [INFO][4908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fbeaeb3a30 ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Namespace="calico-system" Pod="csi-node-driver-l8vjl" WorkloadEndpoint="localhost-k8s-csi--node--driver--l8vjl-eth0" Sep 11 00:18:43.851100 containerd[1588]: 2025-09-11 00:18:43.820 [INFO][4908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Namespace="calico-system" Pod="csi-node-driver-l8vjl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--l8vjl-eth0" Sep 11 00:18:43.851100 containerd[1588]: 2025-09-11 00:18:43.821 [INFO][4908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Namespace="calico-system" Pod="csi-node-driver-l8vjl" WorkloadEndpoint="localhost-k8s-csi--node--driver--l8vjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l8vjl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc809b78-c1d4-448d-9695-d5c095a31b8f", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861", Pod:"csi-node-driver-l8vjl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2fbeaeb3a30", MAC:"0a:49:e3:d6:15:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 
00:18:43.851100 containerd[1588]: 2025-09-11 00:18:43.840 [INFO][4908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" Namespace="calico-system" Pod="csi-node-driver-l8vjl" WorkloadEndpoint="localhost-k8s-csi--node--driver--l8vjl-eth0" Sep 11 00:18:43.857044 containerd[1588]: time="2025-09-11T00:18:43.856984061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-2sl89,Uid:a958eb4a-d79a-4536-a408-4a04f34cc149,Namespace:calico-system,Attempt:0,}" Sep 11 00:18:43.862027 containerd[1588]: time="2025-09-11T00:18:43.861175823Z" level=info msg="Container 90fde1a1d4614b9b904d77aade27ab14a85daf0db21ac6d9ff39fc08629e953f: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:43.892294 containerd[1588]: time="2025-09-11T00:18:43.892150744Z" level=info msg="CreateContainer within sandbox \"f99163affe59349bcf1ba8bc0bcd9cd5cf2690fc6b82c01181c946d05a0f43bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90fde1a1d4614b9b904d77aade27ab14a85daf0db21ac6d9ff39fc08629e953f\"" Sep 11 00:18:43.893875 containerd[1588]: time="2025-09-11T00:18:43.893448956Z" level=info msg="StartContainer for \"90fde1a1d4614b9b904d77aade27ab14a85daf0db21ac6d9ff39fc08629e953f\"" Sep 11 00:18:43.894847 containerd[1588]: time="2025-09-11T00:18:43.894813284Z" level=info msg="connecting to shim 90fde1a1d4614b9b904d77aade27ab14a85daf0db21ac6d9ff39fc08629e953f" address="unix:///run/containerd/s/3f99ba6858da2f27705435f0564c28210ec61e00d1f666ef6f2482f06db65632" protocol=ttrpc version=3 Sep 11 00:18:43.918082 containerd[1588]: time="2025-09-11T00:18:43.918008292Z" level=info msg="connecting to shim c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861" address="unix:///run/containerd/s/d04631bc16891449003bfbee52d251cedf1f8b4edeae06945fd49d90ac15b210" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:18:43.938567 systemd[1]: Started 
cri-containerd-90fde1a1d4614b9b904d77aade27ab14a85daf0db21ac6d9ff39fc08629e953f.scope - libcontainer container 90fde1a1d4614b9b904d77aade27ab14a85daf0db21ac6d9ff39fc08629e953f. Sep 11 00:18:43.948269 systemd-networkd[1496]: cali49bb710c4a5: Link UP Sep 11 00:18:43.948882 systemd-networkd[1496]: cali49bb710c4a5: Gained carrier Sep 11 00:18:43.984434 systemd[1]: Started cri-containerd-c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861.scope - libcontainer container c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861. Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.680 [INFO][4920] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0 calico-kube-controllers-67bb4d5dcc- calico-system 5e337bb8-db98-459c-b699-c0285320a54b 850 0 2025-09-11 00:17:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67bb4d5dcc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67bb4d5dcc-mw62c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali49bb710c4a5 [] [] }} ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Namespace="calico-system" Pod="calico-kube-controllers-67bb4d5dcc-mw62c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.681 [INFO][4920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Namespace="calico-system" Pod="calico-kube-controllers-67bb4d5dcc-mw62c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0" Sep 11 00:18:43.994640 
containerd[1588]: 2025-09-11 00:18:43.752 [INFO][4976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" HandleID="k8s-pod-network.195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Workload="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.752 [INFO][4976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" HandleID="k8s-pod-network.195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Workload="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a3740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67bb4d5dcc-mw62c", "timestamp":"2025-09-11 00:18:43.752366802 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.752 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.808 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.809 [INFO][4976] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.856 [INFO][4976] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" host="localhost" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.868 [INFO][4976] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.877 [INFO][4976] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.881 [INFO][4976] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.888 [INFO][4976] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.888 [INFO][4976] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" host="localhost" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.892 [INFO][4976] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.902 [INFO][4976] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" host="localhost" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.919 [INFO][4976] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" host="localhost" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.920 [INFO][4976] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" host="localhost" Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.920 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:18:43.994640 containerd[1588]: 2025-09-11 00:18:43.920 [INFO][4976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" HandleID="k8s-pod-network.195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Workload="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0" Sep 11 00:18:43.995269 containerd[1588]: 2025-09-11 00:18:43.937 [INFO][4920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Namespace="calico-system" Pod="calico-kube-controllers-67bb4d5dcc-mw62c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0", GenerateName:"calico-kube-controllers-67bb4d5dcc-", Namespace:"calico-system", SelfLink:"", UID:"5e337bb8-db98-459c-b699-c0285320a54b", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67bb4d5dcc", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67bb4d5dcc-mw62c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali49bb710c4a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:43.995269 containerd[1588]: 2025-09-11 00:18:43.938 [INFO][4920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Namespace="calico-system" Pod="calico-kube-controllers-67bb4d5dcc-mw62c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0" Sep 11 00:18:43.995269 containerd[1588]: 2025-09-11 00:18:43.938 [INFO][4920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49bb710c4a5 ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Namespace="calico-system" Pod="calico-kube-controllers-67bb4d5dcc-mw62c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0" Sep 11 00:18:43.995269 containerd[1588]: 2025-09-11 00:18:43.960 [INFO][4920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Namespace="calico-system" Pod="calico-kube-controllers-67bb4d5dcc-mw62c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0" Sep 11 00:18:43.995269 containerd[1588]: 
2025-09-11 00:18:43.964 [INFO][4920] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Namespace="calico-system" Pod="calico-kube-controllers-67bb4d5dcc-mw62c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0", GenerateName:"calico-kube-controllers-67bb4d5dcc-", Namespace:"calico-system", SelfLink:"", UID:"5e337bb8-db98-459c-b699-c0285320a54b", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67bb4d5dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d", Pod:"calico-kube-controllers-67bb4d5dcc-mw62c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali49bb710c4a5", MAC:"aa:b3:0e:05:05:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:43.995269 containerd[1588]: 
2025-09-11 00:18:43.982 [INFO][4920] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" Namespace="calico-system" Pod="calico-kube-controllers-67bb4d5dcc-mw62c" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67bb4d5dcc--mw62c-eth0" Sep 11 00:18:44.004500 systemd-networkd[1496]: calib1f3daa6c69: Gained IPv6LL Sep 11 00:18:44.027086 containerd[1588]: time="2025-09-11T00:18:44.027018389Z" level=info msg="StartContainer for \"90fde1a1d4614b9b904d77aade27ab14a85daf0db21ac6d9ff39fc08629e953f\" returns successfully" Sep 11 00:18:44.041270 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:18:44.062998 containerd[1588]: time="2025-09-11T00:18:44.062615666Z" level=info msg="connecting to shim 195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d" address="unix:///run/containerd/s/af8420746a7500b4110ab5977b6e64dd51dff9f695f6ba657355543e99c1bf87" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:18:44.101440 systemd[1]: Started cri-containerd-195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d.scope - libcontainer container 195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d. 
Sep 11 00:18:44.122539 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:18:44.338988 systemd-networkd[1496]: caliaeaa8f713d6: Link UP Sep 11 00:18:44.339826 systemd-networkd[1496]: caliaeaa8f713d6: Gained carrier Sep 11 00:18:44.468979 containerd[1588]: time="2025-09-11T00:18:44.468907285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l8vjl,Uid:cc809b78-c1d4-448d-9695-d5c095a31b8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861\"" Sep 11 00:18:44.543027 kubelet[2767]: E0911 00:18:44.542968 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:44.543591 kubelet[2767]: E0911 00:18:44.543221 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:44.676405 systemd[1]: Started sshd@10-10.0.0.70:22-10.0.0.1:33384.service - OpenSSH per-connection server daemon (10.0.0.1:33384). 
Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:43.937 [INFO][5018] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--2sl89-eth0 goldmane-54d579b49d- calico-system a958eb4a-d79a-4536-a408-4a04f34cc149 859 0 2025-09-11 00:17:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-2sl89 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliaeaa8f713d6 [] [] }} ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Namespace="calico-system" Pod="goldmane-54d579b49d-2sl89" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--2sl89-" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:43.938 [INFO][5018] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Namespace="calico-system" Pod="goldmane-54d579b49d-2sl89" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--2sl89-eth0" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.017 [INFO][5082] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" HandleID="k8s-pod-network.95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Workload="localhost-k8s-goldmane--54d579b49d--2sl89-eth0" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.018 [INFO][5082] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" HandleID="k8s-pod-network.95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Workload="localhost-k8s-goldmane--54d579b49d--2sl89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00004fb90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-2sl89", "timestamp":"2025-09-11 00:18:44.017574758 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.018 [INFO][5082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.018 [INFO][5082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.018 [INFO][5082] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.034 [INFO][5082] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" host="localhost" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.054 [INFO][5082] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.067 [INFO][5082] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.073 [INFO][5082] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.082 [INFO][5082] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.082 [INFO][5082] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" 
host="localhost" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.084 [INFO][5082] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146 Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.163 [INFO][5082] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" host="localhost" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.331 [INFO][5082] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" host="localhost" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.331 [INFO][5082] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" host="localhost" Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.331 [INFO][5082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:18:44.862241 containerd[1588]: 2025-09-11 00:18:44.331 [INFO][5082] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" HandleID="k8s-pod-network.95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Workload="localhost-k8s-goldmane--54d579b49d--2sl89-eth0" Sep 11 00:18:44.864308 containerd[1588]: 2025-09-11 00:18:44.335 [INFO][5018] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Namespace="calico-system" Pod="goldmane-54d579b49d-2sl89" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--2sl89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--2sl89-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a958eb4a-d79a-4536-a408-4a04f34cc149", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-2sl89", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaeaa8f713d6", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:44.864308 containerd[1588]: 2025-09-11 00:18:44.335 [INFO][5018] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Namespace="calico-system" Pod="goldmane-54d579b49d-2sl89" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--2sl89-eth0" Sep 11 00:18:44.864308 containerd[1588]: 2025-09-11 00:18:44.335 [INFO][5018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaeaa8f713d6 ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Namespace="calico-system" Pod="goldmane-54d579b49d-2sl89" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--2sl89-eth0" Sep 11 00:18:44.864308 containerd[1588]: 2025-09-11 00:18:44.340 [INFO][5018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Namespace="calico-system" Pod="goldmane-54d579b49d-2sl89" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--2sl89-eth0" Sep 11 00:18:44.864308 containerd[1588]: 2025-09-11 00:18:44.341 [INFO][5018] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Namespace="calico-system" Pod="goldmane-54d579b49d-2sl89" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--2sl89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--2sl89-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a958eb4a-d79a-4536-a408-4a04f34cc149", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 17, 55, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146", Pod:"goldmane-54d579b49d-2sl89", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaeaa8f713d6", MAC:"96:50:a0:ca:22:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:18:44.864308 containerd[1588]: 2025-09-11 00:18:44.857 [INFO][5018] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" Namespace="calico-system" Pod="goldmane-54d579b49d-2sl89" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--2sl89-eth0" Sep 11 00:18:44.977346 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 33384 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:18:44.979511 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:44.984394 systemd-logind[1564]: New session 11 of user core. Sep 11 00:18:44.989361 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 11 00:18:45.106085 systemd-networkd[1496]: cali49bb710c4a5: Gained IPv6LL Sep 11 00:18:45.220459 systemd-networkd[1496]: cali2fbeaeb3a30: Gained IPv6LL Sep 11 00:18:45.363671 kubelet[2767]: I0911 00:18:45.363305 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7vw9f" podStartSLOduration=69.363285432 podStartE2EDuration="1m9.363285432s" podCreationTimestamp="2025-09-11 00:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:18:45.362538328 +0000 UTC m=+76.610783913" watchObservedRunningTime="2025-09-11 00:18:45.363285432 +0000 UTC m=+76.611531017" Sep 11 00:18:45.545712 kubelet[2767]: E0911 00:18:45.545660 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:45.545712 kubelet[2767]: E0911 00:18:45.545662 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:45.665964 containerd[1588]: time="2025-09-11T00:18:45.665911695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67bb4d5dcc-mw62c,Uid:5e337bb8-db98-459c-b699-c0285320a54b,Namespace:calico-system,Attempt:0,} returns sandbox id \"195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d\"" Sep 11 00:18:45.912145 sshd[5191]: Connection closed by 10.0.0.1 port 33384 Sep 11 00:18:45.913020 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:45.922679 systemd[1]: sshd@10-10.0.0.70:22-10.0.0.1:33384.service: Deactivated successfully. Sep 11 00:18:45.925995 systemd[1]: session-11.scope: Deactivated successfully. Sep 11 00:18:45.927949 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit. 
Sep 11 00:18:45.930243 systemd-logind[1564]: Removed session 11. Sep 11 00:18:45.948183 containerd[1588]: time="2025-09-11T00:18:45.948098389Z" level=info msg="connecting to shim 95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146" address="unix:///run/containerd/s/ec72decd6db26dee5b9621dfbe2ad1879390f58ac56694ec1ba594df3aa26a84" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:18:46.003433 systemd[1]: Started cri-containerd-95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146.scope - libcontainer container 95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146. Sep 11 00:18:46.031220 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:18:46.116590 systemd-networkd[1496]: caliaeaa8f713d6: Gained IPv6LL Sep 11 00:18:46.254821 containerd[1588]: time="2025-09-11T00:18:46.254746762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-2sl89,Uid:a958eb4a-d79a-4536-a408-4a04f34cc149,Namespace:calico-system,Attempt:0,} returns sandbox id \"95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146\"" Sep 11 00:18:46.586261 kubelet[2767]: E0911 00:18:46.549226 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:46.877139 containerd[1588]: time="2025-09-11T00:18:46.876952058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:46.878491 containerd[1588]: time="2025-09-11T00:18:46.878413095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 11 00:18:46.879658 containerd[1588]: time="2025-09-11T00:18:46.879615988Z" level=info msg="ImageCreate event 
name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:46.884741 containerd[1588]: time="2025-09-11T00:18:46.883614673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:46.884741 containerd[1588]: time="2025-09-11T00:18:46.884582909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 6.502145694s" Sep 11 00:18:46.884741 containerd[1588]: time="2025-09-11T00:18:46.884630551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:18:46.887385 containerd[1588]: time="2025-09-11T00:18:46.887335368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:18:46.888002 containerd[1588]: time="2025-09-11T00:18:46.887970820Z" level=info msg="CreateContainer within sandbox \"5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:18:46.898755 containerd[1588]: time="2025-09-11T00:18:46.898677938Z" level=info msg="Container 6698974b501fbeef19bfc55a2f46d2617d7de695d00d670450d2c4588bc4b3f6: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:46.909117 containerd[1588]: time="2025-09-11T00:18:46.909054124Z" level=info msg="CreateContainer within sandbox \"5ee484ddcc2e30f0cceccf648bdd61d9cb575eb5899ca6b94326064920b3821d\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6698974b501fbeef19bfc55a2f46d2617d7de695d00d670450d2c4588bc4b3f6\"" Sep 11 00:18:46.911238 containerd[1588]: time="2025-09-11T00:18:46.909883095Z" level=info msg="StartContainer for \"6698974b501fbeef19bfc55a2f46d2617d7de695d00d670450d2c4588bc4b3f6\"" Sep 11 00:18:46.911793 containerd[1588]: time="2025-09-11T00:18:46.911743332Z" level=info msg="connecting to shim 6698974b501fbeef19bfc55a2f46d2617d7de695d00d670450d2c4588bc4b3f6" address="unix:///run/containerd/s/3924818ad34127e3076e9183841d74a87e1f928678b1ff81cfc14a04ac481cbd" protocol=ttrpc version=3 Sep 11 00:18:46.945391 systemd[1]: Started cri-containerd-6698974b501fbeef19bfc55a2f46d2617d7de695d00d670450d2c4588bc4b3f6.scope - libcontainer container 6698974b501fbeef19bfc55a2f46d2617d7de695d00d670450d2c4588bc4b3f6. Sep 11 00:18:47.003332 containerd[1588]: time="2025-09-11T00:18:47.003279144Z" level=info msg="StartContainer for \"6698974b501fbeef19bfc55a2f46d2617d7de695d00d670450d2c4588bc4b3f6\" returns successfully" Sep 11 00:18:47.356110 containerd[1588]: time="2025-09-11T00:18:47.354774124Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:47.356495 containerd[1588]: time="2025-09-11T00:18:47.356445021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 11 00:18:47.359874 containerd[1588]: time="2025-09-11T00:18:47.359782989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 472.414949ms" Sep 11 00:18:47.359874 containerd[1588]: 
time="2025-09-11T00:18:47.359857651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:18:47.365458 containerd[1588]: time="2025-09-11T00:18:47.363766698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 00:18:47.367315 containerd[1588]: time="2025-09-11T00:18:47.367189619Z" level=info msg="CreateContainer within sandbox \"8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:18:47.383767 containerd[1588]: time="2025-09-11T00:18:47.383681442Z" level=info msg="Container bae3ac81c2b38afcb0ce50febfa8aa786c8116f458d70db722aa721be711492f: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:47.413324 containerd[1588]: time="2025-09-11T00:18:47.413222068Z" level=info msg="CreateContainer within sandbox \"8bca23e99e8a9f80599a2ce4b2618f0ef91f8c581869dd0d3861d78cdc46d594\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bae3ac81c2b38afcb0ce50febfa8aa786c8116f458d70db722aa721be711492f\"" Sep 11 00:18:47.414663 containerd[1588]: time="2025-09-11T00:18:47.414603824Z" level=info msg="StartContainer for \"bae3ac81c2b38afcb0ce50febfa8aa786c8116f458d70db722aa721be711492f\"" Sep 11 00:18:47.419647 containerd[1588]: time="2025-09-11T00:18:47.417398727Z" level=info msg="connecting to shim bae3ac81c2b38afcb0ce50febfa8aa786c8116f458d70db722aa721be711492f" address="unix:///run/containerd/s/29bf0daa30abcf9de87541a63fc2548c822cd87fa28dc36212905d9e039dcfd1" protocol=ttrpc version=3 Sep 11 00:18:47.480568 systemd[1]: Started cri-containerd-bae3ac81c2b38afcb0ce50febfa8aa786c8116f458d70db722aa721be711492f.scope - libcontainer container bae3ac81c2b38afcb0ce50febfa8aa786c8116f458d70db722aa721be711492f. 
Sep 11 00:18:47.595574 containerd[1588]: time="2025-09-11T00:18:47.594560940Z" level=info msg="StartContainer for \"bae3ac81c2b38afcb0ce50febfa8aa786c8116f458d70db722aa721be711492f\" returns successfully" Sep 11 00:18:48.616446 kubelet[2767]: I0911 00:18:48.614907 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-5sb9t" podStartSLOduration=50.110330604 podStartE2EDuration="56.614883012s" podCreationTimestamp="2025-09-11 00:17:52 +0000 UTC" firstStartedPulling="2025-09-11 00:18:40.381627136 +0000 UTC m=+71.629872721" lastFinishedPulling="2025-09-11 00:18:46.886179544 +0000 UTC m=+78.134425129" observedRunningTime="2025-09-11 00:18:47.602786552 +0000 UTC m=+78.851032157" watchObservedRunningTime="2025-09-11 00:18:48.614883012 +0000 UTC m=+79.863128597" Sep 11 00:18:48.988938 kubelet[2767]: I0911 00:18:48.987493 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b8f7cbc4f-pnfv8" podStartSLOduration=52.709018251 podStartE2EDuration="56.987387993s" podCreationTimestamp="2025-09-11 00:17:52 +0000 UTC" firstStartedPulling="2025-09-11 00:18:43.084026141 +0000 UTC m=+74.332271716" lastFinishedPulling="2025-09-11 00:18:47.362395853 +0000 UTC m=+78.610641458" observedRunningTime="2025-09-11 00:18:48.616524585 +0000 UTC m=+79.864770170" watchObservedRunningTime="2025-09-11 00:18:48.987387993 +0000 UTC m=+80.235633608" Sep 11 00:18:50.448143 containerd[1588]: time="2025-09-11T00:18:50.448054152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:50.449343 containerd[1588]: time="2025-09-11T00:18:50.449322645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 11 00:18:50.450700 containerd[1588]: time="2025-09-11T00:18:50.450662816Z" level=info msg="ImageCreate event 
name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:50.454449 containerd[1588]: time="2025-09-11T00:18:50.454346191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:50.455125 containerd[1588]: time="2025-09-11T00:18:50.455057319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.09123287s" Sep 11 00:18:50.455125 containerd[1588]: time="2025-09-11T00:18:50.455116432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 11 00:18:50.456391 containerd[1588]: time="2025-09-11T00:18:50.456332345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 11 00:18:50.457531 containerd[1588]: time="2025-09-11T00:18:50.457486149Z" level=info msg="CreateContainer within sandbox \"c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 00:18:50.474594 containerd[1588]: time="2025-09-11T00:18:50.474466614Z" level=info msg="Container d5705459239254a08d6b5fb5f53788fe7b12b7e0c00c745e3c8bf65a2c0cdb61: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:50.487937 containerd[1588]: time="2025-09-11T00:18:50.487866611Z" level=info msg="CreateContainer within sandbox \"c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container 
id \"d5705459239254a08d6b5fb5f53788fe7b12b7e0c00c745e3c8bf65a2c0cdb61\"" Sep 11 00:18:50.488684 containerd[1588]: time="2025-09-11T00:18:50.488649667Z" level=info msg="StartContainer for \"d5705459239254a08d6b5fb5f53788fe7b12b7e0c00c745e3c8bf65a2c0cdb61\"" Sep 11 00:18:50.490839 containerd[1588]: time="2025-09-11T00:18:50.490799994Z" level=info msg="connecting to shim d5705459239254a08d6b5fb5f53788fe7b12b7e0c00c745e3c8bf65a2c0cdb61" address="unix:///run/containerd/s/d04631bc16891449003bfbee52d251cedf1f8b4edeae06945fd49d90ac15b210" protocol=ttrpc version=3 Sep 11 00:18:50.531404 systemd[1]: Started cri-containerd-d5705459239254a08d6b5fb5f53788fe7b12b7e0c00c745e3c8bf65a2c0cdb61.scope - libcontainer container d5705459239254a08d6b5fb5f53788fe7b12b7e0c00c745e3c8bf65a2c0cdb61. Sep 11 00:18:50.848662 containerd[1588]: time="2025-09-11T00:18:50.848610596Z" level=info msg="StartContainer for \"d5705459239254a08d6b5fb5f53788fe7b12b7e0c00c745e3c8bf65a2c0cdb61\" returns successfully" Sep 11 00:18:50.937605 systemd[1]: Started sshd@11-10.0.0.70:22-10.0.0.1:55018.service - OpenSSH per-connection server daemon (10.0.0.1:55018). Sep 11 00:18:50.994739 sshd[5397]: Accepted publickey for core from 10.0.0.1 port 55018 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:18:50.996748 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:51.001684 systemd-logind[1564]: New session 12 of user core. Sep 11 00:18:51.013377 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 11 00:18:51.189470 sshd[5400]: Connection closed by 10.0.0.1 port 55018 Sep 11 00:18:51.189749 sshd-session[5397]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:51.205143 systemd[1]: sshd@11-10.0.0.70:22-10.0.0.1:55018.service: Deactivated successfully. Sep 11 00:18:51.209560 systemd[1]: session-12.scope: Deactivated successfully. Sep 11 00:18:51.210859 systemd-logind[1564]: Session 12 logged out. 
Waiting for processes to exit. Sep 11 00:18:51.215878 systemd[1]: Started sshd@12-10.0.0.70:22-10.0.0.1:55034.service - OpenSSH per-connection server daemon (10.0.0.1:55034). Sep 11 00:18:51.217253 systemd-logind[1564]: Removed session 12. Sep 11 00:18:51.279288 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 55034 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:18:51.345437 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:51.350635 systemd-logind[1564]: New session 13 of user core. Sep 11 00:18:51.361343 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 11 00:18:51.609698 sshd[5419]: Connection closed by 10.0.0.1 port 55034 Sep 11 00:18:51.612074 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:51.621146 systemd[1]: sshd@12-10.0.0.70:22-10.0.0.1:55034.service: Deactivated successfully. Sep 11 00:18:51.625517 systemd[1]: session-13.scope: Deactivated successfully. Sep 11 00:18:51.627299 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. Sep 11 00:18:51.633193 systemd[1]: Started sshd@13-10.0.0.70:22-10.0.0.1:55036.service - OpenSSH per-connection server daemon (10.0.0.1:55036). Sep 11 00:18:51.635066 systemd-logind[1564]: Removed session 13. Sep 11 00:18:51.691789 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 55036 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:18:51.693854 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:51.699494 systemd-logind[1564]: New session 14 of user core. Sep 11 00:18:51.713543 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 11 00:18:51.852248 sshd[5433]: Connection closed by 10.0.0.1 port 55036 Sep 11 00:18:51.851947 sshd-session[5430]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:51.857912 systemd[1]: sshd@13-10.0.0.70:22-10.0.0.1:55036.service: Deactivated successfully. Sep 11 00:18:51.860301 systemd[1]: session-14.scope: Deactivated successfully. Sep 11 00:18:51.861533 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit. Sep 11 00:18:51.863348 systemd-logind[1564]: Removed session 14. Sep 11 00:18:53.856966 kubelet[2767]: E0911 00:18:53.856926 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:18:53.962809 containerd[1588]: time="2025-09-11T00:18:53.962715099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:53.964319 containerd[1588]: time="2025-09-11T00:18:53.964261717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 11 00:18:53.975398 containerd[1588]: time="2025-09-11T00:18:53.975320445Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:53.978445 containerd[1588]: time="2025-09-11T00:18:53.978381429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:18:53.979149 containerd[1588]: time="2025-09-11T00:18:53.979117045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.522745226s" Sep 11 00:18:53.979258 containerd[1588]: time="2025-09-11T00:18:53.979154959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 11 00:18:53.980976 containerd[1588]: time="2025-09-11T00:18:53.980944510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 00:18:53.993165 containerd[1588]: time="2025-09-11T00:18:53.993096099Z" level=info msg="CreateContainer within sandbox \"195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 11 00:18:54.007881 containerd[1588]: time="2025-09-11T00:18:54.007707847Z" level=info msg="Container 4bfbe139347622156be6cb63456946dcd62e1966cea0ea543952015979281f96: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:18:54.019660 containerd[1588]: time="2025-09-11T00:18:54.019590489Z" level=info msg="CreateContainer within sandbox \"195f353c12201121fcfc1a801565a6459a340762ae62cc03f88f7f9faf5ffa8d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4bfbe139347622156be6cb63456946dcd62e1966cea0ea543952015979281f96\"" Sep 11 00:18:54.020526 containerd[1588]: time="2025-09-11T00:18:54.020485160Z" level=info msg="StartContainer for \"4bfbe139347622156be6cb63456946dcd62e1966cea0ea543952015979281f96\"" Sep 11 00:18:54.022253 containerd[1588]: time="2025-09-11T00:18:54.022220480Z" level=info msg="connecting to shim 4bfbe139347622156be6cb63456946dcd62e1966cea0ea543952015979281f96" address="unix:///run/containerd/s/af8420746a7500b4110ab5977b6e64dd51dff9f695f6ba657355543e99c1bf87" protocol=ttrpc version=3 Sep 11 00:18:54.057499 systemd[1]: Started 
cri-containerd-4bfbe139347622156be6cb63456946dcd62e1966cea0ea543952015979281f96.scope - libcontainer container 4bfbe139347622156be6cb63456946dcd62e1966cea0ea543952015979281f96. Sep 11 00:18:54.932235 containerd[1588]: time="2025-09-11T00:18:54.932083251Z" level=info msg="StartContainer for \"4bfbe139347622156be6cb63456946dcd62e1966cea0ea543952015979281f96\" returns successfully" Sep 11 00:18:55.979403 containerd[1588]: time="2025-09-11T00:18:55.979354251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bfbe139347622156be6cb63456946dcd62e1966cea0ea543952015979281f96\" id:\"f4b9ccade2aecff6eb11982e4e126523dc8116ec3db937b68893b01b1c64741c\" pid:5520 exited_at:{seconds:1757549935 nanos:979062483}" Sep 11 00:18:56.425927 kubelet[2767]: I0911 00:18:56.425735 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67bb4d5dcc-mw62c" podStartSLOduration=53.112861347 podStartE2EDuration="1m1.425665702s" podCreationTimestamp="2025-09-11 00:17:55 +0000 UTC" firstStartedPulling="2025-09-11 00:18:45.667705604 +0000 UTC m=+76.915951199" lastFinishedPulling="2025-09-11 00:18:53.980509969 +0000 UTC m=+85.228755554" observedRunningTime="2025-09-11 00:18:56.337168796 +0000 UTC m=+87.585414381" watchObservedRunningTime="2025-09-11 00:18:56.425665702 +0000 UTC m=+87.673911317" Sep 11 00:18:56.866483 systemd[1]: Started sshd@14-10.0.0.70:22-10.0.0.1:55040.service - OpenSSH per-connection server daemon (10.0.0.1:55040). Sep 11 00:18:56.951615 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 55040 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:18:56.953972 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:18:56.961165 systemd-logind[1564]: New session 15 of user core. Sep 11 00:18:56.968859 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 11 00:18:57.497045 sshd[5534]: Connection closed by 10.0.0.1 port 55040 Sep 11 00:18:57.497941 sshd-session[5531]: pam_unix(sshd:session): session closed for user core Sep 11 00:18:57.504899 systemd[1]: sshd@14-10.0.0.70:22-10.0.0.1:55040.service: Deactivated successfully. Sep 11 00:18:57.507871 systemd[1]: session-15.scope: Deactivated successfully. Sep 11 00:18:57.509635 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit. Sep 11 00:18:57.512057 systemd-logind[1564]: Removed session 15. Sep 11 00:18:58.400966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296879433.mount: Deactivated successfully. Sep 11 00:19:00.533478 containerd[1588]: time="2025-09-11T00:19:00.533416404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:00.535031 containerd[1588]: time="2025-09-11T00:19:00.534995601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 11 00:19:00.536487 containerd[1588]: time="2025-09-11T00:19:00.536438788Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:00.539747 containerd[1588]: time="2025-09-11T00:19:00.539625496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:00.540264 containerd[1588]: time="2025-09-11T00:19:00.540229042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.559249094s" Sep 11 00:19:00.540325 containerd[1588]: time="2025-09-11T00:19:00.540265793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 11 00:19:00.541492 containerd[1588]: time="2025-09-11T00:19:00.541445303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 11 00:19:00.543867 containerd[1588]: time="2025-09-11T00:19:00.543835616Z" level=info msg="CreateContainer within sandbox \"95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 11 00:19:00.560231 containerd[1588]: time="2025-09-11T00:19:00.559407373Z" level=info msg="Container 62482a23c23621fa4cd3e831ecd8321db32cb63fe39d0d043d18684e18b1f57e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:19:00.562872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056125180.mount: Deactivated successfully. 
Sep 11 00:19:00.570864 containerd[1588]: time="2025-09-11T00:19:00.570735645Z" level=info msg="CreateContainer within sandbox \"95dfd4de1b8bf683558cb92fe3294b6a83196eb7ca3b8e7d8ad4a3b089cf3146\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"62482a23c23621fa4cd3e831ecd8321db32cb63fe39d0d043d18684e18b1f57e\"" Sep 11 00:19:00.571937 containerd[1588]: time="2025-09-11T00:19:00.571887804Z" level=info msg="StartContainer for \"62482a23c23621fa4cd3e831ecd8321db32cb63fe39d0d043d18684e18b1f57e\"" Sep 11 00:19:00.573582 containerd[1588]: time="2025-09-11T00:19:00.573527397Z" level=info msg="connecting to shim 62482a23c23621fa4cd3e831ecd8321db32cb63fe39d0d043d18684e18b1f57e" address="unix:///run/containerd/s/ec72decd6db26dee5b9621dfbe2ad1879390f58ac56694ec1ba594df3aa26a84" protocol=ttrpc version=3 Sep 11 00:19:00.610230 containerd[1588]: time="2025-09-11T00:19:00.609566695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a\" id:\"0756266309e80b7c0557742f2133b05395bca9763b341920822d6dd6bca939f5\" pid:5572 exit_status:1 exited_at:{seconds:1757549940 nanos:609179633}" Sep 11 00:19:00.640741 systemd[1]: Started cri-containerd-62482a23c23621fa4cd3e831ecd8321db32cb63fe39d0d043d18684e18b1f57e.scope - libcontainer container 62482a23c23621fa4cd3e831ecd8321db32cb63fe39d0d043d18684e18b1f57e. 
Sep 11 00:19:00.856609 kubelet[2767]: E0911 00:19:00.856405 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:01.041419 containerd[1588]: time="2025-09-11T00:19:01.041353401Z" level=info msg="StartContainer for \"62482a23c23621fa4cd3e831ecd8321db32cb63fe39d0d043d18684e18b1f57e\" returns successfully" Sep 11 00:19:02.129891 containerd[1588]: time="2025-09-11T00:19:02.129837698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62482a23c23621fa4cd3e831ecd8321db32cb63fe39d0d043d18684e18b1f57e\" id:\"9c6bcfc3e72d65598a80a57c54e7265e064a744b041a16a99b7ebf1263232bb3\" pid:5633 exited_at:{seconds:1757549942 nanos:129418074}" Sep 11 00:19:02.181194 kubelet[2767]: I0911 00:19:02.181100 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-2sl89" podStartSLOduration=52.899498711 podStartE2EDuration="1m7.181081996s" podCreationTimestamp="2025-09-11 00:17:55 +0000 UTC" firstStartedPulling="2025-09-11 00:18:46.259636486 +0000 UTC m=+77.507882071" lastFinishedPulling="2025-09-11 00:19:00.541219771 +0000 UTC m=+91.789465356" observedRunningTime="2025-09-11 00:19:02.180509509 +0000 UTC m=+93.428755104" watchObservedRunningTime="2025-09-11 00:19:02.181081996 +0000 UTC m=+93.429327581" Sep 11 00:19:02.510473 systemd[1]: Started sshd@15-10.0.0.70:22-10.0.0.1:53956.service - OpenSSH per-connection server daemon (10.0.0.1:53956). Sep 11 00:19:02.600557 sshd[5650]: Accepted publickey for core from 10.0.0.1 port 53956 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:19:02.602453 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:19:02.606669 systemd-logind[1564]: New session 16 of user core. Sep 11 00:19:02.621332 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 11 00:19:02.892237 sshd[5653]: Connection closed by 10.0.0.1 port 53956 Sep 11 00:19:02.892616 sshd-session[5650]: pam_unix(sshd:session): session closed for user core Sep 11 00:19:02.897546 systemd[1]: sshd@15-10.0.0.70:22-10.0.0.1:53956.service: Deactivated successfully. Sep 11 00:19:02.899952 systemd[1]: session-16.scope: Deactivated successfully. Sep 11 00:19:02.900915 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit. Sep 11 00:19:02.902470 systemd-logind[1564]: Removed session 16. Sep 11 00:19:03.807150 containerd[1588]: time="2025-09-11T00:19:03.807078863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:03.812142 containerd[1588]: time="2025-09-11T00:19:03.812026796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 11 00:19:03.822941 containerd[1588]: time="2025-09-11T00:19:03.822841661Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:03.827380 containerd[1588]: time="2025-09-11T00:19:03.827309933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:19:03.828320 containerd[1588]: time="2025-09-11T00:19:03.828265787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 
3.28677173s" Sep 11 00:19:03.828417 containerd[1588]: time="2025-09-11T00:19:03.828323839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 11 00:19:03.831897 containerd[1588]: time="2025-09-11T00:19:03.831810969Z" level=info msg="CreateContainer within sandbox \"c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 11 00:19:03.846213 containerd[1588]: time="2025-09-11T00:19:03.846126540Z" level=info msg="Container eac50c0cec3d80f7485e1eab8bc72394f10ea08d2f56b5ee78e182946de340a3: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:19:03.874846 containerd[1588]: time="2025-09-11T00:19:03.874767774Z" level=info msg="CreateContainer within sandbox \"c44a257ba26b161ba6dd881932cba8997c30282164a16c4234090ac22651f861\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"eac50c0cec3d80f7485e1eab8bc72394f10ea08d2f56b5ee78e182946de340a3\"" Sep 11 00:19:03.875655 containerd[1588]: time="2025-09-11T00:19:03.875575344Z" level=info msg="StartContainer for \"eac50c0cec3d80f7485e1eab8bc72394f10ea08d2f56b5ee78e182946de340a3\"" Sep 11 00:19:03.877856 containerd[1588]: time="2025-09-11T00:19:03.877811873Z" level=info msg="connecting to shim eac50c0cec3d80f7485e1eab8bc72394f10ea08d2f56b5ee78e182946de340a3" address="unix:///run/containerd/s/d04631bc16891449003bfbee52d251cedf1f8b4edeae06945fd49d90ac15b210" protocol=ttrpc version=3 Sep 11 00:19:03.911993 systemd[1]: Started cri-containerd-eac50c0cec3d80f7485e1eab8bc72394f10ea08d2f56b5ee78e182946de340a3.scope - libcontainer container eac50c0cec3d80f7485e1eab8bc72394f10ea08d2f56b5ee78e182946de340a3. 
Sep 11 00:19:04.012568 containerd[1588]: time="2025-09-11T00:19:04.012495178Z" level=info msg="StartContainer for \"eac50c0cec3d80f7485e1eab8bc72394f10ea08d2f56b5ee78e182946de340a3\" returns successfully" Sep 11 00:19:04.071445 kubelet[2767]: I0911 00:19:04.070192 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l8vjl" podStartSLOduration=49.710687549 podStartE2EDuration="1m9.070166941s" podCreationTimestamp="2025-09-11 00:17:55 +0000 UTC" firstStartedPulling="2025-09-11 00:18:44.470251386 +0000 UTC m=+75.718496971" lastFinishedPulling="2025-09-11 00:19:03.829730778 +0000 UTC m=+95.077976363" observedRunningTime="2025-09-11 00:19:04.069822691 +0000 UTC m=+95.318068286" watchObservedRunningTime="2025-09-11 00:19:04.070166941 +0000 UTC m=+95.318412526" Sep 11 00:19:04.339793 kubelet[2767]: I0911 00:19:04.339669 2767 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 11 00:19:04.339793 kubelet[2767]: I0911 00:19:04.339714 2767 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 11 00:19:04.860018 kubelet[2767]: E0911 00:19:04.859963 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:19:07.908133 systemd[1]: Started sshd@16-10.0.0.70:22-10.0.0.1:53970.service - OpenSSH per-connection server daemon (10.0.0.1:53970). Sep 11 00:19:08.000067 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 53970 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:19:08.002214 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:19:08.007834 systemd-logind[1564]: New session 17 of user core. 
Sep 11 00:19:08.018486 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 11 00:19:08.237591 sshd[5711]: Connection closed by 10.0.0.1 port 53970 Sep 11 00:19:08.238018 sshd-session[5708]: pam_unix(sshd:session): session closed for user core Sep 11 00:19:08.243702 systemd[1]: sshd@16-10.0.0.70:22-10.0.0.1:53970.service: Deactivated successfully. Sep 11 00:19:08.247027 systemd[1]: session-17.scope: Deactivated successfully. Sep 11 00:19:08.248788 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit. Sep 11 00:19:08.251283 systemd-logind[1564]: Removed session 17. Sep 11 00:19:13.252570 systemd[1]: Started sshd@17-10.0.0.70:22-10.0.0.1:49144.service - OpenSSH per-connection server daemon (10.0.0.1:49144). Sep 11 00:19:13.310073 sshd[5732]: Accepted publickey for core from 10.0.0.1 port 49144 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM Sep 11 00:19:13.312367 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:19:13.317478 systemd-logind[1564]: New session 18 of user core. Sep 11 00:19:13.324503 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 11 00:19:13.460851 sshd[5735]: Connection closed by 10.0.0.1 port 49144 Sep 11 00:19:13.461298 sshd-session[5732]: pam_unix(sshd:session): session closed for user core Sep 11 00:19:13.467058 systemd[1]: sshd@17-10.0.0.70:22-10.0.0.1:49144.service: Deactivated successfully. Sep 11 00:19:13.469444 systemd[1]: session-18.scope: Deactivated successfully. Sep 11 00:19:13.470277 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit. Sep 11 00:19:13.471985 systemd-logind[1564]: Removed session 18. Sep 11 00:19:18.476106 systemd[1]: Started sshd@18-10.0.0.70:22-10.0.0.1:49152.service - OpenSSH per-connection server daemon (10.0.0.1:49152). 
Sep 11 00:19:18.532973 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 49152 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:19:18.535760 sshd-session[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:19:18.542142 systemd-logind[1564]: New session 19 of user core.
Sep 11 00:19:18.551428 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 00:19:18.730598 sshd[5752]: Connection closed by 10.0.0.1 port 49152
Sep 11 00:19:18.732616 sshd-session[5749]: pam_unix(sshd:session): session closed for user core
Sep 11 00:19:18.744443 systemd[1]: sshd@18-10.0.0.70:22-10.0.0.1:49152.service: Deactivated successfully.
Sep 11 00:19:18.747182 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 00:19:18.748354 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit.
Sep 11 00:19:18.752746 systemd[1]: Started sshd@19-10.0.0.70:22-10.0.0.1:49162.service - OpenSSH per-connection server daemon (10.0.0.1:49162).
Sep 11 00:19:18.753646 systemd-logind[1564]: Removed session 19.
Sep 11 00:19:18.822414 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 49162 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:19:18.824284 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:19:18.830732 systemd-logind[1564]: New session 20 of user core.
Sep 11 00:19:18.845884 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 00:19:19.197460 sshd[5768]: Connection closed by 10.0.0.1 port 49162
Sep 11 00:19:19.197704 sshd-session[5765]: pam_unix(sshd:session): session closed for user core
Sep 11 00:19:19.205640 systemd[1]: sshd@19-10.0.0.70:22-10.0.0.1:49162.service: Deactivated successfully.
Sep 11 00:19:19.208282 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 00:19:19.209725 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit.
Sep 11 00:19:19.213847 systemd[1]: Started sshd@20-10.0.0.70:22-10.0.0.1:49176.service - OpenSSH per-connection server daemon (10.0.0.1:49176).
Sep 11 00:19:19.215459 systemd-logind[1564]: Removed session 20.
Sep 11 00:19:19.290059 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 49176 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:19:19.293627 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:19:19.300275 systemd-logind[1564]: New session 21 of user core.
Sep 11 00:19:19.312584 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 11 00:19:19.869504 sshd[5782]: Connection closed by 10.0.0.1 port 49176
Sep 11 00:19:19.870529 sshd-session[5779]: pam_unix(sshd:session): session closed for user core
Sep 11 00:19:19.888455 systemd[1]: sshd@20-10.0.0.70:22-10.0.0.1:49176.service: Deactivated successfully.
Sep 11 00:19:19.893122 systemd[1]: session-21.scope: Deactivated successfully.
Sep 11 00:19:19.897592 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit.
Sep 11 00:19:19.902005 systemd-logind[1564]: Removed session 21.
Sep 11 00:19:19.905020 systemd[1]: Started sshd@21-10.0.0.70:22-10.0.0.1:49178.service - OpenSSH per-connection server daemon (10.0.0.1:49178).
Sep 11 00:19:19.976616 sshd[5805]: Accepted publickey for core from 10.0.0.1 port 49178 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:19:19.978726 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:19:19.983593 systemd-logind[1564]: New session 22 of user core.
Sep 11 00:19:19.992362 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 11 00:19:20.364062 sshd[5808]: Connection closed by 10.0.0.1 port 49178
Sep 11 00:19:20.364602 sshd-session[5805]: pam_unix(sshd:session): session closed for user core
Sep 11 00:19:20.379155 systemd[1]: sshd@21-10.0.0.70:22-10.0.0.1:49178.service: Deactivated successfully.
Sep 11 00:19:20.382273 systemd[1]: session-22.scope: Deactivated successfully.
Sep 11 00:19:20.387323 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit.
Sep 11 00:19:20.392685 systemd[1]: Started sshd@22-10.0.0.70:22-10.0.0.1:45272.service - OpenSSH per-connection server daemon (10.0.0.1:45272).
Sep 11 00:19:20.394543 systemd-logind[1564]: Removed session 22.
Sep 11 00:19:20.456823 sshd[5820]: Accepted publickey for core from 10.0.0.1 port 45272 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:19:20.459297 sshd-session[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:19:20.466711 systemd-logind[1564]: New session 23 of user core.
Sep 11 00:19:20.481565 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 11 00:19:20.603456 sshd[5823]: Connection closed by 10.0.0.1 port 45272
Sep 11 00:19:20.603834 sshd-session[5820]: pam_unix(sshd:session): session closed for user core
Sep 11 00:19:20.609283 systemd[1]: sshd@22-10.0.0.70:22-10.0.0.1:45272.service: Deactivated successfully.
Sep 11 00:19:20.612158 systemd[1]: session-23.scope: Deactivated successfully.
Sep 11 00:19:20.613335 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit.
Sep 11 00:19:20.615217 systemd-logind[1564]: Removed session 23.
Sep 11 00:19:23.857082 kubelet[2767]: E0911 00:19:23.857014 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 00:19:25.622243 systemd[1]: Started sshd@23-10.0.0.70:22-10.0.0.1:45276.service - OpenSSH per-connection server daemon (10.0.0.1:45276).
Sep 11 00:19:25.689931 sshd[5839]: Accepted publickey for core from 10.0.0.1 port 45276 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:19:25.692374 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:19:25.698618 systemd-logind[1564]: New session 24 of user core.
Sep 11 00:19:25.705362 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 11 00:19:25.981222 containerd[1588]: time="2025-09-11T00:19:25.981061035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bfbe139347622156be6cb63456946dcd62e1966cea0ea543952015979281f96\" id:\"4941f693d22b7ccfc8de7642f5b9404c5c6571b5d8e8eb813e397876b87c50b4\" pid:5863 exited_at:{seconds:1757549965 nanos:980751289}"
Sep 11 00:19:25.991707 sshd[5842]: Connection closed by 10.0.0.1 port 45276
Sep 11 00:19:25.992093 sshd-session[5839]: pam_unix(sshd:session): session closed for user core
Sep 11 00:19:25.997113 systemd[1]: sshd@23-10.0.0.70:22-10.0.0.1:45276.service: Deactivated successfully.
Sep 11 00:19:25.999496 systemd[1]: session-24.scope: Deactivated successfully.
Sep 11 00:19:26.000523 systemd-logind[1564]: Session 24 logged out. Waiting for processes to exit.
Sep 11 00:19:26.001823 systemd-logind[1564]: Removed session 24.
Sep 11 00:19:26.230300 containerd[1588]: time="2025-09-11T00:19:26.230134402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bfbe139347622156be6cb63456946dcd62e1966cea0ea543952015979281f96\" id:\"b8df334ea4a90ec92ff3ed86f505511e6d2b03234f031e27e1ecbbb9b7e1f4b7\" pid:5888 exited_at:{seconds:1757549966 nanos:229832881}"
Sep 11 00:19:30.480075 containerd[1588]: time="2025-09-11T00:19:30.479976133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"361a0d5e7113dc03dcb51d097e926c4faa673b07b835dafda1fb2bd04d522e8a\" id:\"a49130cf41b1b611fdd7db713a47daa186351e198a208fa961c816ffad4195de\" pid:5915 exited_at:{seconds:1757549970 nanos:479550312}"
Sep 11 00:19:31.007394 systemd[1]: Started sshd@24-10.0.0.70:22-10.0.0.1:41310.service - OpenSSH per-connection server daemon (10.0.0.1:41310).
Sep 11 00:19:31.064424 sshd[5929]: Accepted publickey for core from 10.0.0.1 port 41310 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:19:31.067339 sshd-session[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:19:31.073274 systemd-logind[1564]: New session 25 of user core.
Sep 11 00:19:31.083365 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 11 00:19:31.205600 sshd[5932]: Connection closed by 10.0.0.1 port 41310
Sep 11 00:19:31.206040 sshd-session[5929]: pam_unix(sshd:session): session closed for user core
Sep 11 00:19:31.211359 systemd[1]: sshd@24-10.0.0.70:22-10.0.0.1:41310.service: Deactivated successfully.
Sep 11 00:19:31.214335 systemd[1]: session-25.scope: Deactivated successfully.
Sep 11 00:19:31.215369 systemd-logind[1564]: Session 25 logged out. Waiting for processes to exit.
Sep 11 00:19:31.217475 systemd-logind[1564]: Removed session 25.
Sep 11 00:19:32.144561 containerd[1588]: time="2025-09-11T00:19:32.144503289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62482a23c23621fa4cd3e831ecd8321db32cb63fe39d0d043d18684e18b1f57e\" id:\"3d01595b9bd781401f05d11c815a7069ed4f82b75cd46b4b06349a4097e43e4e\" pid:5957 exited_at:{seconds:1757549972 nanos:144110723}"
Sep 11 00:19:36.224837 systemd[1]: Started sshd@25-10.0.0.70:22-10.0.0.1:41326.service - OpenSSH per-connection server daemon (10.0.0.1:41326).
Sep 11 00:19:36.299418 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 41326 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:19:36.301468 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:19:36.306904 systemd-logind[1564]: New session 26 of user core.
Sep 11 00:19:36.316464 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 11 00:19:36.532023 sshd[5972]: Connection closed by 10.0.0.1 port 41326
Sep 11 00:19:36.532411 sshd-session[5969]: pam_unix(sshd:session): session closed for user core
Sep 11 00:19:36.540318 systemd[1]: sshd@25-10.0.0.70:22-10.0.0.1:41326.service: Deactivated successfully.
Sep 11 00:19:36.543068 systemd[1]: session-26.scope: Deactivated successfully.
Sep 11 00:19:36.544187 systemd-logind[1564]: Session 26 logged out. Waiting for processes to exit.
Sep 11 00:19:36.546321 systemd-logind[1564]: Removed session 26.
Sep 11 00:19:41.549882 systemd[1]: Started sshd@26-10.0.0.70:22-10.0.0.1:40192.service - OpenSSH per-connection server daemon (10.0.0.1:40192).
Sep 11 00:19:41.621877 sshd[5987]: Accepted publickey for core from 10.0.0.1 port 40192 ssh2: RSA SHA256:iG/lPcoyZucxTWaZiRVFFdQ+jOuDk1s0lgCqGD+sReM
Sep 11 00:19:41.623677 sshd-session[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:19:41.631923 systemd-logind[1564]: New session 27 of user core.
Sep 11 00:19:41.641370 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 11 00:19:41.849974 sshd[5991]: Connection closed by 10.0.0.1 port 40192
Sep 11 00:19:41.850704 sshd-session[5987]: pam_unix(sshd:session): session closed for user core
Sep 11 00:19:41.858014 systemd[1]: sshd@26-10.0.0.70:22-10.0.0.1:40192.service: Deactivated successfully.
Sep 11 00:19:41.861084 systemd[1]: session-27.scope: Deactivated successfully.
Sep 11 00:19:41.863111 systemd-logind[1564]: Session 27 logged out. Waiting for processes to exit.
Sep 11 00:19:41.866156 systemd-logind[1564]: Removed session 27.
Sep 11 00:19:42.857302 kubelet[2767]: E0911 00:19:42.857249 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"