Sep 9 00:25:52.901697 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:13:49 -00 2025 Sep 9 00:25:52.901721 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a Sep 9 00:25:52.901733 kernel: BIOS-provided physical RAM map: Sep 9 00:25:52.901740 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 00:25:52.901746 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 9 00:25:52.901752 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 9 00:25:52.901769 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 9 00:25:52.901776 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 9 00:25:52.901787 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 9 00:25:52.901796 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 9 00:25:52.901803 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 9 00:25:52.901809 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 9 00:25:52.901816 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 9 00:25:52.901823 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 9 00:25:52.901831 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 9 00:25:52.901841 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 9 00:25:52.901850 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 9 00:25:52.901857 kernel: BIOS-e820: 
[mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 9 00:25:52.901877 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 9 00:25:52.901884 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 9 00:25:52.901890 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 9 00:25:52.901907 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 9 00:25:52.901915 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 9 00:25:52.901931 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 00:25:52.901947 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 9 00:25:52.901975 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 9 00:25:52.901983 kernel: NX (Execute Disable) protection: active Sep 9 00:25:52.901990 kernel: APIC: Static calls initialized Sep 9 00:25:52.901997 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Sep 9 00:25:52.902005 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Sep 9 00:25:52.902011 kernel: extended physical RAM map: Sep 9 00:25:52.902019 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 00:25:52.902026 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 9 00:25:52.902033 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 9 00:25:52.902049 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 9 00:25:52.902065 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 9 00:25:52.902075 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 9 00:25:52.902082 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 9 00:25:52.902089 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Sep 9 
00:25:52.902096 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Sep 9 00:25:52.902107 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Sep 9 00:25:52.902114 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Sep 9 00:25:52.902124 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Sep 9 00:25:52.902131 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 9 00:25:52.902139 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 9 00:25:52.902146 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 9 00:25:52.902153 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 9 00:25:52.902160 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 9 00:25:52.902167 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Sep 9 00:25:52.902175 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Sep 9 00:25:52.902182 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Sep 9 00:25:52.902189 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Sep 9 00:25:52.902199 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 9 00:25:52.902206 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 9 00:25:52.902213 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 9 00:25:52.902220 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 00:25:52.902227 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 9 00:25:52.902235 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 9 00:25:52.902244 kernel: efi: EFI v2.7 by EDK II Sep 9 
00:25:52.902252 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Sep 9 00:25:52.902259 kernel: random: crng init done Sep 9 00:25:52.902268 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 9 00:25:52.902276 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 9 00:25:52.902287 kernel: secureboot: Secure boot disabled Sep 9 00:25:52.902294 kernel: SMBIOS 2.8 present. Sep 9 00:25:52.902301 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 9 00:25:52.902309 kernel: DMI: Memory slots populated: 1/1 Sep 9 00:25:52.902316 kernel: Hypervisor detected: KVM Sep 9 00:25:52.902323 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 00:25:52.902330 kernel: kvm-clock: using sched offset of 5246510009 cycles Sep 9 00:25:52.902338 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 00:25:52.902346 kernel: tsc: Detected 2794.750 MHz processor Sep 9 00:25:52.902353 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 00:25:52.902361 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 00:25:52.902370 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 9 00:25:52.902378 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 9 00:25:52.902385 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 00:25:52.902393 kernel: Using GB pages for direct mapping Sep 9 00:25:52.902400 kernel: ACPI: Early table checksum verification disabled Sep 9 00:25:52.902407 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 9 00:25:52.902415 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 9 00:25:52.902423 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:52.902430 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 
00000001 BXPC 00000001) Sep 9 00:25:52.902439 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 9 00:25:52.902447 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:52.902454 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:52.902462 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:52.902469 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:25:52.902477 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 9 00:25:52.902484 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 9 00:25:52.902492 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 9 00:25:52.902501 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 9 00:25:52.902508 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 9 00:25:52.902516 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 9 00:25:52.902523 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 9 00:25:52.902556 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 9 00:25:52.902565 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 9 00:25:52.902572 kernel: No NUMA configuration found Sep 9 00:25:52.902579 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 9 00:25:52.902587 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Sep 9 00:25:52.902597 kernel: Zone ranges: Sep 9 00:25:52.902605 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 00:25:52.902612 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 9 00:25:52.902619 kernel: Normal empty Sep 9 00:25:52.902627 kernel: Device empty Sep 9 00:25:52.902634 kernel: Movable zone start for each node Sep 9 00:25:52.902642 kernel: Early 
memory node ranges Sep 9 00:25:52.902651 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 9 00:25:52.902658 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 9 00:25:52.902668 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 9 00:25:52.902678 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 9 00:25:52.902685 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 9 00:25:52.902693 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 9 00:25:52.902700 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Sep 9 00:25:52.902708 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Sep 9 00:25:52.902715 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 9 00:25:52.902722 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:25:52.902732 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 9 00:25:52.902748 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 9 00:25:52.902756 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:25:52.902769 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 9 00:25:52.902777 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 9 00:25:52.902787 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 9 00:25:52.902795 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 9 00:25:52.902803 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 9 00:25:52.902811 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 9 00:25:52.902818 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 00:25:52.902828 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 00:25:52.902836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 9 00:25:52.902843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 00:25:52.902851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 
global_irq 9 high level) Sep 9 00:25:52.902859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 00:25:52.902866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 00:25:52.902874 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 00:25:52.902881 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 9 00:25:52.902889 kernel: TSC deadline timer available Sep 9 00:25:52.902899 kernel: CPU topo: Max. logical packages: 1 Sep 9 00:25:52.902906 kernel: CPU topo: Max. logical dies: 1 Sep 9 00:25:52.902914 kernel: CPU topo: Max. dies per package: 1 Sep 9 00:25:52.902921 kernel: CPU topo: Max. threads per core: 1 Sep 9 00:25:52.902928 kernel: CPU topo: Num. cores per package: 4 Sep 9 00:25:52.902936 kernel: CPU topo: Num. threads per package: 4 Sep 9 00:25:52.902943 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 9 00:25:52.902951 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 00:25:52.902958 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 9 00:25:52.902968 kernel: kvm-guest: setup PV sched yield Sep 9 00:25:52.902976 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 9 00:25:52.902983 kernel: Booting paravirtualized kernel on KVM Sep 9 00:25:52.902991 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 00:25:52.902999 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 9 00:25:52.903007 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 9 00:25:52.903014 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 9 00:25:52.903022 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 9 00:25:52.903030 kernel: kvm-guest: PV spinlocks enabled Sep 9 00:25:52.903040 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 00:25:52.903049 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a Sep 9 00:25:52.903060 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:25:52.903067 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 00:25:52.903075 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:25:52.903082 kernel: Fallback order for Node 0: 0 Sep 9 00:25:52.903090 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Sep 9 00:25:52.903097 kernel: Policy zone: DMA32 Sep 9 00:25:52.903107 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:25:52.903115 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 00:25:52.903123 kernel: ftrace: allocating 40102 entries in 157 pages Sep 9 00:25:52.903130 kernel: ftrace: allocated 157 pages with 5 groups Sep 9 00:25:52.903137 kernel: Dynamic Preempt: voluntary Sep 9 00:25:52.903145 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:25:52.903154 kernel: rcu: RCU event tracing is enabled. Sep 9 00:25:52.903162 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 00:25:52.903169 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:25:52.903177 kernel: Rude variant of Tasks RCU enabled. Sep 9 00:25:52.903187 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:25:52.903195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 00:25:52.903205 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 00:25:52.903212 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 9 00:25:52.903220 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:25:52.903228 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:25:52.903235 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 9 00:25:52.903243 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 00:25:52.903250 kernel: Console: colour dummy device 80x25 Sep 9 00:25:52.903261 kernel: printk: legacy console [ttyS0] enabled Sep 9 00:25:52.903268 kernel: ACPI: Core revision 20240827 Sep 9 00:25:52.903276 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 9 00:25:52.903284 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 00:25:52.903291 kernel: x2apic enabled Sep 9 00:25:52.903299 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 00:25:52.903306 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 9 00:25:52.903314 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 9 00:25:52.903322 kernel: kvm-guest: setup PV IPIs Sep 9 00:25:52.903331 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 00:25:52.903339 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Sep 9 00:25:52.903347 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Sep 9 00:25:52.903363 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 00:25:52.903374 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 9 00:25:52.903389 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 9 00:25:52.903404 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 00:25:52.903418 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 00:25:52.903428 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 00:25:52.903436 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 9 00:25:52.903443 kernel: active return thunk: retbleed_return_thunk Sep 9 00:25:52.903451 kernel: RETBleed: Mitigation: untrained return thunk Sep 9 00:25:52.903462 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 00:25:52.903470 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 00:25:52.903477 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 9 00:25:52.903485 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 9 00:25:52.903493 kernel: active return thunk: srso_return_thunk Sep 9 00:25:52.903504 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 9 00:25:52.903512 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 00:25:52.903519 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 00:25:52.903527 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 00:25:52.903549 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 00:25:52.903557 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Sep 9 00:25:52.903564 kernel: Freeing SMP alternatives memory: 32K Sep 9 00:25:52.903572 kernel: pid_max: default: 32768 minimum: 301 Sep 9 00:25:52.903579 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 00:25:52.903590 kernel: landlock: Up and running. Sep 9 00:25:52.903597 kernel: SELinux: Initializing. Sep 9 00:25:52.903605 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:25:52.903613 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:25:52.903621 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 9 00:25:52.903628 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 9 00:25:52.903636 kernel: ... version: 0 Sep 9 00:25:52.903643 kernel: ... bit width: 48 Sep 9 00:25:52.903651 kernel: ... generic registers: 6 Sep 9 00:25:52.903661 kernel: ... value mask: 0000ffffffffffff Sep 9 00:25:52.903668 kernel: ... max period: 00007fffffffffff Sep 9 00:25:52.903676 kernel: ... fixed-purpose events: 0 Sep 9 00:25:52.903683 kernel: ... event mask: 000000000000003f Sep 9 00:25:52.903690 kernel: signal: max sigframe size: 1776 Sep 9 00:25:52.903698 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:25:52.903706 kernel: rcu: Max phase no-delay instances is 400. Sep 9 00:25:52.903716 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 9 00:25:52.903724 kernel: smp: Bringing up secondary CPUs ... Sep 9 00:25:52.903733 kernel: smpboot: x86: Booting SMP configuration: Sep 9 00:25:52.903741 kernel: .... 
node #0, CPUs: #1 #2 #3 Sep 9 00:25:52.903748 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 00:25:52.903756 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Sep 9 00:25:52.903771 kernel: Memory: 2422676K/2565800K available (14336K kernel code, 2428K rwdata, 9960K rodata, 54036K init, 2932K bss, 137196K reserved, 0K cma-reserved) Sep 9 00:25:52.903778 kernel: devtmpfs: initialized Sep 9 00:25:52.903786 kernel: x86/mm: Memory block size: 128MB Sep 9 00:25:52.903794 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 9 00:25:52.903801 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 9 00:25:52.903812 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 9 00:25:52.903819 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 9 00:25:52.903838 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Sep 9 00:25:52.903847 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 9 00:25:52.903864 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:25:52.903872 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 00:25:52.903880 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:25:52.903887 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:25:52.903898 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:25:52.903906 kernel: audit: type=2000 audit(1757377549.584:1): state=initialized audit_enabled=0 res=1 Sep 9 00:25:52.903914 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:25:52.903921 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 00:25:52.903929 kernel: cpuidle: using governor menu Sep 9 00:25:52.903936 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 
0.5 Sep 9 00:25:52.903944 kernel: dca service started, version 1.12.1 Sep 9 00:25:52.903951 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Sep 9 00:25:52.903959 kernel: PCI: Using configuration type 1 for base access Sep 9 00:25:52.903969 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 9 00:25:52.903977 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:25:52.903984 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:25:52.903992 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:25:52.903999 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:25:52.904007 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:25:52.904014 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:25:52.904022 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:25:52.904029 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:25:52.904039 kernel: ACPI: Interpreter enabled Sep 9 00:25:52.904047 kernel: ACPI: PM: (supports S0 S3 S5) Sep 9 00:25:52.904054 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 00:25:52.904062 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 00:25:52.904069 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 00:25:52.904077 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 9 00:25:52.904085 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 00:25:52.904315 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:25:52.904446 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 9 00:25:52.904621 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 9 00:25:52.904635 kernel: PCI host bridge to bus 0000:00 Sep 9 00:25:52.904791 kernel: pci_bus 0000:00: root bus 
resource [io 0x0000-0x0cf7 window] Sep 9 00:25:52.904905 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 9 00:25:52.905013 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 00:25:52.905121 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 9 00:25:52.905233 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 9 00:25:52.905351 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 9 00:25:52.905470 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 00:25:52.905792 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 9 00:25:52.905953 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 9 00:25:52.906079 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Sep 9 00:25:52.906205 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Sep 9 00:25:52.906324 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 9 00:25:52.906443 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 00:25:52.906686 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 9 00:25:52.906823 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Sep 9 00:25:52.906991 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Sep 9 00:25:52.907129 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Sep 9 00:25:52.907273 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 9 00:25:52.907402 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Sep 9 00:25:52.907522 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Sep 9 00:25:52.907668 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Sep 9 00:25:52.907814 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 
0x020000 conventional PCI endpoint Sep 9 00:25:52.907936 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Sep 9 00:25:52.908080 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Sep 9 00:25:52.908211 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 9 00:25:52.908332 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Sep 9 00:25:52.908465 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 9 00:25:52.908607 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 9 00:25:52.908745 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 9 00:25:52.908883 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Sep 9 00:25:52.909038 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Sep 9 00:25:52.909188 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 9 00:25:52.909311 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Sep 9 00:25:52.909322 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 00:25:52.909330 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 00:25:52.909338 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 00:25:52.909346 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 00:25:52.909354 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 9 00:25:52.909362 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 9 00:25:52.909373 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 9 00:25:52.909381 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 9 00:25:52.909389 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 9 00:25:52.909397 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 9 00:25:52.909404 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 9 00:25:52.909412 kernel: ACPI: PCI: Interrupt 
link GSID configured for IRQ 19 Sep 9 00:25:52.909420 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 9 00:25:52.909428 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 9 00:25:52.909436 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 9 00:25:52.909446 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 9 00:25:52.909459 kernel: iommu: Default domain type: Translated Sep 9 00:25:52.909467 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 00:25:52.909475 kernel: efivars: Registered efivars operations Sep 9 00:25:52.909483 kernel: PCI: Using ACPI for IRQ routing Sep 9 00:25:52.909490 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 00:25:52.909498 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 9 00:25:52.909506 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 9 00:25:52.909514 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Sep 9 00:25:52.909524 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Sep 9 00:25:52.909547 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 9 00:25:52.909555 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 9 00:25:52.909563 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Sep 9 00:25:52.909571 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 9 00:25:52.909702 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 9 00:25:52.909835 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 9 00:25:52.909965 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 00:25:52.909980 kernel: vgaarb: loaded Sep 9 00:25:52.909988 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 9 00:25:52.909996 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 9 00:25:52.910003 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 00:25:52.910011 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 
00:25:52.910019 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:25:52.910027 kernel: pnp: PnP ACPI init Sep 9 00:25:52.910191 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 9 00:25:52.910209 kernel: pnp: PnP ACPI: found 6 devices Sep 9 00:25:52.910217 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 00:25:52.910225 kernel: NET: Registered PF_INET protocol family Sep 9 00:25:52.910233 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 00:25:52.910242 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 00:25:52.910252 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:25:52.910268 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:25:52.910280 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 00:25:52.910291 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 00:25:52.910306 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:25:52.910317 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:25:52.910327 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:25:52.910337 kernel: NET: Registered PF_XDP protocol family Sep 9 00:25:52.910482 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Sep 9 00:25:52.911812 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Sep 9 00:25:52.911943 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 00:25:52.912058 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 9 00:25:52.912191 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 00:25:52.912305 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] 
Sep 9 00:25:52.912422 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 9 00:25:52.912553 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 9 00:25:52.912565 kernel: PCI: CLS 0 bytes, default 64 Sep 9 00:25:52.912583 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Sep 9 00:25:52.912601 kernel: Initialise system trusted keyrings Sep 9 00:25:52.912616 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 00:25:52.912625 kernel: Key type asymmetric registered Sep 9 00:25:52.912633 kernel: Asymmetric key parser 'x509' registered Sep 9 00:25:52.912642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 00:25:52.912651 kernel: io scheduler mq-deadline registered Sep 9 00:25:52.912659 kernel: io scheduler kyber registered Sep 9 00:25:52.912667 kernel: io scheduler bfq registered Sep 9 00:25:52.912678 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 00:25:52.912687 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 00:25:52.912695 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 00:25:52.912704 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 00:25:52.912712 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:25:52.912721 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 00:25:52.912729 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 00:25:52.912738 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 00:25:52.912746 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 00:25:52.912911 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 9 00:25:52.912926 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 00:25:52.913041 kernel: rtc_cmos 00:04: registered as rtc0 Sep 9 00:25:52.913203 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:25:52 UTC (1757377552) Sep 9 00:25:52.913358 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 9 00:25:52.913375 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 00:25:52.913387 kernel: efifb: probing for efifb Sep 9 00:25:52.913398 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 9 00:25:52.913415 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 9 00:25:52.913426 kernel: efifb: scrolling: redraw Sep 9 00:25:52.913437 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 9 00:25:52.913448 kernel: Console: switching to colour frame buffer device 160x50 Sep 9 00:25:52.913459 kernel: fb0: EFI VGA frame buffer device Sep 9 00:25:52.913482 kernel: pstore: Using crash dump compression: deflate Sep 9 00:25:52.913493 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 00:25:52.913503 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:25:52.913514 kernel: Segment Routing with IPv6 Sep 9 00:25:52.913559 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:25:52.913571 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:25:52.913601 kernel: Key type dns_resolver registered Sep 9 00:25:52.913620 kernel: IPI shorthand broadcast: enabled Sep 9 00:25:52.913639 kernel: sched_clock: Marking stable (3571004958, 164291000)->(3836372969, -101077011) Sep 9 00:25:52.913650 kernel: registered taskstats version 1 Sep 9 00:25:52.913660 kernel: Loading compiled-in X.509 certificates Sep 9 00:25:52.913670 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: f610abecf8d2943295243a86f7aa958542b6f677' Sep 9 00:25:52.913680 kernel: Demotion targets for Node 0: null Sep 9 00:25:52.913696 kernel: Key type .fscrypt registered Sep 9 00:25:52.913706 kernel: Key type fscrypt-provisioning registered Sep 9 00:25:52.913715 kernel: ima: No TPM chip found, activating TPM-bypass!
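[Editor's note: the rtc_cmos entry above reports the system clock being set to 2025-09-09T00:25:52 UTC, with the equivalent Unix epoch value 1757377552 in parentheses. As a quick sanity check (an annotation, not part of the boot flow), the two representations can be confirmed to agree:]

```python
from datetime import datetime, timezone

# Epoch seconds as reported by rtc_cmos in the log entry above.
EPOCH = 1757377552

# Convert to an aware UTC datetime and compare with the human-readable form.
dt = datetime.fromtimestamp(EPOCH, tz=timezone.utc)
print(dt.isoformat())  # 2025-09-09T00:25:52+00:00
```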
Sep 9 00:25:52.913724 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:25:52.913734 kernel: ima: No architecture policies found Sep 9 00:25:52.913743 kernel: clk: Disabling unused clocks Sep 9 00:25:52.913751 kernel: Warning: unable to open an initial console. Sep 9 00:25:52.913768 kernel: Freeing unused kernel image (initmem) memory: 54036K Sep 9 00:25:52.913779 kernel: Write protecting the kernel read-only data: 24576k Sep 9 00:25:52.913796 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 9 00:25:52.913805 kernel: Run /init as init process Sep 9 00:25:52.913821 kernel: with arguments: Sep 9 00:25:52.913837 kernel: /init Sep 9 00:25:52.913846 kernel: with environment: Sep 9 00:25:52.913854 kernel: HOME=/ Sep 9 00:25:52.913862 kernel: TERM=linux Sep 9 00:25:52.913870 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:25:52.913890 systemd[1]: Successfully made /usr/ read-only. Sep 9 00:25:52.914650 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:25:52.914665 systemd[1]: Detected virtualization kvm. Sep 9 00:25:52.914676 systemd[1]: Detected architecture x86-64. Sep 9 00:25:52.914690 systemd[1]: Running in initrd. Sep 9 00:25:52.914703 systemd[1]: No hostname configured, using default hostname. Sep 9 00:25:52.914718 systemd[1]: Hostname set to . Sep 9 00:25:52.914732 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:25:52.914750 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:25:52.914780 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 9 00:25:52.914795 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:25:52.914811 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:25:52.914826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:25:52.914841 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:25:52.914857 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:25:52.914876 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:25:52.914888 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:25:52.914899 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:25:52.914911 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:25:52.914921 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:25:52.914933 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:25:52.914944 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:25:52.914956 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:25:52.914971 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:25:52.914982 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:25:52.914991 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:25:52.914999 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 00:25:52.915008 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:25:52.915017 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 9 00:25:52.915026 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:25:52.915037 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:25:52.915048 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:25:52.915056 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:25:52.915065 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:25:52.915074 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 00:25:52.915082 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:25:52.915091 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:25:52.915102 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:25:52.915113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:25:52.915128 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:25:52.915142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:25:52.915154 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:25:52.915166 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:25:52.915212 systemd-journald[220]: Collecting audit messages is disabled. Sep 9 00:25:52.915240 systemd-journald[220]: Journal started Sep 9 00:25:52.915260 systemd-journald[220]: Runtime Journal (/run/log/journal/0f7495de07cd40a880ebc805918bb1fd) is 6M, max 48.4M, 42.4M free. Sep 9 00:25:52.903965 systemd-modules-load[221]: Inserted module 'overlay' Sep 9 00:25:52.919423 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 9 00:25:52.918017 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:25:52.925088 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:25:52.929667 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:25:52.972585 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:25:52.974079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:25:52.977868 kernel: Bridge firewalling registered Sep 9 00:25:52.977211 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 9 00:25:52.977906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:25:52.978839 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:25:52.983384 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 00:25:52.985667 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:25:52.988706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:25:53.000781 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:25:53.011185 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:25:53.014476 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:25:53.019162 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:25:53.034155 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 9 00:25:53.069997 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a Sep 9 00:25:53.070862 systemd-resolved[259]: Positive Trust Anchors: Sep 9 00:25:53.070870 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:25:53.070898 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:25:53.073501 systemd-resolved[259]: Defaulting to hostname 'linux'. Sep 9 00:25:53.074823 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:25:53.076512 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:25:53.241575 kernel: SCSI subsystem initialized Sep 9 00:25:53.250585 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:25:53.261567 kernel: iscsi: registered transport (tcp) Sep 9 00:25:53.284586 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:25:53.284615 kernel: QLogic iSCSI HBA Driver Sep 9 00:25:53.309292 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 9 00:25:53.333805 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:25:53.337665 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:25:53.399162 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:25:53.403041 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:25:53.468577 kernel: raid6: avx2x4 gen() 25176 MB/s Sep 9 00:25:53.485560 kernel: raid6: avx2x2 gen() 28473 MB/s Sep 9 00:25:53.502677 kernel: raid6: avx2x1 gen() 23167 MB/s Sep 9 00:25:53.502703 kernel: raid6: using algorithm avx2x2 gen() 28473 MB/s Sep 9 00:25:53.520676 kernel: raid6: .... xor() 18149 MB/s, rmw enabled Sep 9 00:25:53.520699 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:25:53.543585 kernel: xor: automatically using best checksumming function avx Sep 9 00:25:53.719617 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:25:53.729707 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:25:53.732039 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:25:53.776413 systemd-udevd[471]: Using default interface naming scheme 'v255'. Sep 9 00:25:53.784075 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:25:53.785271 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:25:53.817520 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation Sep 9 00:25:53.867146 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:25:53.869002 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:25:53.957805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:25:53.962711 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 9 00:25:54.006744 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 00:25:54.009848 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:25:54.010170 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:25:54.015667 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:25:54.015699 kernel: GPT:9289727 != 19775487 Sep 9 00:25:54.015713 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:25:54.017407 kernel: GPT:9289727 != 19775487 Sep 9 00:25:54.017429 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:25:54.017703 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:25:54.028060 kernel: AES CTR mode by8 optimization enabled Sep 9 00:25:54.044334 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:25:54.044432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:25:54.049933 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:25:54.055273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:25:54.059598 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:25:54.089605 kernel: libata version 3.00 loaded. Sep 9 00:25:54.092569 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 00:25:54.097367 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
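[Editor's note: the GPT warnings above are simple arithmetic. virtio_blk reports 19775488 512-byte logical blocks, so the backup GPT header belongs in the last LBA, 19775487, but the kernel finds it at LBA 9289727 — consistent with a disk image that was enlarged after partitioning (an interpretation, not stated in the log). A sketch using only values from the log:]

```python
# Values taken from the virtio_blk / GPT messages above.
SECTOR_SIZE = 512
total_sectors = 19775488                 # "[vda] 19775488 512-byte logical blocks"
expected_backup_lba = total_sectors - 1  # backup GPT header lives in the last LBA
found_backup_lba = 9289727               # where the kernel actually found it

print(expected_backup_lba)  # 19775487, matching "GPT:9289727 != 19775487"

# Assuming the image was grown after partitioning, this is roughly how much
# space was added beyond the old end of disk:
growth_bytes = (expected_backup_lba - found_backup_lba) * SECTOR_SIZE
print(growth_bytes // (1024 * 1024), "MiB")  # 5120 MiB
```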
Sep 9 00:25:54.116567 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 00:25:54.116867 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 00:25:54.121306 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 00:25:54.121555 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 00:25:54.121751 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 00:25:54.124569 kernel: scsi host0: ahci Sep 9 00:25:54.125648 kernel: scsi host1: ahci Sep 9 00:25:54.126553 kernel: scsi host2: ahci Sep 9 00:25:54.127554 kernel: scsi host3: ahci Sep 9 00:25:54.128553 kernel: scsi host4: ahci Sep 9 00:25:54.129557 kernel: scsi host5: ahci Sep 9 00:25:54.130550 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 9 00:25:54.130586 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 9 00:25:54.130600 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 9 00:25:54.130614 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 9 00:25:54.130628 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 9 00:25:54.130682 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 9 00:25:54.141006 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:25:54.154630 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:25:54.167847 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 00:25:54.171060 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 00:25:54.174457 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Sep 9 00:25:54.176814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:25:54.177904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:25:54.180424 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:25:54.193506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:25:54.196355 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:25:54.203172 disk-uuid[633]: Primary Header is updated. Sep 9 00:25:54.203172 disk-uuid[633]: Secondary Entries is updated. Sep 9 00:25:54.203172 disk-uuid[633]: Secondary Header is updated. Sep 9 00:25:54.207567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:25:54.213570 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:25:54.217201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:25:54.442880 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 00:25:54.442957 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 00:25:54.442968 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 00:25:54.444579 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 00:25:54.444660 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 00:25:54.445575 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 00:25:54.446646 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:25:54.446682 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 00:25:54.447711 kernel: ata3.00: applying bridge limits Sep 9 00:25:54.448863 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:25:54.448875 kernel: ata3.00: configured for UDMA/100 Sep 9 00:25:54.449574 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 00:25:54.512120 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 00:25:54.512422 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:25:54.532562 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:25:54.963169 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 00:25:54.966467 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:25:54.969329 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:25:54.972137 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:25:54.975741 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:25:55.004420 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:25:55.214598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:25:55.215058 disk-uuid[635]: The operation has completed successfully. Sep 9 00:25:55.253448 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:25:55.253587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:25:55.288594 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:25:55.314637 sh[668]: Success Sep 9 00:25:55.334871 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:25:55.334951 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:25:55.336119 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 00:25:55.345559 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 00:25:55.379072 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:25:55.383232 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:25:55.401559 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:25:55.409748 kernel: BTRFS: device fsid eee400a1-88b9-480b-9c0c-54d171140f9a devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (680) Sep 9 00:25:55.409839 kernel: BTRFS info (device dm-0): first mount of filesystem eee400a1-88b9-480b-9c0c-54d171140f9a Sep 9 00:25:55.411066 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:25:55.416913 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:25:55.416988 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 00:25:55.419012 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:25:55.421907 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:25:55.424556 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:25:55.427690 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:25:55.431006 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:25:55.459622 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (713) Sep 9 00:25:55.462720 kernel: BTRFS info (device vda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:25:55.462753 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:25:55.467576 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:25:55.467638 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:25:55.472597 kernel: BTRFS info (device vda6): last unmount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:25:55.474244 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:25:55.478245 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 9 00:25:55.631001 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:25:55.637558 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:25:55.651626 ignition[760]: Ignition 2.21.0 Sep 9 00:25:55.651643 ignition[760]: Stage: fetch-offline Sep 9 00:25:55.651686 ignition[760]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:25:55.651710 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:25:55.651819 ignition[760]: parsed url from cmdline: "" Sep 9 00:25:55.651825 ignition[760]: no config URL provided Sep 9 00:25:55.651832 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:25:55.651844 ignition[760]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:25:55.651875 ignition[760]: op(1): [started] loading QEMU firmware config module Sep 9 00:25:55.651882 ignition[760]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:25:55.667619 ignition[760]: op(1): [finished] loading QEMU firmware config module Sep 9 00:25:55.696643 systemd-networkd[856]: lo: Link UP Sep 9 00:25:55.696657 systemd-networkd[856]: lo: Gained carrier Sep 9 00:25:55.698653 systemd-networkd[856]: Enumeration completed Sep 9 00:25:55.698849 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:25:55.699058 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:25:55.699063 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:25:55.700071 systemd-networkd[856]: eth0: Link UP Sep 9 00:25:55.700266 systemd-networkd[856]: eth0: Gained carrier Sep 9 00:25:55.700277 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:25:55.702726 systemd[1]: Reached target network.target - Network. 
Sep 9 00:25:55.721639 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:25:55.728555 ignition[760]: parsing config with SHA512: e454895acacc141b108d73535337655fe04eb5bb9abe69b9e4332de07bd3474d79068147c6b06fc2fd37529aae7d28d3dff78d6aed78de51611c796c1902f874 Sep 9 00:25:55.734829 unknown[760]: fetched base config from "system" Sep 9 00:25:55.734848 unknown[760]: fetched user config from "qemu" Sep 9 00:25:55.735320 ignition[760]: fetch-offline: fetch-offline passed Sep 9 00:25:55.735390 ignition[760]: Ignition finished successfully Sep 9 00:25:55.739065 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:25:55.741422 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:25:55.744031 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:25:55.783700 ignition[863]: Ignition 2.21.0 Sep 9 00:25:55.783721 ignition[863]: Stage: kargs Sep 9 00:25:55.784408 ignition[863]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:25:55.784424 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:25:55.786660 ignition[863]: kargs: kargs passed Sep 9 00:25:55.786759 ignition[863]: Ignition finished successfully Sep 9 00:25:55.792365 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:25:55.797031 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
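[Editor's note: in the fetch-offline stage above, Ignition logs a SHA-512 digest of the config it parsed, which is useful for later confirming which config a machine actually booted with. A minimal sketch of reproducing such a digest with Python's hashlib — the config body below is a made-up placeholder, not the QEMU-provided config hashed in the log:]

```python
import hashlib

# Hypothetical stand-in for an Ignition config; the real config hashed in the
# log above came from the QEMU firmware config device and is not shown here.
config = b'{"ignition": {"version": "3.4.0"}}'

# Ignition reports the hex SHA-512 of the raw config bytes.
digest = hashlib.sha512(config).hexdigest()
print(digest)  # compare against the value in "parsing config with SHA512: ..."
```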
Sep 9 00:25:55.856095 ignition[871]: Ignition 2.21.0
Sep 9 00:25:55.856117 ignition[871]: Stage: disks
Sep 9 00:25:55.856369 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:25:55.856383 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:25:55.858837 ignition[871]: disks: disks passed
Sep 9 00:25:55.860134 ignition[871]: Ignition finished successfully
Sep 9 00:25:55.866649 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:25:55.868211 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 00:25:55.870249 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:25:55.872621 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:25:55.874858 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:25:55.875073 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:25:55.876987 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:25:55.927843 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 00:25:55.953335 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:25:55.957114 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:25:56.087611 kernel: EXT4-fs (vda9): mounted filesystem 91c315eb-0fc3-4e95-bf9b-06acc06be6bc r/w with ordered data mode. Quota mode: none.
Sep 9 00:25:56.088781 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 00:25:56.090596 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:25:56.093884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:25:56.097181 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 00:25:56.098792 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 00:25:56.098852 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:25:56.098883 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:25:56.116286 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 00:25:56.122883 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889)
Sep 9 00:25:56.122917 kernel: BTRFS info (device vda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5
Sep 9 00:25:56.122933 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:25:56.119255 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 00:25:56.128581 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 00:25:56.128640 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 00:25:56.131034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:25:56.188160 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:25:56.194331 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:25:56.200012 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:25:56.205129 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:25:56.331962 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 00:25:56.333974 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 00:25:56.336268 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 00:25:56.366589 kernel: BTRFS info (device vda6): last unmount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5
Sep 9 00:25:56.382738 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:25:56.400942 ignition[1003]: INFO : Ignition 2.21.0
Sep 9 00:25:56.400942 ignition[1003]: INFO : Stage: mount
Sep 9 00:25:56.403069 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:25:56.403069 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:25:56.403069 ignition[1003]: INFO : mount: mount passed
Sep 9 00:25:56.403069 ignition[1003]: INFO : Ignition finished successfully
Sep 9 00:25:56.409057 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:25:56.409650 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 00:25:56.413980 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 00:25:56.451017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:25:56.485279 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015)
Sep 9 00:25:56.485347 kernel: BTRFS info (device vda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5
Sep 9 00:25:56.485360 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:25:56.490578 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 00:25:56.490634 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 00:25:56.492525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:25:56.541566 ignition[1032]: INFO : Ignition 2.21.0
Sep 9 00:25:56.541566 ignition[1032]: INFO : Stage: files
Sep 9 00:25:56.543449 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:25:56.543449 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:25:56.545745 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:25:56.546852 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:25:56.546852 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:25:56.549599 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:25:56.549599 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:25:56.552655 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:25:56.552655 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 00:25:56.552655 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 9 00:25:56.549802 unknown[1032]: wrote ssh authorized keys file for user: core
Sep 9 00:25:56.601779 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:25:56.836014 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 00:25:56.836014 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:25:56.840101 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:25:56.840101 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:25:56.840101 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:25:56.840101 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:25:56.840101 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:25:56.840101 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:25:56.840101 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:25:56.853504 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:25:56.853504 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:25:56.853504 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:25:56.853504 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:25:56.853504 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:25:56.853504 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 9 00:25:56.862301 systemd-networkd[856]: eth0: Gained IPv6LL
Sep 9 00:25:57.180203 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 00:25:57.882161 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:25:57.882161 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 00:25:57.886464 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:25:58.076143 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:25:58.076143 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 00:25:58.076143 ignition[1032]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 9 00:25:58.076143 ignition[1032]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:25:58.084036 ignition[1032]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:25:58.084036 ignition[1032]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 9 00:25:58.084036 ignition[1032]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:25:58.119814 ignition[1032]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:25:58.136001 ignition[1032]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:25:58.136001 ignition[1032]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:25:58.136001 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:25:58.136001 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:25:58.136001 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:25:58.136001 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:25:58.136001 ignition[1032]: INFO : files: files passed
Sep 9 00:25:58.136001 ignition[1032]: INFO : Ignition finished successfully
Sep 9 00:25:58.130732 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 00:25:58.137375 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 00:25:58.140816 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 00:25:58.188130 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 00:25:58.188519 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:25:58.188711 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 00:25:58.200582 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:25:58.200582 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:25:58.204305 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:25:58.208058 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:25:58.211261 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 00:25:58.212335 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:25:58.267360 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 00:25:58.267507 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 00:25:58.268903 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 00:25:58.270978 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 00:25:58.271550 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 00:25:58.274788 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 00:25:58.314113 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:25:58.315753 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 00:25:58.346718 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:25:58.346960 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:25:58.350702 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 00:25:58.353259 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 00:25:58.353427 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:25:58.356087 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 00:25:58.356455 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 00:25:58.357063 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 00:25:58.357462 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:25:58.358129 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 00:25:58.358555 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 00:25:58.359194 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 00:25:58.359572 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:25:58.360186 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 00:25:58.360606 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 00:25:58.361204 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 00:25:58.361937 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 00:25:58.362070 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:25:58.385709 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:25:58.385956 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:25:58.388426 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 00:25:58.388648 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:25:58.391918 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 00:25:58.392113 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:25:58.395002 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 00:25:58.395163 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:25:58.398317 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 00:25:58.400081 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 00:25:58.400295 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:25:58.402058 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 00:25:58.402375 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 00:25:58.402896 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:25:58.403018 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:25:58.408915 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:25:58.409004 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:25:58.411321 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 00:25:58.411465 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:25:58.413926 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 00:25:58.414035 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 00:25:58.420431 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 00:25:58.421106 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 00:25:58.421265 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:25:58.425902 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 00:25:58.426897 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:25:58.427125 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:25:58.428067 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:25:58.428211 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:25:58.438135 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:25:58.438472 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 00:25:58.458319 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:25:58.464828 ignition[1088]: INFO : Ignition 2.21.0
Sep 9 00:25:58.464828 ignition[1088]: INFO : Stage: umount
Sep 9 00:25:58.468406 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:25:58.468406 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:25:58.477289 ignition[1088]: INFO : umount: umount passed
Sep 9 00:25:58.478384 ignition[1088]: INFO : Ignition finished successfully
Sep 9 00:25:58.481150 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:25:58.481313 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 00:25:58.483549 systemd[1]: Stopped target network.target - Network.
Sep 9 00:25:58.486523 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:25:58.486678 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 00:25:58.487062 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:25:58.487115 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 00:25:58.487367 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:25:58.487427 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 00:25:58.487899 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 00:25:58.487951 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 00:25:58.488380 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 00:25:58.497702 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 00:25:58.507914 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:25:58.508134 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 00:25:58.513419 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 00:25:58.513769 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:25:58.513944 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 00:25:58.519098 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 00:25:58.520627 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 00:25:58.523163 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:25:58.523230 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:25:58.526641 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 00:25:58.528806 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:25:58.528875 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:25:58.531216 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:25:58.531281 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:25:58.535360 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:25:58.535451 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:25:58.538552 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 00:25:58.538626 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:25:58.571037 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:25:58.572714 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 00:25:58.572791 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:25:58.590735 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:25:58.591370 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:25:58.592941 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:25:58.593032 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:25:58.594649 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:25:58.594699 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:25:58.599524 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:25:58.599648 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:25:58.602574 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:25:58.602655 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:25:58.606872 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:25:58.606940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:25:58.610289 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 00:25:58.611381 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 00:25:58.611454 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:25:58.615188 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 00:25:58.615261 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:25:58.621034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:25:58.621087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:25:58.625131 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 00:25:58.625192 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 00:25:58.625242 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:25:58.625602 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:25:58.633849 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 00:25:58.635370 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:25:58.635487 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 00:25:58.638741 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:25:58.638893 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 00:25:58.644388 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:25:58.644546 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 00:25:58.648197 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 00:25:58.650379 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 00:25:58.673095 systemd[1]: Switching root.
Sep 9 00:25:58.719171 systemd-journald[220]: Journal stopped
Sep 9 00:26:00.220856 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 9 00:26:00.220962 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:26:00.220983 kernel: SELinux: policy capability open_perms=1
Sep 9 00:26:00.221011 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:26:00.221038 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:26:00.221056 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:26:00.221068 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:26:00.221079 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:26:00.221100 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:26:00.221115 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 00:26:00.221126 kernel: audit: type=1403 audit(1757377559.225:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 00:26:00.221142 systemd[1]: Successfully loaded SELinux policy in 64.600ms.
Sep 9 00:26:00.221193 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.469ms.
Sep 9 00:26:00.221207 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 00:26:00.221220 systemd[1]: Detected virtualization kvm.
Sep 9 00:26:00.221233 systemd[1]: Detected architecture x86-64.
Sep 9 00:26:00.221248 systemd[1]: Detected first boot.
Sep 9 00:26:00.221261 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:26:00.221273 zram_generator::config[1134]: No configuration found.
Sep 9 00:26:00.221287 kernel: Guest personality initialized and is inactive
Sep 9 00:26:00.221298 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 00:26:00.221312 kernel: Initialized host personality
Sep 9 00:26:00.221328 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 00:26:00.221345 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:26:00.221360 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 00:26:00.221373 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 00:26:00.221385 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 00:26:00.221398 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:26:00.221411 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 00:26:00.221426 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 00:26:00.221438 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 00:26:00.221450 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 00:26:00.221463 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 00:26:00.221482 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 00:26:00.221495 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 00:26:00.221508 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 00:26:00.221523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:26:00.222607 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:26:00.222640 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 00:26:00.222657 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 00:26:00.222674 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 00:26:00.222691 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:26:00.222707 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 00:26:00.222724 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:26:00.222741 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:26:00.222757 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 00:26:00.222779 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 00:26:00.222795 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:26:00.222811 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 00:26:00.222827 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:26:00.222844 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:26:00.222865 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:26:00.222882 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:26:00.222898 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 00:26:00.222913 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 00:26:00.222937 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 00:26:00.222955 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:26:00.222976 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:26:00.222993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:26:00.223010 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 00:26:00.223034 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 00:26:00.223050 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 00:26:00.223067 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 00:26:00.223084 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:26:00.223105 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 00:26:00.223122 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 00:26:00.223138 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 00:26:00.223156 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 00:26:00.223173 systemd[1]: Reached target machines.target - Containers.
Sep 9 00:26:00.223189 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:26:00.223206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:26:00.223230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:26:00.223251 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:26:00.223268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:26:00.223284 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:26:00.223300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:26:00.223320 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:26:00.223336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:26:00.223353 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:26:00.223370 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:26:00.223387 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:26:00.223407 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:26:00.223423 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:26:00.223441 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:26:00.223458 kernel: fuse: init (API version 7.41) Sep 9 00:26:00.223482 systemd[1]: Starting systemd-journald.service - Journal Service... 
Sep 9 00:26:00.223501 kernel: loop: module loaded Sep 9 00:26:00.223517 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:26:00.223557 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:26:00.223593 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:26:00.223616 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 00:26:00.223665 systemd-journald[1198]: Collecting audit messages is disabled. Sep 9 00:26:00.223707 kernel: ACPI: bus type drm_connector registered Sep 9 00:26:00.223724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:26:00.223747 systemd-journald[1198]: Journal started Sep 9 00:26:00.223781 systemd-journald[1198]: Runtime Journal (/run/log/journal/0f7495de07cd40a880ebc805918bb1fd) is 6M, max 48.4M, 42.4M free. Sep 9 00:25:59.918578 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:25:59.941720 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:25:59.942449 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:26:00.236601 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:26:00.238271 systemd[1]: Stopped verity-setup.service. Sep 9 00:26:00.240585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:26:00.245009 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:26:00.245967 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:26:00.247147 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:26:00.248629 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 9 00:26:00.250614 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 00:26:00.253834 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 00:26:00.255375 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 00:26:00.256857 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:26:00.258726 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:26:00.258968 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 00:26:00.266503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:26:00.266839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:26:00.268985 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:26:00.269304 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:26:00.271183 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:26:00.271704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:26:00.273522 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 00:26:00.273820 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 00:26:00.275454 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:26:00.275923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:26:00.277742 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:26:00.281070 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:26:00.283048 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 00:26:00.285295 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 00:26:00.303661 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 00:26:00.307134 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 00:26:00.309906 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 00:26:00.311226 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 00:26:00.311258 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:26:00.313659 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 00:26:00.321718 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 00:26:00.323266 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:26:00.325196 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 00:26:00.327975 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 00:26:00.329265 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:26:00.331676 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 00:26:00.332955 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:26:00.334755 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:26:00.338854 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 00:26:00.347986 systemd-journald[1198]: Time spent on flushing to /var/log/journal/0f7495de07cd40a880ebc805918bb1fd is 49.395ms for 1071 entries.
Sep 9 00:26:00.347986 systemd-journald[1198]: System Journal (/var/log/journal/0f7495de07cd40a880ebc805918bb1fd) is 8M, max 195.6M, 187.6M free.
Sep 9 00:26:01.003981 systemd-journald[1198]: Received client request to flush runtime journal.
Sep 9 00:26:01.004060 kernel: loop0: detected capacity change from 0 to 128016
Sep 9 00:26:01.004108 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 00:26:01.004133 kernel: loop1: detected capacity change from 0 to 229808
Sep 9 00:26:01.004165 kernel: loop2: detected capacity change from 0 to 111000
Sep 9 00:26:01.004189 kernel: loop3: detected capacity change from 0 to 128016
Sep 9 00:26:01.004210 kernel: loop4: detected capacity change from 0 to 229808
Sep 9 00:26:01.004234 kernel: loop5: detected capacity change from 0 to 111000
Sep 9 00:26:00.342191 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 00:26:00.345023 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 00:26:00.369667 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:26:00.389621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:26:00.425332 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 00:26:00.428501 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 00:26:00.592531 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 00:26:00.616926 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 00:26:00.622750 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 00:26:00.626573 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 00:26:00.631713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:26:00.790512 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 9 00:26:00.790525 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 9 00:26:00.797168 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:26:01.007141 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 00:26:01.093225 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 00:26:01.094092 (sd-merge)[1272]: Merged extensions into '/usr'.
Sep 9 00:26:01.189659 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 00:26:01.189845 systemd[1]: Reloading...
Sep 9 00:26:01.267577 zram_generator::config[1303]: No configuration found.
Sep 9 00:26:01.573279 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 00:26:01.652053 systemd[1]: Reloading finished in 461 ms.
Sep 9 00:26:01.689499 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 00:26:01.756113 systemd[1]: Starting ensure-sysext.service...
Sep 9 00:26:01.815348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:26:01.829934 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)...
Sep 9 00:26:01.829952 systemd[1]: Reloading...
Sep 9 00:26:01.848898 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 00:26:01.848938 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 00:26:01.849254 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 00:26:01.849509 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 00:26:01.851324 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 00:26:01.851712 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Sep 9 00:26:01.851789 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Sep 9 00:26:01.926613 zram_generator::config[1370]: No configuration found.
Sep 9 00:26:02.085690 systemd[1]: Reloading finished in 255 ms.
Sep 9 00:26:02.111339 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:26:02.111353 systemd-tmpfiles[1340]: Skipping /boot
Sep 9 00:26:02.114364 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 00:26:02.124439 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:26:02.124759 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:26:02.126474 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:26:02.126502 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:26:02.126829 systemd-tmpfiles[1340]: Skipping /boot
Sep 9 00:26:02.142468 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:26:02.146798 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:26:02.162503 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:26:02.162975 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:26:02.163101 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:26:02.165908 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:26:02.166155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:26:02.166368 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:26:02.166505 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:26:02.166682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:26:02.169781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:26:02.170048 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:26:02.174324 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:26:02.192621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:26:02.192875 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:26:02.193189 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:26:02.195725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:26:02.196046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:26:02.198073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:26:02.198330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:26:02.200368 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:26:02.200630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:26:02.202317 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:26:02.202611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:26:02.207837 systemd[1]: Finished ensure-sysext.service.
Sep 9 00:26:02.213454 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:26:02.213561 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:26:02.472484 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 00:26:02.473647 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 00:26:02.496389 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:26:02.502251 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:26:02.520200 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 00:26:02.524772 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 00:26:02.550451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:26:02.557420 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 00:26:02.559895 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 00:26:02.564335 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 00:26:02.578641 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 00:26:02.591037 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 00:26:02.651724 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 00:26:02.665182 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 00:26:02.674087 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:26:02.677054 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 00:26:02.711307 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 00:26:02.713008 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:26:02.715560 systemd-udevd[1449]: Using default interface naming scheme 'v255'.
Sep 9 00:26:02.731087 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 00:26:02.733317 augenrules[1456]: No rules
Sep 9 00:26:02.739458 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:26:02.739930 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:26:02.746423 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:26:02.754719 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:26:02.832039 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 00:26:02.848259 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 00:26:02.849856 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 00:26:02.922034 systemd-resolved[1425]: Positive Trust Anchors:
Sep 9 00:26:02.922372 systemd-resolved[1425]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:26:02.922472 systemd-resolved[1425]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:26:02.923323 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:26:02.926693 systemd-resolved[1425]: Defaulting to hostname 'linux'.
Sep 9 00:26:02.926957 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 00:26:02.931218 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:26:02.932767 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:26:02.934322 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:26:02.934462 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 00:26:02.938874 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 00:26:02.940465 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 00:26:02.942231 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 00:26:02.943852 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 00:26:02.945447 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 00:26:02.947166 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:26:02.947202 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:26:02.948347 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:26:02.950635 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 00:26:02.954677 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 00:26:02.958819 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 00:26:02.961463 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 00:26:02.963147 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 00:26:02.969066 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 00:26:02.970853 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 00:26:02.973131 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 00:26:02.975163 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:26:02.976326 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:26:02.977689 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:26:02.977720 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:26:02.978810 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 00:26:02.980199 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 00:26:02.983693 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 00:26:02.986827 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 00:26:02.986908 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 00:26:02.990750 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 00:26:02.996559 systemd-networkd[1470]: lo: Link UP
Sep 9 00:26:02.996579 systemd-networkd[1470]: lo: Gained carrier
Sep 9 00:26:02.998500 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 00:26:03.002579 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 00:26:03.003817 systemd-networkd[1470]: Enumeration completed
Sep 9 00:26:03.008799 systemd-networkd[1470]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:26:03.008807 systemd-networkd[1470]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:26:03.010219 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 00:26:03.013002 systemd-networkd[1470]: eth0: Link UP
Sep 9 00:26:03.013208 systemd-networkd[1470]: eth0: Gained carrier
Sep 9 00:26:03.013237 systemd-networkd[1470]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:26:03.014908 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 00:26:03.021111 jq[1509]: false
Sep 9 00:26:03.022856 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Refreshing passwd entry cache
Sep 9 00:26:03.023152 oslogin_cache_refresh[1511]: Refreshing passwd entry cache
Sep 9 00:26:03.023782 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 00:26:03.025147 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 00:26:03.025811 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 00:26:03.027281 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 00:26:03.027704 oslogin_cache_refresh[1511]: Failure getting users, quitting
Sep 9 00:26:03.028348 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Failure getting users, quitting
Sep 9 00:26:03.028348 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 00:26:03.028348 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Refreshing group entry cache
Sep 9 00:26:03.028348 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Failure getting groups, quitting
Sep 9 00:26:03.028348 google_oslogin_nss_cache[1511]: oslogin_cache_refresh[1511]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 00:26:03.027720 oslogin_cache_refresh[1511]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 00:26:03.027767 oslogin_cache_refresh[1511]: Refreshing group entry cache
Sep 9 00:26:03.028201 oslogin_cache_refresh[1511]: Failure getting groups, quitting
Sep 9 00:26:03.028210 oslogin_cache_refresh[1511]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 00:26:03.029722 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 00:26:03.036643 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 00:26:03.036630 systemd-networkd[1470]: eth0: DHCPv4 address 10.0.0.40/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:26:03.037388 systemd-timesyncd[1428]: Network configuration changed, trying to establish connection.
Sep 9 00:26:03.697330 systemd-timesyncd[1428]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 00:26:03.697398 systemd-timesyncd[1428]: Initial clock synchronization to Tue 2025-09-09 00:26:03.697244 UTC.
Sep 9 00:26:03.697434 systemd-resolved[1425]: Clock change detected. Flushing caches.
Sep 9 00:26:03.701285 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:26:03.703574 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 00:26:03.706020 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 00:26:03.706140 extend-filesystems[1510]: Found /dev/vda6
Sep 9 00:26:03.709326 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 9 00:26:03.707902 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 00:26:03.708175 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 00:26:03.708527 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 00:26:03.709780 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 00:26:03.715584 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 00:26:03.715864 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 00:26:03.717695 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 00:26:03.717935 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 00:26:03.718533 kernel: ACPI: button: Power Button [PWRF]
Sep 9 00:26:03.733626 jq[1526]: true
Sep 9 00:26:03.735377 extend-filesystems[1510]: Found /dev/vda9
Sep 9 00:26:03.737433 update_engine[1525]: I20250909 00:26:03.737344 1525 main.cc:92] Flatcar Update Engine starting
Sep 9 00:26:03.741532 extend-filesystems[1510]: Checking size of /dev/vda9
Sep 9 00:26:03.780081 systemd[1]: Reached target network.target - Network.
Sep 9 00:26:03.786441 jq[1539]: true
Sep 9 00:26:03.792186 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 00:26:03.797408 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 9 00:26:03.799236 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 9 00:26:03.799403 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 00:26:03.799670 extend-filesystems[1510]: Resized partition /dev/vda9
Sep 9 00:26:03.805010 extend-filesystems[1562]: resize2fs 1.47.2 (1-Jan-2025)
Sep 9 00:26:03.803767 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 00:26:03.807451 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 00:26:03.815440 tar[1528]: linux-amd64/LICENSE
Sep 9 00:26:03.815440 tar[1528]: linux-amd64/helm
Sep 9 00:26:03.818529 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 00:26:03.856447 systemd-logind[1517]: New seat seat0.
Sep 9 00:26:03.857496 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 00:26:03.859892 dbus-daemon[1507]: [system] SELinux support is enabled
Sep 9 00:26:03.860071 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 00:26:03.864145 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 00:26:03.864181 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 00:26:03.865996 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 00:26:03.866016 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 00:26:03.876983 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 00:26:03.911028 update_engine[1525]: I20250909 00:26:03.888652 1525 update_check_scheduler.cc:74] Next update check in 8m11s
Sep 9 00:26:03.884103 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 00:26:03.879870 dbus-daemon[1507]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 9 00:26:03.885099 (ntainerd)[1569]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 00:26:03.888879 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 00:26:03.912835 extend-filesystems[1562]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 00:26:03.912835 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 00:26:03.912835 extend-filesystems[1562]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 00:26:03.890618 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 00:26:03.922954 extend-filesystems[1510]: Resized filesystem in /dev/vda9
Sep 9 00:26:03.919539 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 00:26:03.920651 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 00:26:03.954993 bash[1586]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 00:26:03.958032 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 00:26:03.962432 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 00:26:04.216084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:26:04.225171 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:26:04.225471 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:26:04.262922 kernel: kvm_amd: TSC scaling supported
Sep 9 00:26:04.263049 kernel: kvm_amd: Nested Virtualization enabled
Sep 9 00:26:04.263066 kernel: kvm_amd: Nested Paging enabled
Sep 9 00:26:04.263078 kernel: kvm_amd: LBR virtualization supported
Sep 9 00:26:04.263090 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 9 00:26:04.263103 kernel: kvm_amd: Virtual GIF supported
Sep 9 00:26:04.302080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:26:04.316370 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 00:26:04.359645 systemd-logind[1517]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 9 00:26:04.362983 sshd_keygen[1553]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 00:26:04.410352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:26:04.443313 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 00:26:04.449225 locksmithd[1580]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 00:26:04.458796 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 00:26:04.476543 kernel: EDAC MC: Ver: 3.0.0
Sep 9 00:26:04.489905 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 00:26:04.490450 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 00:26:04.494771 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 00:26:04.520806 containerd[1569]: time="2025-09-09T00:26:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 00:26:04.522131 containerd[1569]: time="2025-09-09T00:26:04.522067164Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 00:26:04.529909 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 00:26:04.534086 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 00:26:04.536956 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 00:26:04.538416 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 00:26:04.543485 containerd[1569]: time="2025-09-09T00:26:04.543383782Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.059µs"
Sep 9 00:26:04.543485 containerd[1569]: time="2025-09-09T00:26:04.543434377Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 00:26:04.543485 containerd[1569]: time="2025-09-09T00:26:04.543461999Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 00:26:04.543739 containerd[1569]: time="2025-09-09T00:26:04.543704874Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 00:26:04.543739 containerd[1569]: time="2025-09-09T00:26:04.543722547Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 00:26:04.543784 containerd[1569]: time="2025-09-09T00:26:04.543757824Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:26:04.543852 containerd[1569]: time="2025-09-09T00:26:04.543829077Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:26:04.543852 containerd[1569]: time="2025-09-09T00:26:04.543845318Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:26:04.544240 containerd[1569]: time="2025-09-09T00:26:04.544191948Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:26:04.544240 containerd[1569]: time="2025-09-09T00:26:04.544218808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:26:04.544240 containerd[1569]: time="2025-09-09T00:26:04.544230720Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:26:04.544240 containerd[1569]: time="2025-09-09T00:26:04.544240378Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 00:26:04.544367 containerd[1569]: time="2025-09-09T00:26:04.544343582Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 00:26:04.545529 containerd[1569]: time="2025-09-09T00:26:04.544639647Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:26:04.545529 containerd[1569]: time="2025-09-09T00:26:04.544678851Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:26:04.545529 containerd[1569]: time="2025-09-09T00:26:04.544689741Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 00:26:04.545529 containerd[1569]: time="2025-09-09T00:26:04.544722623Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 00:26:04.545529 containerd[1569]: time="2025-09-09T00:26:04.544953576Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 00:26:04.545529 containerd[1569]: time="2025-09-09T00:26:04.545019349Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:26:04.551862 containerd[1569]: time="2025-09-09T00:26:04.551806485Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 00:26:04.551983 containerd[1569]: time="2025-09-09T00:26:04.551883589Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 00:26:04.551983 containerd[1569]: time="2025-09-09T00:26:04.551900391Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 00:26:04.551983 containerd[1569]: time="2025-09-09T00:26:04.551912233Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 00:26:04.552090 containerd[1569]: time="2025-09-09T00:26:04.552000148Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 00:26:04.552090 containerd[1569]: time="2025-09-09T00:26:04.552011860Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 00:26:04.552090 containerd[1569]: time="2025-09-09T00:26:04.552023852Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 00:26:04.552090 containerd[1569]: time="2025-09-09T00:26:04.552040103Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 00:26:04.552090 containerd[1569]: time="2025-09-09T00:26:04.552050803Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 00:26:04.552090 containerd[1569]: time="2025-09-09T00:26:04.552061172Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 00:26:04.552090 containerd[1569]: time="2025-09-09T00:26:04.552070740Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 00:26:04.552090 containerd[1569]: time="2025-09-09T00:26:04.552082462Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 
00:26:04.552241 containerd[1569]: time="2025-09-09T00:26:04.552204862Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 00:26:04.552241 containerd[1569]: time="2025-09-09T00:26:04.552227214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 00:26:04.552279 containerd[1569]: time="2025-09-09T00:26:04.552241831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 00:26:04.552279 containerd[1569]: time="2025-09-09T00:26:04.552252882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 00:26:04.552279 containerd[1569]: time="2025-09-09T00:26:04.552262810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 00:26:04.552279 containerd[1569]: time="2025-09-09T00:26:04.552272499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 00:26:04.552358 containerd[1569]: time="2025-09-09T00:26:04.552282808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 00:26:04.552358 containerd[1569]: time="2025-09-09T00:26:04.552293698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 00:26:04.552358 containerd[1569]: time="2025-09-09T00:26:04.552303537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 00:26:04.552358 containerd[1569]: time="2025-09-09T00:26:04.552313515Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 00:26:04.552358 containerd[1569]: time="2025-09-09T00:26:04.552323594Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 00:26:04.552449 containerd[1569]: time="2025-09-09T00:26:04.552418783Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 00:26:04.552449 containerd[1569]: time="2025-09-09T00:26:04.552432919Z" level=info msg="Start snapshots syncer" Sep 9 00:26:04.552528 containerd[1569]: time="2025-09-09T00:26:04.552475289Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 00:26:04.552886 containerd[1569]: time="2025-09-09T00:26:04.552829934Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\
"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 00:26:04.553008 containerd[1569]: time="2025-09-09T00:26:04.552890888Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 00:26:04.554446 containerd[1569]: time="2025-09-09T00:26:04.554401270Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 00:26:04.554621 containerd[1569]: time="2025-09-09T00:26:04.554589974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 00:26:04.554621 containerd[1569]: time="2025-09-09T00:26:04.554616484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 00:26:04.554707 containerd[1569]: time="2025-09-09T00:26:04.554626212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 00:26:04.554707 containerd[1569]: time="2025-09-09T00:26:04.554638274Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 00:26:04.554707 containerd[1569]: time="2025-09-09T00:26:04.554655046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 00:26:04.554707 containerd[1569]: time="2025-09-09T00:26:04.554666698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 00:26:04.554707 containerd[1569]: time="2025-09-09T00:26:04.554678149Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 00:26:04.554707 containerd[1569]: time="2025-09-09T00:26:04.554701784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 
Sep 9 00:26:04.554707 containerd[1569]: time="2025-09-09T00:26:04.554712303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 00:26:04.554840 containerd[1569]: time="2025-09-09T00:26:04.554723103Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 00:26:04.554840 containerd[1569]: time="2025-09-09T00:26:04.554762237Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:26:04.554840 containerd[1569]: time="2025-09-09T00:26:04.554774981Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:26:04.554840 containerd[1569]: time="2025-09-09T00:26:04.554783306Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:26:04.554840 containerd[1569]: time="2025-09-09T00:26:04.554808353Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:26:04.554840 containerd[1569]: time="2025-09-09T00:26:04.554816048Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 00:26:04.554840 containerd[1569]: time="2025-09-09T00:26:04.554824604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 00:26:04.554840 containerd[1569]: time="2025-09-09T00:26:04.554833911Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 00:26:04.555009 containerd[1569]: time="2025-09-09T00:26:04.554859539Z" level=info msg="runtime interface created" Sep 9 00:26:04.555009 containerd[1569]: time="2025-09-09T00:26:04.554865921Z" level=info msg="created NRI interface" Sep 9 00:26:04.555009 
containerd[1569]: time="2025-09-09T00:26:04.554877703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 00:26:04.555009 containerd[1569]: time="2025-09-09T00:26:04.554892401Z" level=info msg="Connect containerd service" Sep 9 00:26:04.555009 containerd[1569]: time="2025-09-09T00:26:04.554932135Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:26:04.555943 containerd[1569]: time="2025-09-09T00:26:04.555894039Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:26:04.643566 tar[1528]: linux-amd64/README.md Sep 9 00:26:04.672175 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:26:04.756151 containerd[1569]: time="2025-09-09T00:26:04.755937350Z" level=info msg="Start subscribing containerd event" Sep 9 00:26:04.756151 containerd[1569]: time="2025-09-09T00:26:04.756040433Z" level=info msg="Start recovering state" Sep 9 00:26:04.756293 containerd[1569]: time="2025-09-09T00:26:04.756214440Z" level=info msg="Start event monitor" Sep 9 00:26:04.756293 containerd[1569]: time="2025-09-09T00:26:04.756244957Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:26:04.756293 containerd[1569]: time="2025-09-09T00:26:04.756212997Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:26:04.756349 containerd[1569]: time="2025-09-09T00:26:04.756255406Z" level=info msg="Start streaming server" Sep 9 00:26:04.756349 containerd[1569]: time="2025-09-09T00:26:04.756331770Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 9 00:26:04.756534 containerd[1569]: time="2025-09-09T00:26:04.756338242Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 00:26:04.756534 containerd[1569]: time="2025-09-09T00:26:04.756525593Z" level=info msg="runtime interface starting up..." Sep 9 00:26:04.756579 containerd[1569]: time="2025-09-09T00:26:04.756540361Z" level=info msg="starting plugins..." Sep 9 00:26:04.756600 containerd[1569]: time="2025-09-09T00:26:04.756576809Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 00:26:04.756786 containerd[1569]: time="2025-09-09T00:26:04.756755695Z" level=info msg="containerd successfully booted in 0.236542s" Sep 9 00:26:04.756923 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:26:05.647151 systemd-networkd[1470]: eth0: Gained IPv6LL Sep 9 00:26:05.651582 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:26:05.653663 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:26:05.658005 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:26:05.662043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:26:05.673632 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:26:05.711592 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:26:05.711999 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:26:05.714484 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:26:05.717214 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:26:06.987209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:26:07.017081 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 9 00:26:07.018625 systemd[1]: Startup finished in 3.652s (kernel) + 6.554s (initrd) + 7.197s (userspace) = 17.405s. Sep 9 00:26:07.022062 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:26:07.361929 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:26:07.363182 systemd[1]: Started sshd@0-10.0.0.40:22-10.0.0.1:45424.service - OpenSSH per-connection server daemon (10.0.0.1:45424). Sep 9 00:26:07.455779 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 45424 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:26:07.457856 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:26:07.465259 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:26:07.466472 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:26:07.474264 systemd-logind[1517]: New session 1 of user core. Sep 9 00:26:07.488970 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:26:07.492064 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:26:07.512400 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:26:07.516041 systemd-logind[1517]: New session c1 of user core. Sep 9 00:26:07.698290 systemd[1689]: Queued start job for default target default.target. Sep 9 00:26:07.719169 systemd[1689]: Created slice app.slice - User Application Slice. Sep 9 00:26:07.719202 systemd[1689]: Reached target paths.target - Paths. Sep 9 00:26:07.719250 systemd[1689]: Reached target timers.target - Timers. Sep 9 00:26:07.721051 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:26:07.736906 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Sep 9 00:26:07.737089 systemd[1689]: Reached target sockets.target - Sockets. Sep 9 00:26:07.737157 systemd[1689]: Reached target basic.target - Basic System. Sep 9 00:26:07.737206 systemd[1689]: Reached target default.target - Main User Target. Sep 9 00:26:07.737249 systemd[1689]: Startup finished in 213ms. Sep 9 00:26:07.737397 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:26:07.740223 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:26:07.813729 systemd[1]: Started sshd@1-10.0.0.40:22-10.0.0.1:45432.service - OpenSSH per-connection server daemon (10.0.0.1:45432). Sep 9 00:26:07.884407 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 45432 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:26:07.886229 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:26:07.894203 systemd-logind[1517]: New session 2 of user core. Sep 9 00:26:07.940699 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:26:07.996055 sshd[1704]: Connection closed by 10.0.0.1 port 45432 Sep 9 00:26:07.997483 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:08.010537 systemd[1]: sshd@1-10.0.0.40:22-10.0.0.1:45432.service: Deactivated successfully. Sep 9 00:26:08.012647 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:26:08.013501 systemd-logind[1517]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:26:08.017105 systemd[1]: Started sshd@2-10.0.0.40:22-10.0.0.1:45434.service - OpenSSH per-connection server daemon (10.0.0.1:45434). Sep 9 00:26:08.017914 systemd-logind[1517]: Removed session 2. 
Sep 9 00:26:08.087919 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 45434 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:26:08.089634 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:26:08.095562 systemd-logind[1517]: New session 3 of user core. Sep 9 00:26:08.101747 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:26:08.153797 sshd[1713]: Connection closed by 10.0.0.1 port 45434 Sep 9 00:26:08.154291 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:08.161793 kubelet[1673]: E0909 00:26:08.161715 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:26:08.163623 systemd[1]: sshd@2-10.0.0.40:22-10.0.0.1:45434.service: Deactivated successfully. Sep 9 00:26:08.165613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:26:08.165794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:26:08.166164 systemd[1]: kubelet.service: Consumed 1.724s CPU time, 267.3M memory peak. Sep 9 00:26:08.166652 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:26:08.167385 systemd-logind[1517]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:26:08.171526 systemd[1]: Started sshd@3-10.0.0.40:22-10.0.0.1:45440.service - OpenSSH per-connection server daemon (10.0.0.1:45440). Sep 9 00:26:08.172163 systemd-logind[1517]: Removed session 3. 
Sep 9 00:26:08.228021 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 45440 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:26:08.229889 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:26:08.236828 systemd-logind[1517]: New session 4 of user core. Sep 9 00:26:08.246685 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:26:08.323166 sshd[1723]: Connection closed by 10.0.0.1 port 45440 Sep 9 00:26:08.323817 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:08.335008 systemd[1]: sshd@3-10.0.0.40:22-10.0.0.1:45440.service: Deactivated successfully. Sep 9 00:26:08.339101 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:26:08.340326 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:26:08.344698 systemd[1]: Started sshd@4-10.0.0.40:22-10.0.0.1:45456.service - OpenSSH per-connection server daemon (10.0.0.1:45456). Sep 9 00:26:08.345382 systemd-logind[1517]: Removed session 4. Sep 9 00:26:08.404416 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 45456 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:26:08.406369 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:26:08.412462 systemd-logind[1517]: New session 5 of user core. Sep 9 00:26:08.419708 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 00:26:08.562342 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:26:08.562764 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:26:08.583625 sudo[1733]: pam_unix(sudo:session): session closed for user root Sep 9 00:26:08.586189 sshd[1732]: Connection closed by 10.0.0.1 port 45456 Sep 9 00:26:08.586770 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:08.602735 systemd[1]: sshd@4-10.0.0.40:22-10.0.0.1:45456.service: Deactivated successfully. Sep 9 00:26:08.605859 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:26:08.607913 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:26:08.613286 systemd[1]: Started sshd@5-10.0.0.40:22-10.0.0.1:45464.service - OpenSSH per-connection server daemon (10.0.0.1:45464). Sep 9 00:26:08.614772 systemd-logind[1517]: Removed session 5. Sep 9 00:26:08.698681 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 45464 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:26:08.700754 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:26:08.706172 systemd-logind[1517]: New session 6 of user core. Sep 9 00:26:08.720676 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 9 00:26:08.781037 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:26:08.781370 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:26:08.793398 sudo[1744]: pam_unix(sudo:session): session closed for user root Sep 9 00:26:08.801575 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 00:26:08.801910 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:26:08.816550 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:26:08.887369 augenrules[1766]: No rules Sep 9 00:26:08.889965 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:26:08.890376 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:26:08.892481 sudo[1743]: pam_unix(sudo:session): session closed for user root Sep 9 00:26:08.895085 sshd[1742]: Connection closed by 10.0.0.1 port 45464 Sep 9 00:26:08.895783 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Sep 9 00:26:08.912165 systemd[1]: sshd@5-10.0.0.40:22-10.0.0.1:45464.service: Deactivated successfully. Sep 9 00:26:08.914732 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:26:08.915884 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:26:08.921530 systemd[1]: Started sshd@6-10.0.0.40:22-10.0.0.1:45478.service - OpenSSH per-connection server daemon (10.0.0.1:45478). Sep 9 00:26:08.922697 systemd-logind[1517]: Removed session 6. Sep 9 00:26:08.999855 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 45478 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:26:09.002604 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:26:09.012083 systemd-logind[1517]: New session 7 of user core. 
Sep 9 00:26:09.021878 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 00:26:09.079409 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:26:09.079784 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:26:09.800395 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:26:09.818093 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:26:10.434075 dockerd[1800]: time="2025-09-09T00:26:10.433947848Z" level=info msg="Starting up" Sep 9 00:26:10.435143 dockerd[1800]: time="2025-09-09T00:26:10.435108745Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 00:26:10.458662 dockerd[1800]: time="2025-09-09T00:26:10.458557010Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 00:26:12.452644 dockerd[1800]: time="2025-09-09T00:26:12.452543575Z" level=info msg="Loading containers: start." Sep 9 00:26:12.757541 kernel: Initializing XFRM netlink socket Sep 9 00:26:13.375052 systemd-networkd[1470]: docker0: Link UP Sep 9 00:26:13.431879 dockerd[1800]: time="2025-09-09T00:26:13.431801954Z" level=info msg="Loading containers: done." Sep 9 00:26:13.452597 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1976200914-merged.mount: Deactivated successfully. 
Sep 9 00:26:13.454688 dockerd[1800]: time="2025-09-09T00:26:13.454599900Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:26:13.455077 dockerd[1800]: time="2025-09-09T00:26:13.454747868Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 00:26:13.455077 dockerd[1800]: time="2025-09-09T00:26:13.454914871Z" level=info msg="Initializing buildkit" Sep 9 00:26:13.502139 dockerd[1800]: time="2025-09-09T00:26:13.502038768Z" level=info msg="Completed buildkit initialization" Sep 9 00:26:13.508576 dockerd[1800]: time="2025-09-09T00:26:13.508461200Z" level=info msg="Daemon has completed initialization" Sep 9 00:26:13.508940 dockerd[1800]: time="2025-09-09T00:26:13.508850229Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:26:13.508955 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:26:14.852529 containerd[1569]: time="2025-09-09T00:26:14.852449207Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 00:26:16.141039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount979284256.mount: Deactivated successfully. 
Sep 9 00:26:18.153240 containerd[1569]: time="2025-09-09T00:26:18.153156753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:18.154352 containerd[1569]: time="2025-09-09T00:26:18.154314644Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 9 00:26:18.155646 containerd[1569]: time="2025-09-09T00:26:18.155601998Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:18.158344 containerd[1569]: time="2025-09-09T00:26:18.158307020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:18.159556 containerd[1569]: time="2025-09-09T00:26:18.159500127Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 3.306997549s" Sep 9 00:26:18.159612 containerd[1569]: time="2025-09-09T00:26:18.159561261Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 9 00:26:18.160292 containerd[1569]: time="2025-09-09T00:26:18.160267646Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 00:26:18.286009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Sep 9 00:26:18.287929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:26:18.561195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:26:18.579021 (kubelet)[2083]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:26:19.044588 kubelet[2083]: E0909 00:26:19.044482 2083 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:26:19.053406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:26:19.053663 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:26:19.054149 systemd[1]: kubelet.service: Consumed 308ms CPU time, 111.1M memory peak. 
Sep 9 00:26:23.381608 containerd[1569]: time="2025-09-09T00:26:23.381491279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:23.437544 containerd[1569]: time="2025-09-09T00:26:23.437422700Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 9 00:26:23.458472 containerd[1569]: time="2025-09-09T00:26:23.458386187Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:23.535591 containerd[1569]: time="2025-09-09T00:26:23.535523960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:23.536778 containerd[1569]: time="2025-09-09T00:26:23.536739108Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 5.376441076s" Sep 9 00:26:23.536842 containerd[1569]: time="2025-09-09T00:26:23.536779544Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 9 00:26:23.537479 containerd[1569]: time="2025-09-09T00:26:23.537453257Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 00:26:26.789209 containerd[1569]: time="2025-09-09T00:26:26.789119731Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:26.801414 containerd[1569]: time="2025-09-09T00:26:26.801299108Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 9 00:26:26.816029 containerd[1569]: time="2025-09-09T00:26:26.815923278Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:26.831767 containerd[1569]: time="2025-09-09T00:26:26.831700771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:26.832981 containerd[1569]: time="2025-09-09T00:26:26.832907403Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 3.295409041s" Sep 9 00:26:26.833076 containerd[1569]: time="2025-09-09T00:26:26.833050371Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 9 00:26:26.833671 containerd[1569]: time="2025-09-09T00:26:26.833617545Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 00:26:28.438654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797981907.mount: Deactivated successfully. Sep 9 00:26:29.286023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 9 00:26:29.287914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:26:29.619973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:26:29.650021 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:26:30.159895 kubelet[2115]: E0909 00:26:30.159792 2115 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:26:30.165395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:26:30.165666 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:26:30.166111 systemd[1]: kubelet.service: Consumed 491ms CPU time, 109.1M memory peak. 
Sep 9 00:26:30.401734 containerd[1569]: time="2025-09-09T00:26:30.401619507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:30.421462 containerd[1569]: time="2025-09-09T00:26:30.421215440Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 9 00:26:30.423635 containerd[1569]: time="2025-09-09T00:26:30.423580975Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:30.428156 containerd[1569]: time="2025-09-09T00:26:30.428074851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:30.428754 containerd[1569]: time="2025-09-09T00:26:30.428712397Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 3.595062581s" Sep 9 00:26:30.428754 containerd[1569]: time="2025-09-09T00:26:30.428748504Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 9 00:26:30.429428 containerd[1569]: time="2025-09-09T00:26:30.429378365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 00:26:31.154545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927965257.mount: Deactivated successfully. 
Sep 9 00:26:32.375242 containerd[1569]: time="2025-09-09T00:26:32.375144697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:32.376203 containerd[1569]: time="2025-09-09T00:26:32.376161984Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 9 00:26:32.377922 containerd[1569]: time="2025-09-09T00:26:32.377874655Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:32.381220 containerd[1569]: time="2025-09-09T00:26:32.381177368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:32.382305 containerd[1569]: time="2025-09-09T00:26:32.382269726Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.952858188s" Sep 9 00:26:32.382364 containerd[1569]: time="2025-09-09T00:26:32.382313167Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 9 00:26:32.383019 containerd[1569]: time="2025-09-09T00:26:32.382835837Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:26:36.866785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215666043.mount: Deactivated successfully. 
Sep 9 00:26:37.550047 containerd[1569]: time="2025-09-09T00:26:37.549921993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:26:37.569804 containerd[1569]: time="2025-09-09T00:26:37.569715442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:26:37.681238 containerd[1569]: time="2025-09-09T00:26:37.681174122Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:26:37.739444 containerd[1569]: time="2025-09-09T00:26:37.739350320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:26:37.740112 containerd[1569]: time="2025-09-09T00:26:37.740070743Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 5.357203396s" Sep 9 00:26:37.740195 containerd[1569]: time="2025-09-09T00:26:37.740117142Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:26:37.740782 containerd[1569]: time="2025-09-09T00:26:37.740749977Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 00:26:39.042490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408211118.mount: Deactivated 
successfully. Sep 9 00:26:40.286017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 00:26:40.287987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:26:41.246182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:26:41.267852 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:26:41.450185 kubelet[2204]: E0909 00:26:41.450091 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:26:41.455028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:26:41.455281 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:26:41.455800 systemd[1]: kubelet.service: Consumed 259ms CPU time, 110.6M memory peak. 
Sep 9 00:26:43.402846 containerd[1569]: time="2025-09-09T00:26:43.402719793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:43.403537 containerd[1569]: time="2025-09-09T00:26:43.403443682Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 9 00:26:43.405282 containerd[1569]: time="2025-09-09T00:26:43.405137769Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:43.408551 containerd[1569]: time="2025-09-09T00:26:43.408466622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:26:43.410014 containerd[1569]: time="2025-09-09T00:26:43.409963844Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.669182337s" Sep 9 00:26:43.410014 containerd[1569]: time="2025-09-09T00:26:43.410014721Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 9 00:26:46.538546 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:26:46.538763 systemd[1]: kubelet.service: Consumed 259ms CPU time, 110.6M memory peak. Sep 9 00:26:46.541455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:26:46.571341 systemd[1]: Reload requested from client PID 2284 ('systemctl') (unit session-7.scope)... 
Sep 9 00:26:46.571367 systemd[1]: Reloading... Sep 9 00:26:46.714583 zram_generator::config[2327]: No configuration found. Sep 9 00:26:47.539879 systemd[1]: Reloading finished in 968 ms. Sep 9 00:26:47.608403 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:26:47.608570 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:26:47.608983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:26:47.609047 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.4M memory peak. Sep 9 00:26:47.610951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:26:47.802637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:26:47.807719 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:26:47.853799 kubelet[2375]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:26:47.853799 kubelet[2375]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:26:47.853799 kubelet[2375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 00:26:47.854257 kubelet[2375]: I0909 00:26:47.853835 2375 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:26:49.484751 kubelet[2375]: I0909 00:26:49.484679 2375 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:26:49.484751 kubelet[2375]: I0909 00:26:49.484723 2375 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:26:49.485566 kubelet[2375]: I0909 00:26:49.485539 2375 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:26:49.521413 kubelet[2375]: E0909 00:26:49.521356 2375 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:26:49.521813 kubelet[2375]: I0909 00:26:49.521789 2375 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:26:49.529560 kubelet[2375]: I0909 00:26:49.529536 2375 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:26:49.535332 kubelet[2375]: I0909 00:26:49.535305 2375 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:26:49.535577 kubelet[2375]: I0909 00:26:49.535542 2375 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:26:49.535721 kubelet[2375]: I0909 00:26:49.535563 2375 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:26:49.535721 kubelet[2375]: I0909 00:26:49.535720 2375 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:26:49.535899 
kubelet[2375]: I0909 00:26:49.535728 2375 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:26:49.537010 kubelet[2375]: I0909 00:26:49.536919 2375 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:26:49.540369 kubelet[2375]: I0909 00:26:49.540333 2375 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:26:49.540369 kubelet[2375]: I0909 00:26:49.540359 2375 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:26:49.542245 kubelet[2375]: I0909 00:26:49.542221 2375 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:26:49.542245 kubelet[2375]: I0909 00:26:49.542245 2375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:26:49.542420 kubelet[2375]: E0909 00:26:49.542362 2375 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:26:49.542805 kubelet[2375]: E0909 00:26:49.542752 2375 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:26:49.545527 kubelet[2375]: I0909 00:26:49.545470 2375 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 00:26:49.545930 kubelet[2375]: I0909 00:26:49.545898 2375 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:26:49.546995 kubelet[2375]: W0909 00:26:49.546964 2375 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:26:49.549931 kubelet[2375]: I0909 00:26:49.549903 2375 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:26:49.549995 kubelet[2375]: I0909 00:26:49.549966 2375 server.go:1289] "Started kubelet" Sep 9 00:26:49.550344 kubelet[2375]: I0909 00:26:49.550290 2375 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:26:49.551828 kubelet[2375]: I0909 00:26:49.551093 2375 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:26:49.551828 kubelet[2375]: I0909 00:26:49.551177 2375 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:26:49.552049 kubelet[2375]: I0909 00:26:49.552031 2375 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:26:49.553234 kubelet[2375]: E0909 00:26:49.553193 2375 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:26:49.553377 kubelet[2375]: I0909 00:26:49.553335 2375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:26:49.555034 kubelet[2375]: I0909 00:26:49.553820 2375 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:26:49.555034 kubelet[2375]: E0909 00:26:49.553695 2375 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.40:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.40:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863759ff972ceb2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:26:49.549926066 +0000 UTC m=+1.734895773,LastTimestamp:2025-09-09 00:26:49.549926066 +0000 UTC m=+1.734895773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:26:49.555857 kubelet[2375]: E0909 00:26:49.555810 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:26:49.555857 kubelet[2375]: I0909 00:26:49.555860 2375 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:26:49.556045 kubelet[2375]: I0909 00:26:49.556027 2375 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:26:49.556094 kubelet[2375]: I0909 00:26:49.556086 2375 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:26:49.556500 kubelet[2375]: E0909 00:26:49.556440 2375 reflector.go:200] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:26:49.556576 kubelet[2375]: E0909 00:26:49.556523 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="200ms" Sep 9 00:26:49.556740 kubelet[2375]: I0909 00:26:49.556719 2375 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:26:49.556859 kubelet[2375]: I0909 00:26:49.556838 2375 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:26:49.557904 kubelet[2375]: I0909 00:26:49.557884 2375 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:26:49.576461 kubelet[2375]: I0909 00:26:49.576434 2375 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:26:49.576461 kubelet[2375]: I0909 00:26:49.576452 2375 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:26:49.576644 kubelet[2375]: I0909 00:26:49.576475 2375 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:26:49.579018 kubelet[2375]: I0909 00:26:49.578962 2375 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 9 00:26:49.579651 kubelet[2375]: I0909 00:26:49.579629 2375 policy_none.go:49] "None policy: Start" Sep 9 00:26:49.579651 kubelet[2375]: I0909 00:26:49.579651 2375 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:26:49.579725 kubelet[2375]: I0909 00:26:49.579665 2375 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:26:49.580483 kubelet[2375]: I0909 00:26:49.580455 2375 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:26:49.580576 kubelet[2375]: I0909 00:26:49.580492 2375 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:26:49.580576 kubelet[2375]: I0909 00:26:49.580547 2375 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:26:49.580576 kubelet[2375]: I0909 00:26:49.580561 2375 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:26:49.580706 kubelet[2375]: E0909 00:26:49.580608 2375 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:26:49.582023 kubelet[2375]: E0909 00:26:49.581310 2375 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:26:49.588427 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:26:49.605110 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:26:49.609097 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 00:26:49.613787 update_engine[1525]: I20250909 00:26:49.613662 1525 update_attempter.cc:509] Updating boot flags... Sep 9 00:26:49.624916 kubelet[2375]: E0909 00:26:49.624869 2375 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:26:49.625733 kubelet[2375]: I0909 00:26:49.625114 2375 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:26:49.625733 kubelet[2375]: I0909 00:26:49.625134 2375 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:26:49.625849 kubelet[2375]: I0909 00:26:49.625806 2375 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:26:49.628075 kubelet[2375]: E0909 00:26:49.628043 2375 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:26:49.628156 kubelet[2375]: E0909 00:26:49.628106 2375 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:26:49.731790 kubelet[2375]: I0909 00:26:49.730022 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:26:49.731790 kubelet[2375]: E0909 00:26:49.730364 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost" Sep 9 00:26:49.757623 kubelet[2375]: E0909 00:26:49.757477 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="400ms" Sep 9 00:26:49.758600 kubelet[2375]: I0909 00:26:49.758433 2375 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae0056a3ee2f21f385d60082831f6042-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae0056a3ee2f21f385d60082831f6042\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:26:49.758600 kubelet[2375]: I0909 00:26:49.758481 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae0056a3ee2f21f385d60082831f6042-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae0056a3ee2f21f385d60082831f6042\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:26:49.758600 kubelet[2375]: I0909 00:26:49.758534 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae0056a3ee2f21f385d60082831f6042-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae0056a3ee2f21f385d60082831f6042\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:26:49.758600 kubelet[2375]: I0909 00:26:49.758565 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:26:49.758600 kubelet[2375]: I0909 00:26:49.758588 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:26:49.758909 kubelet[2375]: I0909 00:26:49.758609 2375 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:26:49.758909 kubelet[2375]: I0909 00:26:49.758632 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:26:49.758909 kubelet[2375]: I0909 00:26:49.758653 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:26:49.758909 kubelet[2375]: I0909 00:26:49.758676 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:26:49.759087 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. 
Sep 9 00:26:49.820356 kubelet[2375]: E0909 00:26:49.820304 2375 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:26:49.824655 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice.
Sep 9 00:26:49.834583 kubelet[2375]: E0909 00:26:49.834501 2375 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:26:49.838000 systemd[1]: Created slice kubepods-burstable-podae0056a3ee2f21f385d60082831f6042.slice - libcontainer container kubepods-burstable-podae0056a3ee2f21f385d60082831f6042.slice.
Sep 9 00:26:49.840357 kubelet[2375]: E0909 00:26:49.840306 2375 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:26:49.932278 kubelet[2375]: I0909 00:26:49.932242 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:26:49.932732 kubelet[2375]: E0909 00:26:49.932685 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost"
Sep 9 00:26:50.125051 kubelet[2375]: E0909 00:26:50.124976 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:50.125889 containerd[1569]: time="2025-09-09T00:26:50.125823316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}"
Sep 9 00:26:50.135187 kubelet[2375]: E0909 00:26:50.135126 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:50.135734 containerd[1569]: time="2025-09-09T00:26:50.135691148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}"
Sep 9 00:26:50.141156 kubelet[2375]: E0909 00:26:50.141105 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:50.141749 containerd[1569]: time="2025-09-09T00:26:50.141710679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae0056a3ee2f21f385d60082831f6042,Namespace:kube-system,Attempt:0,}"
Sep 9 00:26:50.158829 kubelet[2375]: E0909 00:26:50.158750 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.40:6443: connect: connection refused" interval="800ms"
Sep 9 00:26:50.202612 containerd[1569]: time="2025-09-09T00:26:50.202551882Z" level=info msg="connecting to shim 8aeeecb93f4de7a44114696f69155e90bb2c9cdf2a1fb0a9d446170a8d1fc3ed" address="unix:///run/containerd/s/903e8a574f3dd0dc8461a7c7a53e68d08d70ce76f05728344f9d84078cd87910" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:26:50.227536 containerd[1569]: time="2025-09-09T00:26:50.226594632Z" level=info msg="connecting to shim 924d992cc0a1881234406b6591de840b8f445bb907a4bd376fa6800951d1d431" address="unix:///run/containerd/s/087b439e29a6b26c29180c797aa939f11371b046ca19304cd765c140e5e84420" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:26:50.244393 containerd[1569]: time="2025-09-09T00:26:50.244345145Z" level=info msg="connecting to shim 3df267779652e155f2aa66f2f1677d67361b342885dadf7b27040069f1a91c3a" address="unix:///run/containerd/s/8f0e4e679b5137fed4342fddb7f8afc19d3ff8e407db701d0594aeca0ffb84b6" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:26:50.255009 systemd[1]: Started cri-containerd-8aeeecb93f4de7a44114696f69155e90bb2c9cdf2a1fb0a9d446170a8d1fc3ed.scope - libcontainer container 8aeeecb93f4de7a44114696f69155e90bb2c9cdf2a1fb0a9d446170a8d1fc3ed.
Sep 9 00:26:50.276774 systemd[1]: Started cri-containerd-924d992cc0a1881234406b6591de840b8f445bb907a4bd376fa6800951d1d431.scope - libcontainer container 924d992cc0a1881234406b6591de840b8f445bb907a4bd376fa6800951d1d431.
Sep 9 00:26:50.288658 systemd[1]: Started cri-containerd-3df267779652e155f2aa66f2f1677d67361b342885dadf7b27040069f1a91c3a.scope - libcontainer container 3df267779652e155f2aa66f2f1677d67361b342885dadf7b27040069f1a91c3a.
Sep 9 00:26:50.338438 kubelet[2375]: I0909 00:26:50.338403 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:26:50.339303 kubelet[2375]: E0909 00:26:50.339278 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.40:6443/api/v1/nodes\": dial tcp 10.0.0.40:6443: connect: connection refused" node="localhost"
Sep 9 00:26:50.373019 containerd[1569]: time="2025-09-09T00:26:50.372960230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"8aeeecb93f4de7a44114696f69155e90bb2c9cdf2a1fb0a9d446170a8d1fc3ed\""
Sep 9 00:26:50.386648 kubelet[2375]: E0909 00:26:50.374343 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:50.588956 containerd[1569]: time="2025-09-09T00:26:50.588894671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae0056a3ee2f21f385d60082831f6042,Namespace:kube-system,Attempt:0,} returns sandbox id \"3df267779652e155f2aa66f2f1677d67361b342885dadf7b27040069f1a91c3a\""
Sep 9 00:26:50.589452 kubelet[2375]: E0909 00:26:50.589425 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:50.666648 containerd[1569]: time="2025-09-09T00:26:50.666492092Z" level=info msg="CreateContainer within sandbox \"8aeeecb93f4de7a44114696f69155e90bb2c9cdf2a1fb0a9d446170a8d1fc3ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 00:26:50.667528 containerd[1569]: time="2025-09-09T00:26:50.667475876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"924d992cc0a1881234406b6591de840b8f445bb907a4bd376fa6800951d1d431\""
Sep 9 00:26:50.668182 kubelet[2375]: E0909 00:26:50.668145 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:50.671307 containerd[1569]: time="2025-09-09T00:26:50.671261837Z" level=info msg="CreateContainer within sandbox \"3df267779652e155f2aa66f2f1677d67361b342885dadf7b27040069f1a91c3a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 00:26:50.679031 containerd[1569]: time="2025-09-09T00:26:50.678856301Z" level=info msg="CreateContainer within sandbox \"924d992cc0a1881234406b6591de840b8f445bb907a4bd376fa6800951d1d431\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 00:26:50.689427 containerd[1569]: time="2025-09-09T00:26:50.689357693Z" level=info msg="Container d915e730b08a93735fa04b5da379093220b2824c7049dc613a17302eae6c173c: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:26:50.695528 containerd[1569]: time="2025-09-09T00:26:50.695446536Z" level=info msg="Container 1fc25e23100f4557a2b36fa4273760ce8b2795c2490852ecaebfdfc4d5005c54: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:26:50.700066 containerd[1569]: time="2025-09-09T00:26:50.699995012Z" level=info msg="Container 172236cd7d019231550020ecc1eb3d08e5891aedcfe1366ec1540bbacb182b82: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:26:50.709061 containerd[1569]: time="2025-09-09T00:26:50.708982866Z" level=info msg="CreateContainer within sandbox \"8aeeecb93f4de7a44114696f69155e90bb2c9cdf2a1fb0a9d446170a8d1fc3ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d915e730b08a93735fa04b5da379093220b2824c7049dc613a17302eae6c173c\""
Sep 9 00:26:50.709937 containerd[1569]: time="2025-09-09T00:26:50.709887379Z" level=info msg="StartContainer for \"d915e730b08a93735fa04b5da379093220b2824c7049dc613a17302eae6c173c\""
Sep 9 00:26:50.711217 containerd[1569]: time="2025-09-09T00:26:50.711189325Z" level=info msg="connecting to shim d915e730b08a93735fa04b5da379093220b2824c7049dc613a17302eae6c173c" address="unix:///run/containerd/s/903e8a574f3dd0dc8461a7c7a53e68d08d70ce76f05728344f9d84078cd87910" protocol=ttrpc version=3
Sep 9 00:26:50.716627 containerd[1569]: time="2025-09-09T00:26:50.716565508Z" level=info msg="CreateContainer within sandbox \"924d992cc0a1881234406b6591de840b8f445bb907a4bd376fa6800951d1d431\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"172236cd7d019231550020ecc1eb3d08e5891aedcfe1366ec1540bbacb182b82\""
Sep 9 00:26:50.717247 containerd[1569]: time="2025-09-09T00:26:50.717206733Z" level=info msg="StartContainer for \"172236cd7d019231550020ecc1eb3d08e5891aedcfe1366ec1540bbacb182b82\""
Sep 9 00:26:50.718674 containerd[1569]: time="2025-09-09T00:26:50.718626632Z" level=info msg="CreateContainer within sandbox \"3df267779652e155f2aa66f2f1677d67361b342885dadf7b27040069f1a91c3a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1fc25e23100f4557a2b36fa4273760ce8b2795c2490852ecaebfdfc4d5005c54\""
Sep 9 00:26:50.719246 containerd[1569]: time="2025-09-09T00:26:50.719216630Z" level=info msg="StartContainer for \"1fc25e23100f4557a2b36fa4273760ce8b2795c2490852ecaebfdfc4d5005c54\""
Sep 9 00:26:50.719817 containerd[1569]: time="2025-09-09T00:26:50.719642145Z" level=info msg="connecting to shim 172236cd7d019231550020ecc1eb3d08e5891aedcfe1366ec1540bbacb182b82" address="unix:///run/containerd/s/087b439e29a6b26c29180c797aa939f11371b046ca19304cd765c140e5e84420" protocol=ttrpc version=3
Sep 9 00:26:50.721150 containerd[1569]: time="2025-09-09T00:26:50.721060112Z" level=info msg="connecting to shim 1fc25e23100f4557a2b36fa4273760ce8b2795c2490852ecaebfdfc4d5005c54" address="unix:///run/containerd/s/8f0e4e679b5137fed4342fddb7f8afc19d3ff8e407db701d0594aeca0ffb84b6" protocol=ttrpc version=3
Sep 9 00:26:50.744769 systemd[1]: Started cri-containerd-d915e730b08a93735fa04b5da379093220b2824c7049dc613a17302eae6c173c.scope - libcontainer container d915e730b08a93735fa04b5da379093220b2824c7049dc613a17302eae6c173c.
Sep 9 00:26:50.764735 kubelet[2375]: E0909 00:26:50.764637 2375 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 9 00:26:50.775764 systemd[1]: Started cri-containerd-172236cd7d019231550020ecc1eb3d08e5891aedcfe1366ec1540bbacb182b82.scope - libcontainer container 172236cd7d019231550020ecc1eb3d08e5891aedcfe1366ec1540bbacb182b82.
Sep 9 00:26:50.778089 systemd[1]: Started cri-containerd-1fc25e23100f4557a2b36fa4273760ce8b2795c2490852ecaebfdfc4d5005c54.scope - libcontainer container 1fc25e23100f4557a2b36fa4273760ce8b2795c2490852ecaebfdfc4d5005c54.
Sep 9 00:26:50.881092 containerd[1569]: time="2025-09-09T00:26:50.880962730Z" level=info msg="StartContainer for \"172236cd7d019231550020ecc1eb3d08e5891aedcfe1366ec1540bbacb182b82\" returns successfully"
Sep 9 00:26:50.883082 containerd[1569]: time="2025-09-09T00:26:50.883030236Z" level=info msg="StartContainer for \"1fc25e23100f4557a2b36fa4273760ce8b2795c2490852ecaebfdfc4d5005c54\" returns successfully"
Sep 9 00:26:50.885361 containerd[1569]: time="2025-09-09T00:26:50.885321226Z" level=info msg="StartContainer for \"d915e730b08a93735fa04b5da379093220b2824c7049dc613a17302eae6c173c\" returns successfully"
Sep 9 00:26:50.934452 kubelet[2375]: E0909 00:26:50.934259 2375 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 9 00:26:51.143703 kubelet[2375]: I0909 00:26:51.141701 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:26:51.593955 kubelet[2375]: E0909 00:26:51.593913 2375 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:26:51.594360 kubelet[2375]: E0909 00:26:51.594051 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:51.596122 kubelet[2375]: E0909 00:26:51.596088 2375 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:26:51.596494 kubelet[2375]: E0909 00:26:51.596456 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:51.599548 kubelet[2375]: E0909 00:26:51.599375 2375 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:26:51.599548 kubelet[2375]: E0909 00:26:51.599467 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:52.427001 kubelet[2375]: E0909 00:26:52.426937 2375 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 9 00:26:52.514358 kubelet[2375]: I0909 00:26:52.513556 2375 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 00:26:52.544172 kubelet[2375]: I0909 00:26:52.544138 2375 apiserver.go:52] "Watching apiserver"
Sep 9 00:26:52.556973 kubelet[2375]: I0909 00:26:52.556930 2375 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 00:26:52.556973 kubelet[2375]: I0909 00:26:52.556993 2375 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:52.565415 kubelet[2375]: E0909 00:26:52.565371 2375 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:52.565415 kubelet[2375]: I0909 00:26:52.565407 2375 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:26:52.567188 kubelet[2375]: E0909 00:26:52.567166 2375 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:26:52.567340 kubelet[2375]: I0909 00:26:52.567272 2375 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:26:52.569580 kubelet[2375]: E0909 00:26:52.569549 2375 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:26:52.601243 kubelet[2375]: I0909 00:26:52.601157 2375 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:26:52.602193 kubelet[2375]: I0909 00:26:52.601220 2375 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:26:52.602193 kubelet[2375]: I0909 00:26:52.601594 2375 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:52.604233 kubelet[2375]: E0909 00:26:52.604132 2375 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:26:52.604406 kubelet[2375]: E0909 00:26:52.604314 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:52.606401 kubelet[2375]: E0909 00:26:52.606356 2375 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:26:52.606527 kubelet[2375]: E0909 00:26:52.606493 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:52.608253 kubelet[2375]: E0909 00:26:52.608222 2375 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:52.608397 kubelet[2375]: E0909 00:26:52.608373 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:53.603033 kubelet[2375]: I0909 00:26:53.602988 2375 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:26:53.607626 kubelet[2375]: E0909 00:26:53.607585 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:53.735847 kubelet[2375]: I0909 00:26:53.735798 2375 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:53.741900 kubelet[2375]: E0909 00:26:53.741845 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:54.604379 kubelet[2375]: E0909 00:26:54.604338 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:54.604379 kubelet[2375]: E0909 00:26:54.604388 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:54.756052 systemd[1]: Reload requested from client PID 2676 ('systemctl') (unit session-7.scope)...
Sep 9 00:26:54.756069 systemd[1]: Reloading...
Sep 9 00:26:54.848645 zram_generator::config[2719]: No configuration found.
Sep 9 00:26:55.099294 systemd[1]: Reloading finished in 342 ms.
Sep 9 00:26:55.134144 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:26:55.156009 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 00:26:55.156427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:26:55.156498 systemd[1]: kubelet.service: Consumed 1.594s CPU time, 132.3M memory peak.
Sep 9 00:26:55.165603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:26:55.395338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:26:55.404823 (kubelet)[2764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:26:55.454406 kubelet[2764]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:26:55.454406 kubelet[2764]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:26:55.454406 kubelet[2764]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:26:55.454949 kubelet[2764]: I0909 00:26:55.454435 2764 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:26:55.462864 kubelet[2764]: I0909 00:26:55.462372 2764 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 9 00:26:55.462864 kubelet[2764]: I0909 00:26:55.462424 2764 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:26:55.463695 kubelet[2764]: I0909 00:26:55.463641 2764 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 9 00:26:55.464964 kubelet[2764]: I0909 00:26:55.464933 2764 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 9 00:26:55.467482 kubelet[2764]: I0909 00:26:55.467397 2764 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:26:55.472594 kubelet[2764]: I0909 00:26:55.472556 2764 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 00:26:55.478361 kubelet[2764]: I0909 00:26:55.478328 2764 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:26:55.478721 kubelet[2764]: I0909 00:26:55.478676 2764 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:26:55.478908 kubelet[2764]: I0909 00:26:55.478706 2764 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:26:55.478908 kubelet[2764]: I0909 00:26:55.478897 2764 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:26:55.478908 kubelet[2764]: I0909 00:26:55.478907 2764 container_manager_linux.go:303] "Creating device plugin manager"
Sep 9 00:26:55.479078 kubelet[2764]: I0909 00:26:55.478952 2764 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:26:55.479172 kubelet[2764]: I0909 00:26:55.479143 2764 kubelet.go:480] "Attempting to sync node with API server"
Sep 9 00:26:55.479214 kubelet[2764]: I0909 00:26:55.479183 2764 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:26:55.479214 kubelet[2764]: I0909 00:26:55.479214 2764 kubelet.go:386] "Adding apiserver pod source"
Sep 9 00:26:55.479279 kubelet[2764]: I0909 00:26:55.479231 2764 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:26:55.487285 kubelet[2764]: I0909 00:26:55.485222 2764 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 00:26:55.487285 kubelet[2764]: I0909 00:26:55.486092 2764 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 9 00:26:55.492631 kubelet[2764]: I0909 00:26:55.492591 2764 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 00:26:55.492791 kubelet[2764]: I0909 00:26:55.492706 2764 server.go:1289] "Started kubelet"
Sep 9 00:26:55.494018 kubelet[2764]: I0909 00:26:55.493963 2764 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:26:55.494250 kubelet[2764]: I0909 00:26:55.494163 2764 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:26:55.495732 kubelet[2764]: I0909 00:26:55.495596 2764 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:26:55.500575 kubelet[2764]: I0909 00:26:55.498925 2764 server.go:317] "Adding debug handlers to kubelet server"
Sep 9 00:26:55.503302 kubelet[2764]: I0909 00:26:55.503255 2764 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:26:55.507560 kubelet[2764]: I0909 00:26:55.507297 2764 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:26:55.508500 kubelet[2764]: E0909 00:26:55.508460 2764 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:26:55.509926 kubelet[2764]: I0909 00:26:55.509887 2764 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 00:26:55.510055 kubelet[2764]: I0909 00:26:55.510030 2764 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 00:26:55.510255 kubelet[2764]: I0909 00:26:55.510227 2764 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:26:55.511697 kubelet[2764]: I0909 00:26:55.511656 2764 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:26:55.513259 kubelet[2764]: I0909 00:26:55.513220 2764 factory.go:223] Registration of the containerd container factory successfully
Sep 9 00:26:55.513259 kubelet[2764]: I0909 00:26:55.513246 2764 factory.go:223] Registration of the systemd container factory successfully
Sep 9 00:26:55.522845 kubelet[2764]: I0909 00:26:55.522766 2764 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:26:55.524760 kubelet[2764]: I0909 00:26:55.524651 2764 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:26:55.524760 kubelet[2764]: I0909 00:26:55.524735 2764 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 9 00:26:55.524860 kubelet[2764]: I0909 00:26:55.524799 2764 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 00:26:55.524860 kubelet[2764]: I0909 00:26:55.524811 2764 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 9 00:26:55.525525 kubelet[2764]: E0909 00:26:55.524897 2764 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:26:55.574231 kubelet[2764]: I0909 00:26:55.574177 2764 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 00:26:55.574231 kubelet[2764]: I0909 00:26:55.574214 2764 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 00:26:55.574410 kubelet[2764]: I0909 00:26:55.574263 2764 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:26:55.574580 kubelet[2764]: I0909 00:26:55.574547 2764 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 00:26:55.574614 kubelet[2764]: I0909 00:26:55.574567 2764 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 00:26:55.574614 kubelet[2764]: I0909 00:26:55.574590 2764 policy_none.go:49] "None policy: Start"
Sep 9 00:26:55.574614 kubelet[2764]: I0909 00:26:55.574603 2764 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 00:26:55.574677 kubelet[2764]: I0909 00:26:55.574615 2764 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:26:55.574771 kubelet[2764]: I0909 00:26:55.574752 2764 state_mem.go:75] "Updated machine memory state"
Sep 9 00:26:55.579738 kubelet[2764]: E0909 00:26:55.579690 2764 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 9 00:26:55.579927 kubelet[2764]: I0909 00:26:55.579909 2764 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:26:55.579967 kubelet[2764]: I0909 00:26:55.579925 2764 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:26:55.581523 kubelet[2764]: I0909 00:26:55.580617 2764 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:26:55.583482 kubelet[2764]: E0909 00:26:55.583183 2764 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 00:26:55.626233 kubelet[2764]: I0909 00:26:55.626197 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:55.626534 kubelet[2764]: I0909 00:26:55.626365 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:26:55.626718 kubelet[2764]: I0909 00:26:55.626492 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:26:55.634445 kubelet[2764]: E0909 00:26:55.634023 2764 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:55.634445 kubelet[2764]: E0909 00:26:55.634257 2764 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:26:55.687752 kubelet[2764]: I0909 00:26:55.687604 2764 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:26:55.694264 kubelet[2764]: I0909 00:26:55.694222 2764 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 9 00:26:55.694480 kubelet[2764]: I0909 00:26:55.694315 2764 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 00:26:55.811796 kubelet[2764]: I0909 00:26:55.811746 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:26:55.811796 kubelet[2764]: I0909 00:26:55.811788 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae0056a3ee2f21f385d60082831f6042-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae0056a3ee2f21f385d60082831f6042\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:26:55.811796 kubelet[2764]: I0909 00:26:55.811813 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae0056a3ee2f21f385d60082831f6042-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae0056a3ee2f21f385d60082831f6042\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:26:55.812003 kubelet[2764]: I0909 00:26:55.811838 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae0056a3ee2f21f385d60082831f6042-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae0056a3ee2f21f385d60082831f6042\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:26:55.812003 kubelet[2764]: I0909 00:26:55.811864 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:55.812003 kubelet[2764]: I0909 00:26:55.811892 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:55.812003 kubelet[2764]: I0909 00:26:55.811919 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:55.812003 kubelet[2764]: I0909 00:26:55.811943 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:55.812133 kubelet[2764]: I0909 00:26:55.811969 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:55.934643 kubelet[2764]: E0909 00:26:55.934587 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:55.934643 kubelet[2764]: E0909 00:26:55.934587 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:55.935699 kubelet[2764]: E0909 00:26:55.935670 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:56.481753 kubelet[2764]: I0909 00:26:56.481674 2764 apiserver.go:52] "Watching apiserver"
Sep 9 00:26:56.510784 kubelet[2764]: I0909 00:26:56.510724 2764 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 00:26:56.549048 kubelet[2764]: I0909 00:26:56.548987 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:26:56.551651 kubelet[2764]: I0909 00:26:56.549528 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:56.551651 kubelet[2764]: I0909 00:26:56.549737 2764 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:26:56.557864 kubelet[2764]: E0909 00:26:56.557792 2764 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:26:56.557864 kubelet[2764]: E0909 00:26:56.557825 2764 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:26:56.558141 kubelet[2764]: E0909 00:26:56.557990 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:56.558141 kubelet[2764]: E0909 00:26:56.558038 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:26:56.558141 kubelet[2764]: E0909 00:26:56.557802 2375
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:26:56.558268 kubelet[2764]: E0909 00:26:56.558191 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:56.570106 kubelet[2764]: I0909 00:26:56.570018 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.569963761 podStartE2EDuration="1.569963761s" podCreationTimestamp="2025-09-09 00:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:26:56.569906322 +0000 UTC m=+1.160964853" watchObservedRunningTime="2025-09-09 00:26:56.569963761 +0000 UTC m=+1.161022301" Sep 9 00:26:56.586651 kubelet[2764]: I0909 00:26:56.586055 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.5860323579999998 podStartE2EDuration="3.586032358s" podCreationTimestamp="2025-09-09 00:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:26:56.584822674 +0000 UTC m=+1.175881164" watchObservedRunningTime="2025-09-09 00:26:56.586032358 +0000 UTC m=+1.177090858" Sep 9 00:26:56.586651 kubelet[2764]: I0909 00:26:56.586156 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.586151373 podStartE2EDuration="3.586151373s" podCreationTimestamp="2025-09-09 00:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:26:56.577983542 +0000 UTC m=+1.169042032" 
watchObservedRunningTime="2025-09-09 00:26:56.586151373 +0000 UTC m=+1.177209873" Sep 9 00:26:57.551335 kubelet[2764]: E0909 00:26:57.551070 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:57.551335 kubelet[2764]: E0909 00:26:57.551135 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:57.551335 kubelet[2764]: E0909 00:26:57.551261 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:58.552692 kubelet[2764]: E0909 00:26:58.552631 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:26:59.238359 kubelet[2764]: E0909 00:26:59.238304 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:00.324942 kubelet[2764]: I0909 00:27:00.324902 2764 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:27:00.325477 kubelet[2764]: I0909 00:27:00.325442 2764 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:27:00.325533 containerd[1569]: time="2025-09-09T00:27:00.325214642Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:27:01.067580 systemd[1]: Created slice kubepods-besteffort-pod0d32b572_b153_49c9_91c7_d6b2bbf0aeb0.slice - libcontainer container kubepods-besteffort-pod0d32b572_b153_49c9_91c7_d6b2bbf0aeb0.slice. 
Sep 9 00:27:01.089104 systemd[1]: Created slice kubepods-besteffort-pod675678f1_3750_49db_8ec0_fd77869c80a4.slice - libcontainer container kubepods-besteffort-pod675678f1_3750_49db_8ec0_fd77869c80a4.slice. Sep 9 00:27:01.145096 kubelet[2764]: I0909 00:27:01.145027 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxlsq\" (UniqueName: \"kubernetes.io/projected/0d32b572-b153-49c9-91c7-d6b2bbf0aeb0-kube-api-access-fxlsq\") pod \"tigera-operator-755d956888-fm75j\" (UID: \"0d32b572-b153-49c9-91c7-d6b2bbf0aeb0\") " pod="tigera-operator/tigera-operator-755d956888-fm75j" Sep 9 00:27:01.145096 kubelet[2764]: I0909 00:27:01.145074 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/675678f1-3750-49db-8ec0-fd77869c80a4-lib-modules\") pod \"kube-proxy-2dg5q\" (UID: \"675678f1-3750-49db-8ec0-fd77869c80a4\") " pod="kube-system/kube-proxy-2dg5q" Sep 9 00:27:01.145096 kubelet[2764]: I0909 00:27:01.145094 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/675678f1-3750-49db-8ec0-fd77869c80a4-kube-proxy\") pod \"kube-proxy-2dg5q\" (UID: \"675678f1-3750-49db-8ec0-fd77869c80a4\") " pod="kube-system/kube-proxy-2dg5q" Sep 9 00:27:01.145096 kubelet[2764]: I0909 00:27:01.145109 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/675678f1-3750-49db-8ec0-fd77869c80a4-xtables-lock\") pod \"kube-proxy-2dg5q\" (UID: \"675678f1-3750-49db-8ec0-fd77869c80a4\") " pod="kube-system/kube-proxy-2dg5q" Sep 9 00:27:01.145355 kubelet[2764]: I0909 00:27:01.145125 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqx75\" (UniqueName: 
\"kubernetes.io/projected/675678f1-3750-49db-8ec0-fd77869c80a4-kube-api-access-jqx75\") pod \"kube-proxy-2dg5q\" (UID: \"675678f1-3750-49db-8ec0-fd77869c80a4\") " pod="kube-system/kube-proxy-2dg5q" Sep 9 00:27:01.145355 kubelet[2764]: I0909 00:27:01.145140 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d32b572-b153-49c9-91c7-d6b2bbf0aeb0-var-lib-calico\") pod \"tigera-operator-755d956888-fm75j\" (UID: \"0d32b572-b153-49c9-91c7-d6b2bbf0aeb0\") " pod="tigera-operator/tigera-operator-755d956888-fm75j" Sep 9 00:27:01.383538 containerd[1569]: time="2025-09-09T00:27:01.383369167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-fm75j,Uid:0d32b572-b153-49c9-91c7-d6b2bbf0aeb0,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:27:01.392798 kubelet[2764]: E0909 00:27:01.392759 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:01.393378 containerd[1569]: time="2025-09-09T00:27:01.393326530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2dg5q,Uid:675678f1-3750-49db-8ec0-fd77869c80a4,Namespace:kube-system,Attempt:0,}" Sep 9 00:27:01.408486 containerd[1569]: time="2025-09-09T00:27:01.408417869Z" level=info msg="connecting to shim 38cda9bcd8ea96cbc351f51f598e7c03b6de081aa7abca125fc65835504e3985" address="unix:///run/containerd/s/28094e103f60a36a12af90dc428607890f99d4716fb7e0169bde23e07b25df4f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:27:01.428401 containerd[1569]: time="2025-09-09T00:27:01.428329291Z" level=info msg="connecting to shim 0f3c53724f58e5d84036439f1be770169a7ba7220048928f655274e01fd715a9" address="unix:///run/containerd/s/c8bd8cc485fafcee1f32ca6985d90e840a5dfc4d00f69b6842f77e4a0b990756" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:27:01.447677 
systemd[1]: Started cri-containerd-38cda9bcd8ea96cbc351f51f598e7c03b6de081aa7abca125fc65835504e3985.scope - libcontainer container 38cda9bcd8ea96cbc351f51f598e7c03b6de081aa7abca125fc65835504e3985. Sep 9 00:27:01.453914 systemd[1]: Started cri-containerd-0f3c53724f58e5d84036439f1be770169a7ba7220048928f655274e01fd715a9.scope - libcontainer container 0f3c53724f58e5d84036439f1be770169a7ba7220048928f655274e01fd715a9. Sep 9 00:27:01.515199 containerd[1569]: time="2025-09-09T00:27:01.515122444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2dg5q,Uid:675678f1-3750-49db-8ec0-fd77869c80a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f3c53724f58e5d84036439f1be770169a7ba7220048928f655274e01fd715a9\"" Sep 9 00:27:01.516269 kubelet[2764]: E0909 00:27:01.516231 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:01.524038 containerd[1569]: time="2025-09-09T00:27:01.523919862Z" level=info msg="CreateContainer within sandbox \"0f3c53724f58e5d84036439f1be770169a7ba7220048928f655274e01fd715a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:27:01.524810 containerd[1569]: time="2025-09-09T00:27:01.524756309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-fm75j,Uid:0d32b572-b153-49c9-91c7-d6b2bbf0aeb0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"38cda9bcd8ea96cbc351f51f598e7c03b6de081aa7abca125fc65835504e3985\"" Sep 9 00:27:01.527639 containerd[1569]: time="2025-09-09T00:27:01.527586001Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:27:01.541181 containerd[1569]: time="2025-09-09T00:27:01.541125234Z" level=info msg="Container 48567ee5641ad151597c22c5feef0ea7e3d8e5eb282d50b7ec9979b48e90a5a6: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:27:01.549878 containerd[1569]: time="2025-09-09T00:27:01.549822213Z" 
level=info msg="CreateContainer within sandbox \"0f3c53724f58e5d84036439f1be770169a7ba7220048928f655274e01fd715a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48567ee5641ad151597c22c5feef0ea7e3d8e5eb282d50b7ec9979b48e90a5a6\"" Sep 9 00:27:01.550541 containerd[1569]: time="2025-09-09T00:27:01.550494380Z" level=info msg="StartContainer for \"48567ee5641ad151597c22c5feef0ea7e3d8e5eb282d50b7ec9979b48e90a5a6\"" Sep 9 00:27:01.552451 containerd[1569]: time="2025-09-09T00:27:01.552426341Z" level=info msg="connecting to shim 48567ee5641ad151597c22c5feef0ea7e3d8e5eb282d50b7ec9979b48e90a5a6" address="unix:///run/containerd/s/c8bd8cc485fafcee1f32ca6985d90e840a5dfc4d00f69b6842f77e4a0b990756" protocol=ttrpc version=3 Sep 9 00:27:01.578692 systemd[1]: Started cri-containerd-48567ee5641ad151597c22c5feef0ea7e3d8e5eb282d50b7ec9979b48e90a5a6.scope - libcontainer container 48567ee5641ad151597c22c5feef0ea7e3d8e5eb282d50b7ec9979b48e90a5a6. Sep 9 00:27:01.630088 containerd[1569]: time="2025-09-09T00:27:01.630025530Z" level=info msg="StartContainer for \"48567ee5641ad151597c22c5feef0ea7e3d8e5eb282d50b7ec9979b48e90a5a6\" returns successfully" Sep 9 00:27:02.173560 kubelet[2764]: E0909 00:27:02.173523 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:02.566170 kubelet[2764]: E0909 00:27:02.566099 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:02.566934 kubelet[2764]: E0909 00:27:02.566911 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:02.575947 kubelet[2764]: I0909 00:27:02.575799 2764 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-proxy-2dg5q" podStartSLOduration=1.5757803030000002 podStartE2EDuration="1.575780303s" podCreationTimestamp="2025-09-09 00:27:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:27:02.575435674 +0000 UTC m=+7.166494174" watchObservedRunningTime="2025-09-09 00:27:02.575780303 +0000 UTC m=+7.166838804" Sep 9 00:27:02.630329 kubelet[2764]: E0909 00:27:02.630286 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:03.175019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3148194733.mount: Deactivated successfully. Sep 9 00:27:03.568245 kubelet[2764]: E0909 00:27:03.568197 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:03.568862 kubelet[2764]: E0909 00:27:03.568422 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:03.568862 kubelet[2764]: E0909 00:27:03.568745 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:07.127320 containerd[1569]: time="2025-09-09T00:27:07.127239821Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:07.128167 containerd[1569]: time="2025-09-09T00:27:07.128135876Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 00:27:07.129624 containerd[1569]: time="2025-09-09T00:27:07.129579582Z" level=info 
msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:07.132626 containerd[1569]: time="2025-09-09T00:27:07.132566021Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:07.133299 containerd[1569]: time="2025-09-09T00:27:07.133243766Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 5.605616968s" Sep 9 00:27:07.133299 containerd[1569]: time="2025-09-09T00:27:07.133294241Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 00:27:07.139934 containerd[1569]: time="2025-09-09T00:27:07.139888477Z" level=info msg="CreateContainer within sandbox \"38cda9bcd8ea96cbc351f51f598e7c03b6de081aa7abca125fc65835504e3985\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:27:07.151097 containerd[1569]: time="2025-09-09T00:27:07.151018588Z" level=info msg="Container dc616a9d535651595ba51909355c2571f321be8d9614caf8ab79390eb1449a8f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:27:07.161276 containerd[1569]: time="2025-09-09T00:27:07.161224549Z" level=info msg="CreateContainer within sandbox \"38cda9bcd8ea96cbc351f51f598e7c03b6de081aa7abca125fc65835504e3985\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dc616a9d535651595ba51909355c2571f321be8d9614caf8ab79390eb1449a8f\"" Sep 9 00:27:07.161902 containerd[1569]: 
time="2025-09-09T00:27:07.161877307Z" level=info msg="StartContainer for \"dc616a9d535651595ba51909355c2571f321be8d9614caf8ab79390eb1449a8f\"" Sep 9 00:27:07.162932 containerd[1569]: time="2025-09-09T00:27:07.162893048Z" level=info msg="connecting to shim dc616a9d535651595ba51909355c2571f321be8d9614caf8ab79390eb1449a8f" address="unix:///run/containerd/s/28094e103f60a36a12af90dc428607890f99d4716fb7e0169bde23e07b25df4f" protocol=ttrpc version=3 Sep 9 00:27:07.230713 systemd[1]: Started cri-containerd-dc616a9d535651595ba51909355c2571f321be8d9614caf8ab79390eb1449a8f.scope - libcontainer container dc616a9d535651595ba51909355c2571f321be8d9614caf8ab79390eb1449a8f. Sep 9 00:27:07.358607 containerd[1569]: time="2025-09-09T00:27:07.358490703Z" level=info msg="StartContainer for \"dc616a9d535651595ba51909355c2571f321be8d9614caf8ab79390eb1449a8f\" returns successfully" Sep 9 00:27:07.647429 kubelet[2764]: I0909 00:27:07.647350 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-fm75j" podStartSLOduration=2.039792208 podStartE2EDuration="7.647330975s" podCreationTimestamp="2025-09-09 00:27:00 +0000 UTC" firstStartedPulling="2025-09-09 00:27:01.526564636 +0000 UTC m=+6.117623136" lastFinishedPulling="2025-09-09 00:27:07.134103403 +0000 UTC m=+11.725161903" observedRunningTime="2025-09-09 00:27:07.647278225 +0000 UTC m=+12.238336725" watchObservedRunningTime="2025-09-09 00:27:07.647330975 +0000 UTC m=+12.238389475" Sep 9 00:27:09.247200 kubelet[2764]: E0909 00:27:09.247140 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:09.584047 kubelet[2764]: E0909 00:27:09.583983 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:13.026552 sudo[1779]: 
pam_unix(sudo:session): session closed for user root Sep 9 00:27:13.028642 sshd[1778]: Connection closed by 10.0.0.1 port 45478 Sep 9 00:27:13.029672 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Sep 9 00:27:13.036983 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:27:13.039040 systemd[1]: sshd@6-10.0.0.40:22-10.0.0.1:45478.service: Deactivated successfully. Sep 9 00:27:13.043132 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:27:13.043855 systemd[1]: session-7.scope: Consumed 6.131s CPU time, 226.8M memory peak. Sep 9 00:27:13.048742 systemd-logind[1517]: Removed session 7. Sep 9 00:27:16.668723 systemd[1]: Created slice kubepods-besteffort-pod376dad54_1d74_4b46_95b9_a988b68de4c2.slice - libcontainer container kubepods-besteffort-pod376dad54_1d74_4b46_95b9_a988b68de4c2.slice. Sep 9 00:27:16.754223 kubelet[2764]: I0909 00:27:16.754141 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvmdx\" (UniqueName: \"kubernetes.io/projected/376dad54-1d74-4b46-95b9-a988b68de4c2-kube-api-access-vvmdx\") pod \"calico-typha-5bb7d4c6-tnclx\" (UID: \"376dad54-1d74-4b46-95b9-a988b68de4c2\") " pod="calico-system/calico-typha-5bb7d4c6-tnclx" Sep 9 00:27:16.754223 kubelet[2764]: I0909 00:27:16.754199 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/376dad54-1d74-4b46-95b9-a988b68de4c2-typha-certs\") pod \"calico-typha-5bb7d4c6-tnclx\" (UID: \"376dad54-1d74-4b46-95b9-a988b68de4c2\") " pod="calico-system/calico-typha-5bb7d4c6-tnclx" Sep 9 00:27:16.754844 kubelet[2764]: I0909 00:27:16.754294 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/376dad54-1d74-4b46-95b9-a988b68de4c2-tigera-ca-bundle\") pod \"calico-typha-5bb7d4c6-tnclx\" 
(UID: \"376dad54-1d74-4b46-95b9-a988b68de4c2\") " pod="calico-system/calico-typha-5bb7d4c6-tnclx" Sep 9 00:27:16.821744 systemd[1]: Created slice kubepods-besteffort-pod619f08b8_efbd_4edb_b71f_4f4d6da90262.slice - libcontainer container kubepods-besteffort-pod619f08b8_efbd_4edb_b71f_4f4d6da90262.slice. Sep 9 00:27:16.855085 kubelet[2764]: I0909 00:27:16.855019 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/619f08b8-efbd-4edb-b71f-4f4d6da90262-cni-log-dir\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.855085 kubelet[2764]: I0909 00:27:16.855080 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/619f08b8-efbd-4edb-b71f-4f4d6da90262-tigera-ca-bundle\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.855300 kubelet[2764]: I0909 00:27:16.855203 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/619f08b8-efbd-4edb-b71f-4f4d6da90262-flexvol-driver-host\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.855300 kubelet[2764]: I0909 00:27:16.855232 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l8gp\" (UniqueName: \"kubernetes.io/projected/619f08b8-efbd-4edb-b71f-4f4d6da90262-kube-api-access-9l8gp\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.856527 kubelet[2764]: I0909 00:27:16.855419 2764 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/619f08b8-efbd-4edb-b71f-4f4d6da90262-policysync\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.856527 kubelet[2764]: I0909 00:27:16.855517 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/619f08b8-efbd-4edb-b71f-4f4d6da90262-cni-bin-dir\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.856527 kubelet[2764]: I0909 00:27:16.855546 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/619f08b8-efbd-4edb-b71f-4f4d6da90262-cni-net-dir\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.856527 kubelet[2764]: I0909 00:27:16.855584 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/619f08b8-efbd-4edb-b71f-4f4d6da90262-lib-modules\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.856527 kubelet[2764]: I0909 00:27:16.855605 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/619f08b8-efbd-4edb-b71f-4f4d6da90262-node-certs\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.856673 kubelet[2764]: I0909 00:27:16.855631 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/619f08b8-efbd-4edb-b71f-4f4d6da90262-var-lib-calico\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.856673 kubelet[2764]: I0909 00:27:16.855651 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/619f08b8-efbd-4edb-b71f-4f4d6da90262-xtables-lock\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.856673 kubelet[2764]: I0909 00:27:16.855677 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/619f08b8-efbd-4edb-b71f-4f4d6da90262-var-run-calico\") pod \"calico-node-5bdbf\" (UID: \"619f08b8-efbd-4edb-b71f-4f4d6da90262\") " pod="calico-system/calico-node-5bdbf" Sep 9 00:27:16.922692 kubelet[2764]: E0909 00:27:16.921805 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4gtb" podUID="0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf" Sep 9 00:27:16.957058 kubelet[2764]: I0909 00:27:16.957005 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf-registration-dir\") pod \"csi-node-driver-t4gtb\" (UID: \"0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf\") " pod="calico-system/csi-node-driver-t4gtb" Sep 9 00:27:16.957241 kubelet[2764]: I0909 00:27:16.957116 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf-socket-dir\") pod \"csi-node-driver-t4gtb\" (UID: \"0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf\") " pod="calico-system/csi-node-driver-t4gtb" Sep 9 00:27:16.957300 kubelet[2764]: I0909 00:27:16.957251 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf4sp\" (UniqueName: \"kubernetes.io/projected/0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf-kube-api-access-rf4sp\") pod \"csi-node-driver-t4gtb\" (UID: \"0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf\") " pod="calico-system/csi-node-driver-t4gtb" Sep 9 00:27:16.957408 kubelet[2764]: I0909 00:27:16.957325 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf-kubelet-dir\") pod \"csi-node-driver-t4gtb\" (UID: \"0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf\") " pod="calico-system/csi-node-driver-t4gtb" Sep 9 00:27:16.957408 kubelet[2764]: I0909 00:27:16.957365 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf-varrun\") pod \"csi-node-driver-t4gtb\" (UID: \"0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf\") " pod="calico-system/csi-node-driver-t4gtb" Sep 9 00:27:16.963049 kubelet[2764]: E0909 00:27:16.963012 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:16.963049 kubelet[2764]: W0909 00:27:16.963046 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:16.963173 kubelet[2764]: E0909 00:27:16.963119 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:16.963580 kubelet[2764]: E0909 00:27:16.963553 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:16.963580 kubelet[2764]: W0909 00:27:16.963568 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:16.963652 kubelet[2764]: E0909 00:27:16.963581 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:16.969560 kubelet[2764]: E0909 00:27:16.969426 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:16.969560 kubelet[2764]: W0909 00:27:16.969458 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:16.969560 kubelet[2764]: E0909 00:27:16.969483 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:16.972845 kubelet[2764]: E0909 00:27:16.972804 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:16.975959 containerd[1569]: time="2025-09-09T00:27:16.975895970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb7d4c6-tnclx,Uid:376dad54-1d74-4b46-95b9-a988b68de4c2,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:17.023625 containerd[1569]: time="2025-09-09T00:27:17.023541601Z" level=info msg="connecting to shim 8e9337158fe6604c920d7dfcec63f8623efd8024372fd2e2fe181e356180f8fa" address="unix:///run/containerd/s/c3465cbb955f7dd04bf69989e6fe14c471a12e93b57d13c651c2f65436ebf855" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:27:17.056008 systemd[1]: Started cri-containerd-8e9337158fe6604c920d7dfcec63f8623efd8024372fd2e2fe181e356180f8fa.scope - libcontainer container 8e9337158fe6604c920d7dfcec63f8623efd8024372fd2e2fe181e356180f8fa. Sep 9 00:27:17.058665 kubelet[2764]: E0909 00:27:17.058594 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.058665 kubelet[2764]: W0909 00:27:17.058624 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.058665 kubelet[2764]: E0909 00:27:17.058646 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.060741 kubelet[2764]: E0909 00:27:17.060702 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.060741 kubelet[2764]: W0909 00:27:17.060729 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.060741 kubelet[2764]: E0909 00:27:17.060748 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.061431 kubelet[2764]: E0909 00:27:17.061379 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.061431 kubelet[2764]: W0909 00:27:17.061400 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.061431 kubelet[2764]: E0909 00:27:17.061412 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.061712 kubelet[2764]: E0909 00:27:17.061674 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.061712 kubelet[2764]: W0909 00:27:17.061691 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.061712 kubelet[2764]: E0909 00:27:17.061701 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.061993 kubelet[2764]: E0909 00:27:17.061976 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.061993 kubelet[2764]: W0909 00:27:17.061989 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.062169 kubelet[2764]: E0909 00:27:17.061999 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.062264 kubelet[2764]: E0909 00:27:17.062245 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.062264 kubelet[2764]: W0909 00:27:17.062257 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.062401 kubelet[2764]: E0909 00:27:17.062267 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.062499 kubelet[2764]: E0909 00:27:17.062482 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.062499 kubelet[2764]: W0909 00:27:17.062495 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.062593 kubelet[2764]: E0909 00:27:17.062529 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.062928 kubelet[2764]: E0909 00:27:17.062909 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.062928 kubelet[2764]: W0909 00:27:17.062924 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.062991 kubelet[2764]: E0909 00:27:17.062934 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.063198 kubelet[2764]: E0909 00:27:17.063180 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.063198 kubelet[2764]: W0909 00:27:17.063195 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.063293 kubelet[2764]: E0909 00:27:17.063235 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.063558 kubelet[2764]: E0909 00:27:17.063532 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.063558 kubelet[2764]: W0909 00:27:17.063550 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.063664 kubelet[2764]: E0909 00:27:17.063563 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.063895 kubelet[2764]: E0909 00:27:17.063863 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.063933 kubelet[2764]: W0909 00:27:17.063900 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.063933 kubelet[2764]: E0909 00:27:17.063911 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.064145 kubelet[2764]: E0909 00:27:17.064125 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.064211 kubelet[2764]: W0909 00:27:17.064141 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.064211 kubelet[2764]: E0909 00:27:17.064173 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.064396 kubelet[2764]: E0909 00:27:17.064367 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.064461 kubelet[2764]: W0909 00:27:17.064384 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.064461 kubelet[2764]: E0909 00:27:17.064411 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.064691 kubelet[2764]: E0909 00:27:17.064667 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.064691 kubelet[2764]: W0909 00:27:17.064681 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.064785 kubelet[2764]: E0909 00:27:17.064708 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.064918 kubelet[2764]: E0909 00:27:17.064900 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.064918 kubelet[2764]: W0909 00:27:17.064911 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.064918 kubelet[2764]: E0909 00:27:17.064919 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.065130 kubelet[2764]: E0909 00:27:17.065109 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.065130 kubelet[2764]: W0909 00:27:17.065120 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.065130 kubelet[2764]: E0909 00:27:17.065128 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.065336 kubelet[2764]: E0909 00:27:17.065308 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.065336 kubelet[2764]: W0909 00:27:17.065320 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.065481 kubelet[2764]: E0909 00:27:17.065346 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.065621 kubelet[2764]: E0909 00:27:17.065589 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.065621 kubelet[2764]: W0909 00:27:17.065605 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.065621 kubelet[2764]: E0909 00:27:17.065617 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.065924 kubelet[2764]: E0909 00:27:17.065902 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.065924 kubelet[2764]: W0909 00:27:17.065918 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.066020 kubelet[2764]: E0909 00:27:17.065930 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.066213 kubelet[2764]: E0909 00:27:17.066172 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.066213 kubelet[2764]: W0909 00:27:17.066187 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.066339 kubelet[2764]: E0909 00:27:17.066219 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.066434 kubelet[2764]: E0909 00:27:17.066410 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.066434 kubelet[2764]: W0909 00:27:17.066420 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.066434 kubelet[2764]: E0909 00:27:17.066429 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.067294 kubelet[2764]: E0909 00:27:17.067268 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.067294 kubelet[2764]: W0909 00:27:17.067288 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.067391 kubelet[2764]: E0909 00:27:17.067302 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.069061 kubelet[2764]: E0909 00:27:17.069012 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.069398 kubelet[2764]: W0909 00:27:17.069179 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.069398 kubelet[2764]: E0909 00:27:17.069221 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.069804 kubelet[2764]: E0909 00:27:17.069786 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.069878 kubelet[2764]: W0909 00:27:17.069863 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.069965 kubelet[2764]: E0909 00:27:17.069946 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.070800 kubelet[2764]: E0909 00:27:17.070783 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.070907 kubelet[2764]: W0909 00:27:17.070893 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.071009 kubelet[2764]: E0909 00:27:17.070990 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:17.094797 kubelet[2764]: E0909 00:27:17.094677 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:17.094797 kubelet[2764]: W0909 00:27:17.094711 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:17.094797 kubelet[2764]: E0909 00:27:17.094738 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:17.126828 containerd[1569]: time="2025-09-09T00:27:17.126769018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bb7d4c6-tnclx,Uid:376dad54-1d74-4b46-95b9-a988b68de4c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e9337158fe6604c920d7dfcec63f8623efd8024372fd2e2fe181e356180f8fa\"" Sep 9 00:27:17.127663 kubelet[2764]: E0909 00:27:17.127627 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:17.127987 containerd[1569]: time="2025-09-09T00:27:17.127958242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5bdbf,Uid:619f08b8-efbd-4edb-b71f-4f4d6da90262,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:17.129250 containerd[1569]: time="2025-09-09T00:27:17.129226605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:27:17.159227 containerd[1569]: time="2025-09-09T00:27:17.159165921Z" level=info msg="connecting to shim 8ab49424b957a37e2fc1a8ccd77cfaafa3229842e29b532eda7156f1853b1b14" address="unix:///run/containerd/s/bd0e3d99169cf6b9442bd0cdfe1c87a0cda3f6e1e73e4fe87b6336fb6a98f031" namespace=k8s.io protocol=ttrpc version=3 Sep 9 
00:27:17.185725 systemd[1]: Started cri-containerd-8ab49424b957a37e2fc1a8ccd77cfaafa3229842e29b532eda7156f1853b1b14.scope - libcontainer container 8ab49424b957a37e2fc1a8ccd77cfaafa3229842e29b532eda7156f1853b1b14. Sep 9 00:27:17.220073 containerd[1569]: time="2025-09-09T00:27:17.219999909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5bdbf,Uid:619f08b8-efbd-4edb-b71f-4f4d6da90262,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ab49424b957a37e2fc1a8ccd77cfaafa3229842e29b532eda7156f1853b1b14\"" Sep 9 00:27:18.525234 kubelet[2764]: E0909 00:27:18.525172 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4gtb" podUID="0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf" Sep 9 00:27:19.109443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount10379813.mount: Deactivated successfully. 
Sep 9 00:27:20.017081 containerd[1569]: time="2025-09-09T00:27:20.016985387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:20.018745 containerd[1569]: time="2025-09-09T00:27:20.018695527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 00:27:20.020559 containerd[1569]: time="2025-09-09T00:27:20.020416008Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:20.023675 containerd[1569]: time="2025-09-09T00:27:20.023594887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:20.026528 containerd[1569]: time="2025-09-09T00:27:20.024668574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.895399349s" Sep 9 00:27:20.026528 containerd[1569]: time="2025-09-09T00:27:20.024740088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 00:27:20.027995 containerd[1569]: time="2025-09-09T00:27:20.027962449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:27:20.044104 containerd[1569]: time="2025-09-09T00:27:20.044034568Z" level=info msg="CreateContainer within sandbox \"8e9337158fe6604c920d7dfcec63f8623efd8024372fd2e2fe181e356180f8fa\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:27:20.063106 containerd[1569]: time="2025-09-09T00:27:20.063037942Z" level=info msg="Container ea74cbdff85f9d06699ada6cb08620bf978f27cbcc665e2f6ded928868c39c7b: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:27:20.076377 containerd[1569]: time="2025-09-09T00:27:20.076301628Z" level=info msg="CreateContainer within sandbox \"8e9337158fe6604c920d7dfcec63f8623efd8024372fd2e2fe181e356180f8fa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ea74cbdff85f9d06699ada6cb08620bf978f27cbcc665e2f6ded928868c39c7b\"" Sep 9 00:27:20.077445 containerd[1569]: time="2025-09-09T00:27:20.077401313Z" level=info msg="StartContainer for \"ea74cbdff85f9d06699ada6cb08620bf978f27cbcc665e2f6ded928868c39c7b\"" Sep 9 00:27:20.078880 containerd[1569]: time="2025-09-09T00:27:20.078853861Z" level=info msg="connecting to shim ea74cbdff85f9d06699ada6cb08620bf978f27cbcc665e2f6ded928868c39c7b" address="unix:///run/containerd/s/c3465cbb955f7dd04bf69989e6fe14c471a12e93b57d13c651c2f65436ebf855" protocol=ttrpc version=3 Sep 9 00:27:20.107722 systemd[1]: Started cri-containerd-ea74cbdff85f9d06699ada6cb08620bf978f27cbcc665e2f6ded928868c39c7b.scope - libcontainer container ea74cbdff85f9d06699ada6cb08620bf978f27cbcc665e2f6ded928868c39c7b. 
Sep 9 00:27:20.326633 containerd[1569]: time="2025-09-09T00:27:20.326575763Z" level=info msg="StartContainer for \"ea74cbdff85f9d06699ada6cb08620bf978f27cbcc665e2f6ded928868c39c7b\" returns successfully" Sep 9 00:27:20.526145 kubelet[2764]: E0909 00:27:20.525664 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4gtb" podUID="0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf" Sep 9 00:27:20.615378 kubelet[2764]: E0909 00:27:20.615216 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:20.642325 kubelet[2764]: I0909 00:27:20.642183 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bb7d4c6-tnclx" podStartSLOduration=1.743477019 podStartE2EDuration="4.642161397s" podCreationTimestamp="2025-09-09 00:27:16 +0000 UTC" firstStartedPulling="2025-09-09 00:27:17.128779695 +0000 UTC m=+21.719838195" lastFinishedPulling="2025-09-09 00:27:20.027464073 +0000 UTC m=+24.618522573" observedRunningTime="2025-09-09 00:27:20.640353212 +0000 UTC m=+25.231411732" watchObservedRunningTime="2025-09-09 00:27:20.642161397 +0000 UTC m=+25.233219897" Sep 9 00:27:20.668629 kubelet[2764]: E0909 00:27:20.668577 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:20.668629 kubelet[2764]: W0909 00:27:20.668612 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:20.668629 kubelet[2764]: E0909 00:27:20.668642 2764 plugins.go:703] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:20.669154 kubelet[2764]: E0909 00:27:20.668882 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:20.669154 kubelet[2764]: W0909 00:27:20.668894 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:20.669154 kubelet[2764]: E0909 00:27:20.668905 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:20.669154 kubelet[2764]: E0909 00:27:20.669131 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:20.669154 kubelet[2764]: W0909 00:27:20.669142 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:20.669154 kubelet[2764]: E0909 00:27:20.669153 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:20.669429 kubelet[2764]: E0909 00:27:20.669417 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:20.669429 kubelet[2764]: W0909 00:27:20.669429 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:20.669498 kubelet[2764]: E0909 00:27:20.669441 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:20.669733 kubelet[2764]: E0909 00:27:20.669699 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:20.669733 kubelet[2764]: W0909 00:27:20.669721 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:20.669733 kubelet[2764]: E0909 00:27:20.669734 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:20.669954 kubelet[2764]: E0909 00:27:20.669926 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:20.669954 kubelet[2764]: W0909 00:27:20.669945 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:20.670050 kubelet[2764]: E0909 00:27:20.669958 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:20.670199 kubelet[2764]: E0909 00:27:20.670177 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:20.670199 kubelet[2764]: W0909 00:27:20.670196 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:20.670413 kubelet[2764]: E0909 00:27:20.670209 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 9 00:27:20.670481 kubelet[2764]: E0909 00:27:20.670437 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:27:20.670481 kubelet[2764]: W0909 00:27:20.670450 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:27:20.670481 kubelet[2764]: E0909 00:27:20.670462 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:27:21.617645 kubelet[2764]: I0909 00:27:21.617601 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 00:27:21.618134 kubelet[2764]: E0909 00:27:21.618076 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:27:21.680885 kubelet[2764]: E0909 00:27:21.680835 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:27:21.680885 kubelet[2764]: W0909 00:27:21.680870 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:27:21.680885 kubelet[2764]: E0909 00:27:21.680900 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:27:21.725124 kubelet[2764]: E0909 00:27:21.702963 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:27:21.725124 kubelet[2764]: W0909 00:27:21.702971 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:27:21.725124 kubelet[2764]: E0909 00:27:21.702979 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:21.725124 kubelet[2764]: E0909 00:27:21.703157 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:21.725124 kubelet[2764]: W0909 00:27:21.703168 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:21.725124 kubelet[2764]: E0909 00:27:21.703183 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:21.725557 kubelet[2764]: E0909 00:27:21.703481 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:21.725557 kubelet[2764]: W0909 00:27:21.703491 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:21.725557 kubelet[2764]: E0909 00:27:21.703539 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:27:21.725557 kubelet[2764]: E0909 00:27:21.704082 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:27:21.725557 kubelet[2764]: W0909 00:27:21.704098 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:27:21.725557 kubelet[2764]: E0909 00:27:21.704110 2764 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:27:22.028481 containerd[1569]: time="2025-09-09T00:27:22.028231965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:22.031436 containerd[1569]: time="2025-09-09T00:27:22.031089881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 00:27:22.033839 containerd[1569]: time="2025-09-09T00:27:22.033682107Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:22.037450 containerd[1569]: time="2025-09-09T00:27:22.037319577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:22.038356 containerd[1569]: time="2025-09-09T00:27:22.038284428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.010286283s" Sep 9 00:27:22.038356 containerd[1569]: time="2025-09-09T00:27:22.038334583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 00:27:22.045958 containerd[1569]: time="2025-09-09T00:27:22.045873967Z" level=info msg="CreateContainer within sandbox \"8ab49424b957a37e2fc1a8ccd77cfaafa3229842e29b532eda7156f1853b1b14\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:27:22.067990 containerd[1569]: time="2025-09-09T00:27:22.067674265Z" level=info msg="Container 68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:27:22.086691 containerd[1569]: time="2025-09-09T00:27:22.086619263Z" level=info msg="CreateContainer within sandbox \"8ab49424b957a37e2fc1a8ccd77cfaafa3229842e29b532eda7156f1853b1b14\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c\"" Sep 9 00:27:22.087589 containerd[1569]: time="2025-09-09T00:27:22.087466193Z" level=info msg="StartContainer for \"68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c\"" Sep 9 00:27:22.089584 containerd[1569]: time="2025-09-09T00:27:22.089500072Z" level=info msg="connecting to shim 68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c" address="unix:///run/containerd/s/bd0e3d99169cf6b9442bd0cdfe1c87a0cda3f6e1e73e4fe87b6336fb6a98f031" protocol=ttrpc version=3 Sep 9 00:27:22.123344 systemd[1]: Started cri-containerd-68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c.scope - libcontainer container 68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c. 
Sep 9 00:27:22.195845 systemd[1]: cri-containerd-68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c.scope: Deactivated successfully. Sep 9 00:27:22.197968 containerd[1569]: time="2025-09-09T00:27:22.197928869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c\" id:\"68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c\" pid:3447 exited_at:{seconds:1757377642 nanos:197026003}" Sep 9 00:27:22.519318 containerd[1569]: time="2025-09-09T00:27:22.519248022Z" level=info msg="received exit event container_id:\"68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c\" id:\"68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c\" pid:3447 exited_at:{seconds:1757377642 nanos:197026003}" Sep 9 00:27:22.521676 containerd[1569]: time="2025-09-09T00:27:22.521642077Z" level=info msg="StartContainer for \"68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c\" returns successfully" Sep 9 00:27:22.525273 kubelet[2764]: E0909 00:27:22.525166 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4gtb" podUID="0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf" Sep 9 00:27:22.551518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68973fa5e647e9e395d8a33108e56cede7606ae730a9913bca55ff7db6527c2c-rootfs.mount: Deactivated successfully. 
Sep 9 00:27:22.664572 containerd[1569]: time="2025-09-09T00:27:22.664481598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:27:24.525983 kubelet[2764]: E0909 00:27:24.525885 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4gtb" podUID="0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf" Sep 9 00:27:26.043944 containerd[1569]: time="2025-09-09T00:27:26.043848497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:26.044672 containerd[1569]: time="2025-09-09T00:27:26.044622540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 00:27:26.046255 containerd[1569]: time="2025-09-09T00:27:26.046187607Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:26.048495 containerd[1569]: time="2025-09-09T00:27:26.048460223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:26.049203 containerd[1569]: time="2025-09-09T00:27:26.049176396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.384621681s" Sep 9 00:27:26.049247 containerd[1569]: time="2025-09-09T00:27:26.049209729Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 00:27:26.054882 containerd[1569]: time="2025-09-09T00:27:26.054823876Z" level=info msg="CreateContainer within sandbox \"8ab49424b957a37e2fc1a8ccd77cfaafa3229842e29b532eda7156f1853b1b14\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:27:26.066650 containerd[1569]: time="2025-09-09T00:27:26.066591813Z" level=info msg="Container 5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:27:26.078083 containerd[1569]: time="2025-09-09T00:27:26.078016526Z" level=info msg="CreateContainer within sandbox \"8ab49424b957a37e2fc1a8ccd77cfaafa3229842e29b532eda7156f1853b1b14\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1\"" Sep 9 00:27:26.079205 containerd[1569]: time="2025-09-09T00:27:26.079174138Z" level=info msg="StartContainer for \"5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1\"" Sep 9 00:27:26.082214 containerd[1569]: time="2025-09-09T00:27:26.082146688Z" level=info msg="connecting to shim 5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1" address="unix:///run/containerd/s/bd0e3d99169cf6b9442bd0cdfe1c87a0cda3f6e1e73e4fe87b6336fb6a98f031" protocol=ttrpc version=3 Sep 9 00:27:26.111698 systemd[1]: Started cri-containerd-5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1.scope - libcontainer container 5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1. 
Sep 9 00:27:26.168772 containerd[1569]: time="2025-09-09T00:27:26.168705979Z" level=info msg="StartContainer for \"5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1\" returns successfully" Sep 9 00:27:26.526404 kubelet[2764]: E0909 00:27:26.526205 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4gtb" podUID="0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf" Sep 9 00:27:28.525777 kubelet[2764]: E0909 00:27:28.525689 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t4gtb" podUID="0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf" Sep 9 00:27:28.780694 systemd[1]: cri-containerd-5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1.scope: Deactivated successfully. Sep 9 00:27:28.781182 systemd[1]: cri-containerd-5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1.scope: Consumed 645ms CPU time, 176M memory peak, 3.1M read from disk, 171.3M written to disk. 
Sep 9 00:27:28.782917 containerd[1569]: time="2025-09-09T00:27:28.782820239Z" level=info msg="received exit event container_id:\"5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1\" id:\"5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1\" pid:3505 exited_at:{seconds:1757377648 nanos:782383128}" Sep 9 00:27:28.782917 containerd[1569]: time="2025-09-09T00:27:28.782859803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1\" id:\"5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1\" pid:3505 exited_at:{seconds:1757377648 nanos:782383128}" Sep 9 00:27:28.854410 kubelet[2764]: I0909 00:27:28.853578 2764 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:27:28.862403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5717d44956ee6dde2b6c092b42aef0dd27e41e859e52a0aa97f4d4fa57b692e1-rootfs.mount: Deactivated successfully. Sep 9 00:27:30.581329 systemd[1]: Created slice kubepods-besteffort-pod0a6cadb9_36e6_4cbb_bf0b_2c80c499a1bf.slice - libcontainer container kubepods-besteffort-pod0a6cadb9_36e6_4cbb_bf0b_2c80c499a1bf.slice. Sep 9 00:27:30.583910 containerd[1569]: time="2025-09-09T00:27:30.583859166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4gtb,Uid:0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:30.586604 systemd[1]: Created slice kubepods-besteffort-pod9c06e30f_3ca8_4205_96e1_882cd61294b1.slice - libcontainer container kubepods-besteffort-pod9c06e30f_3ca8_4205_96e1_882cd61294b1.slice. 
Sep 9 00:27:30.684964 kubelet[2764]: I0909 00:27:30.684899 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rzmk\" (UniqueName: \"kubernetes.io/projected/9c06e30f-3ca8-4205-96e1-882cd61294b1-kube-api-access-9rzmk\") pod \"calico-kube-controllers-7947cbcf4b-vr8w7\" (UID: \"9c06e30f-3ca8-4205-96e1-882cd61294b1\") " pod="calico-system/calico-kube-controllers-7947cbcf4b-vr8w7" Sep 9 00:27:30.684964 kubelet[2764]: I0909 00:27:30.684964 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c06e30f-3ca8-4205-96e1-882cd61294b1-tigera-ca-bundle\") pod \"calico-kube-controllers-7947cbcf4b-vr8w7\" (UID: \"9c06e30f-3ca8-4205-96e1-882cd61294b1\") " pod="calico-system/calico-kube-controllers-7947cbcf4b-vr8w7" Sep 9 00:27:30.890215 containerd[1569]: time="2025-09-09T00:27:30.890059244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7947cbcf4b-vr8w7,Uid:9c06e30f-3ca8-4205-96e1-882cd61294b1,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:31.097144 systemd[1]: Created slice kubepods-burstable-pod8da125d6_0b64_44f6_a7b4_cbc14725e524.slice - libcontainer container kubepods-burstable-pod8da125d6_0b64_44f6_a7b4_cbc14725e524.slice. 
Sep 9 00:27:31.189655 kubelet[2764]: I0909 00:27:31.189447 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8da125d6-0b64-44f6-a7b4-cbc14725e524-config-volume\") pod \"coredns-674b8bbfcf-cn528\" (UID: \"8da125d6-0b64-44f6-a7b4-cbc14725e524\") " pod="kube-system/coredns-674b8bbfcf-cn528" Sep 9 00:27:31.189655 kubelet[2764]: I0909 00:27:31.189501 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68vz5\" (UniqueName: \"kubernetes.io/projected/8da125d6-0b64-44f6-a7b4-cbc14725e524-kube-api-access-68vz5\") pod \"coredns-674b8bbfcf-cn528\" (UID: \"8da125d6-0b64-44f6-a7b4-cbc14725e524\") " pod="kube-system/coredns-674b8bbfcf-cn528" Sep 9 00:27:31.334806 containerd[1569]: time="2025-09-09T00:27:31.334724385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:27:31.340778 systemd[1]: Created slice kubepods-besteffort-pod0af48b9c_f66e_4da9_994e_e74d6dd7e90d.slice - libcontainer container kubepods-besteffort-pod0af48b9c_f66e_4da9_994e_e74d6dd7e90d.slice. 
Sep 9 00:27:31.391718 kubelet[2764]: I0909 00:27:31.391647 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw69t\" (UniqueName: \"kubernetes.io/projected/0af48b9c-f66e-4da9-994e-e74d6dd7e90d-kube-api-access-dw69t\") pod \"calico-apiserver-ddcccdf47-j5f98\" (UID: \"0af48b9c-f66e-4da9-994e-e74d6dd7e90d\") " pod="calico-apiserver/calico-apiserver-ddcccdf47-j5f98" Sep 9 00:27:31.391718 kubelet[2764]: I0909 00:27:31.391728 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0af48b9c-f66e-4da9-994e-e74d6dd7e90d-calico-apiserver-certs\") pod \"calico-apiserver-ddcccdf47-j5f98\" (UID: \"0af48b9c-f66e-4da9-994e-e74d6dd7e90d\") " pod="calico-apiserver/calico-apiserver-ddcccdf47-j5f98" Sep 9 00:27:31.602216 systemd[1]: Created slice kubepods-besteffort-pod4429621a_a5cd_4e34_a55b_31610e55d85d.slice - libcontainer container kubepods-besteffort-pod4429621a_a5cd_4e34_a55b_31610e55d85d.slice. Sep 9 00:27:31.608107 containerd[1569]: time="2025-09-09T00:27:31.608041807Z" level=error msg="Failed to destroy network for sandbox \"31e186c00cc401c79d9d3aadbd32265c8fbe86d808962d1efd2d12718a57ea76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:31.610342 systemd[1]: run-netns-cni\x2df00b1ee6\x2d788a\x2d84a6\x2dd204\x2de3bc7948ce63.mount: Deactivated successfully. 
Sep 9 00:27:31.649245 containerd[1569]: time="2025-09-09T00:27:31.649182541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-j5f98,Uid:0af48b9c-f66e-4da9-994e-e74d6dd7e90d,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:27:31.681267 containerd[1569]: time="2025-09-09T00:27:31.681167566Z" level=error msg="Failed to destroy network for sandbox \"cf40cd2bc3bdf9bb45c5d16b008645116b372ab1ed9b287429845245406b3b4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:31.693679 kubelet[2764]: I0909 00:27:31.693604 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4429621a-a5cd-4e34-a55b-31610e55d85d-config\") pod \"goldmane-54d579b49d-q52gn\" (UID: \"4429621a-a5cd-4e34-a55b-31610e55d85d\") " pod="calico-system/goldmane-54d579b49d-q52gn" Sep 9 00:27:31.694188 kubelet[2764]: I0909 00:27:31.693708 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zblrb\" (UniqueName: \"kubernetes.io/projected/4429621a-a5cd-4e34-a55b-31610e55d85d-kube-api-access-zblrb\") pod \"goldmane-54d579b49d-q52gn\" (UID: \"4429621a-a5cd-4e34-a55b-31610e55d85d\") " pod="calico-system/goldmane-54d579b49d-q52gn" Sep 9 00:27:31.694188 kubelet[2764]: I0909 00:27:31.693746 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4429621a-a5cd-4e34-a55b-31610e55d85d-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-q52gn\" (UID: \"4429621a-a5cd-4e34-a55b-31610e55d85d\") " pod="calico-system/goldmane-54d579b49d-q52gn" Sep 9 00:27:31.694188 kubelet[2764]: I0909 00:27:31.693767 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4429621a-a5cd-4e34-a55b-31610e55d85d-goldmane-key-pair\") pod \"goldmane-54d579b49d-q52gn\" (UID: \"4429621a-a5cd-4e34-a55b-31610e55d85d\") " pod="calico-system/goldmane-54d579b49d-q52gn" Sep 9 00:27:31.703881 kubelet[2764]: E0909 00:27:31.703811 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:31.704549 containerd[1569]: time="2025-09-09T00:27:31.704482614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cn528,Uid:8da125d6-0b64-44f6-a7b4-cbc14725e524,Namespace:kube-system,Attempt:0,}" Sep 9 00:27:31.861107 systemd[1]: run-netns-cni\x2d8cd72e1c\x2de80e\x2dbb11\x2d2f38\x2d0d11e3d3694d.mount: Deactivated successfully. Sep 9 00:27:31.913163 systemd[1]: Created slice kubepods-besteffort-podafcc42bb_db92_4dd9_85b7_d4cf431a9e03.slice - libcontainer container kubepods-besteffort-podafcc42bb_db92_4dd9_85b7_d4cf431a9e03.slice. 
Sep 9 00:27:31.996031 kubelet[2764]: I0909 00:27:31.995938 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt92p\" (UniqueName: \"kubernetes.io/projected/afcc42bb-db92-4dd9-85b7-d4cf431a9e03-kube-api-access-jt92p\") pod \"calico-apiserver-ddcccdf47-p9srl\" (UID: \"afcc42bb-db92-4dd9-85b7-d4cf431a9e03\") " pod="calico-apiserver/calico-apiserver-ddcccdf47-p9srl" Sep 9 00:27:31.996031 kubelet[2764]: I0909 00:27:31.996029 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/afcc42bb-db92-4dd9-85b7-d4cf431a9e03-calico-apiserver-certs\") pod \"calico-apiserver-ddcccdf47-p9srl\" (UID: \"afcc42bb-db92-4dd9-85b7-d4cf431a9e03\") " pod="calico-apiserver/calico-apiserver-ddcccdf47-p9srl" Sep 9 00:27:32.232399 systemd[1]: Created slice kubepods-burstable-pod734cf28b_2429_47be_8f5f_838bba2bec22.slice - libcontainer container kubepods-burstable-pod734cf28b_2429_47be_8f5f_838bba2bec22.slice. 
Sep 9 00:27:32.299147 kubelet[2764]: I0909 00:27:32.299038 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/734cf28b-2429-47be-8f5f-838bba2bec22-config-volume\") pod \"coredns-674b8bbfcf-fzvbd\" (UID: \"734cf28b-2429-47be-8f5f-838bba2bec22\") " pod="kube-system/coredns-674b8bbfcf-fzvbd" Sep 9 00:27:32.299147 kubelet[2764]: I0909 00:27:32.299116 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvd2h\" (UniqueName: \"kubernetes.io/projected/734cf28b-2429-47be-8f5f-838bba2bec22-kube-api-access-mvd2h\") pod \"coredns-674b8bbfcf-fzvbd\" (UID: \"734cf28b-2429-47be-8f5f-838bba2bec22\") " pod="kube-system/coredns-674b8bbfcf-fzvbd" Sep 9 00:27:32.424862 containerd[1569]: time="2025-09-09T00:27:32.424730421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4gtb,Uid:0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31e186c00cc401c79d9d3aadbd32265c8fbe86d808962d1efd2d12718a57ea76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:32.506759 containerd[1569]: time="2025-09-09T00:27:32.506597585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-q52gn,Uid:4429621a-a5cd-4e34-a55b-31610e55d85d,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:32.517360 containerd[1569]: time="2025-09-09T00:27:32.517293604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-p9srl,Uid:afcc42bb-db92-4dd9-85b7-d4cf431a9e03,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:27:32.535618 kubelet[2764]: E0909 00:27:32.535550 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:32.536275 containerd[1569]: time="2025-09-09T00:27:32.536197112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fzvbd,Uid:734cf28b-2429-47be-8f5f-838bba2bec22,Namespace:kube-system,Attempt:0,}" Sep 9 00:27:32.619796 kubelet[2764]: E0909 00:27:32.619708 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31e186c00cc401c79d9d3aadbd32265c8fbe86d808962d1efd2d12718a57ea76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:32.620418 kubelet[2764]: E0909 00:27:32.620390 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31e186c00cc401c79d9d3aadbd32265c8fbe86d808962d1efd2d12718a57ea76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t4gtb" Sep 9 00:27:32.620471 kubelet[2764]: E0909 00:27:32.620426 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31e186c00cc401c79d9d3aadbd32265c8fbe86d808962d1efd2d12718a57ea76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t4gtb" Sep 9 00:27:32.620577 kubelet[2764]: E0909 00:27:32.620541 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-t4gtb_calico-system(0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t4gtb_calico-system(0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31e186c00cc401c79d9d3aadbd32265c8fbe86d808962d1efd2d12718a57ea76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t4gtb" podUID="0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf" Sep 9 00:27:32.761165 systemd[1]: Created slice kubepods-besteffort-podce1387d1_6053_4a63_8e97_31b286a237bd.slice - libcontainer container kubepods-besteffort-podce1387d1_6053_4a63_8e97_31b286a237bd.slice. Sep 9 00:27:32.802609 kubelet[2764]: I0909 00:27:32.802488 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce1387d1-6053-4a63-8e97-31b286a237bd-whisker-backend-key-pair\") pod \"whisker-687598f668-555cg\" (UID: \"ce1387d1-6053-4a63-8e97-31b286a237bd\") " pod="calico-system/whisker-687598f668-555cg" Sep 9 00:27:32.802609 kubelet[2764]: I0909 00:27:32.802579 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce1387d1-6053-4a63-8e97-31b286a237bd-whisker-ca-bundle\") pod \"whisker-687598f668-555cg\" (UID: \"ce1387d1-6053-4a63-8e97-31b286a237bd\") " pod="calico-system/whisker-687598f668-555cg" Sep 9 00:27:32.802609 kubelet[2764]: I0909 00:27:32.802611 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9478v\" (UniqueName: \"kubernetes.io/projected/ce1387d1-6053-4a63-8e97-31b286a237bd-kube-api-access-9478v\") pod \"whisker-687598f668-555cg\" (UID: 
\"ce1387d1-6053-4a63-8e97-31b286a237bd\") " pod="calico-system/whisker-687598f668-555cg" Sep 9 00:27:32.881941 containerd[1569]: time="2025-09-09T00:27:32.881806459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7947cbcf4b-vr8w7,Uid:9c06e30f-3ca8-4205-96e1-882cd61294b1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf40cd2bc3bdf9bb45c5d16b008645116b372ab1ed9b287429845245406b3b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:32.882642 kubelet[2764]: E0909 00:27:32.882188 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf40cd2bc3bdf9bb45c5d16b008645116b372ab1ed9b287429845245406b3b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:32.882642 kubelet[2764]: E0909 00:27:32.882276 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf40cd2bc3bdf9bb45c5d16b008645116b372ab1ed9b287429845245406b3b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7947cbcf4b-vr8w7" Sep 9 00:27:32.882642 kubelet[2764]: E0909 00:27:32.882308 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf40cd2bc3bdf9bb45c5d16b008645116b372ab1ed9b287429845245406b3b4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7947cbcf4b-vr8w7" Sep 9 00:27:32.882776 kubelet[2764]: E0909 00:27:32.882378 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7947cbcf4b-vr8w7_calico-system(9c06e30f-3ca8-4205-96e1-882cd61294b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7947cbcf4b-vr8w7_calico-system(9c06e30f-3ca8-4205-96e1-882cd61294b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf40cd2bc3bdf9bb45c5d16b008645116b372ab1ed9b287429845245406b3b4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7947cbcf4b-vr8w7" podUID="9c06e30f-3ca8-4205-96e1-882cd61294b1" Sep 9 00:27:33.365901 containerd[1569]: time="2025-09-09T00:27:33.365831023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-687598f668-555cg,Uid:ce1387d1-6053-4a63-8e97-31b286a237bd,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:33.532838 containerd[1569]: time="2025-09-09T00:27:33.532749263Z" level=error msg="Failed to destroy network for sandbox \"e1043736d8e5c80ece57950a6a71d0d6c6066e315a1ef7deddb4154c0cd7ffc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:33.788376 containerd[1569]: time="2025-09-09T00:27:33.788280700Z" level=error msg="Failed to destroy network for sandbox \"8784211d9ca6727f1ee6238eb616be9e4f40d3e687885dee6c4076bee285c160\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 9 00:27:33.834584 containerd[1569]: time="2025-09-09T00:27:33.834484999Z" level=error msg="Failed to destroy network for sandbox \"7e556e7dee5f0a2a7319d252ca8524a558e90730d506f1932a20ab2c57bc1e10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:33.910003 systemd[1]: run-netns-cni\x2d7c8fa709\x2d6ac8\x2d8323\x2dcd97\x2d1c3cb6434357.mount: Deactivated successfully. Sep 9 00:27:33.910153 systemd[1]: run-netns-cni\x2d476b63bc\x2d0476\x2d3777\x2dc3fb\x2d1551412c591a.mount: Deactivated successfully. Sep 9 00:27:33.910242 systemd[1]: run-netns-cni\x2d5fc52319\x2d7ed2\x2d33d4\x2d924d\x2d7aca933073a6.mount: Deactivated successfully. Sep 9 00:27:33.920206 containerd[1569]: time="2025-09-09T00:27:33.920071515Z" level=error msg="Failed to destroy network for sandbox \"5ac25f1661d94ce1c53db449a89d1512572e635eea471b9353e411052b75b003\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:33.923839 systemd[1]: run-netns-cni\x2d42750204\x2dda74\x2de791\x2dabfa\x2df3ec13ecbe3a.mount: Deactivated successfully. 
Sep 9 00:27:33.986239 containerd[1569]: time="2025-09-09T00:27:33.986168906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-j5f98,Uid:0af48b9c-f66e-4da9-994e-e74d6dd7e90d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1043736d8e5c80ece57950a6a71d0d6c6066e315a1ef7deddb4154c0cd7ffc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:33.986636 kubelet[2764]: E0909 00:27:33.986581 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1043736d8e5c80ece57950a6a71d0d6c6066e315a1ef7deddb4154c0cd7ffc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:33.987240 kubelet[2764]: E0909 00:27:33.986672 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1043736d8e5c80ece57950a6a71d0d6c6066e315a1ef7deddb4154c0cd7ffc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddcccdf47-j5f98" Sep 9 00:27:33.987240 kubelet[2764]: E0909 00:27:33.986699 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1043736d8e5c80ece57950a6a71d0d6c6066e315a1ef7deddb4154c0cd7ffc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-ddcccdf47-j5f98" Sep 9 00:27:33.987240 kubelet[2764]: E0909 00:27:33.986762 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ddcccdf47-j5f98_calico-apiserver(0af48b9c-f66e-4da9-994e-e74d6dd7e90d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ddcccdf47-j5f98_calico-apiserver(0af48b9c-f66e-4da9-994e-e74d6dd7e90d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1043736d8e5c80ece57950a6a71d0d6c6066e315a1ef7deddb4154c0cd7ffc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ddcccdf47-j5f98" podUID="0af48b9c-f66e-4da9-994e-e74d6dd7e90d" Sep 9 00:27:33.989816 containerd[1569]: time="2025-09-09T00:27:33.989755816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cn528,Uid:8da125d6-0b64-44f6-a7b4-cbc14725e524,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8784211d9ca6727f1ee6238eb616be9e4f40d3e687885dee6c4076bee285c160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:33.990245 kubelet[2764]: E0909 00:27:33.990198 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8784211d9ca6727f1ee6238eb616be9e4f40d3e687885dee6c4076bee285c160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:33.990318 kubelet[2764]: E0909 00:27:33.990269 2764 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8784211d9ca6727f1ee6238eb616be9e4f40d3e687885dee6c4076bee285c160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cn528" Sep 9 00:27:33.990318 kubelet[2764]: E0909 00:27:33.990297 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8784211d9ca6727f1ee6238eb616be9e4f40d3e687885dee6c4076bee285c160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cn528" Sep 9 00:27:33.990415 kubelet[2764]: E0909 00:27:33.990376 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cn528_kube-system(8da125d6-0b64-44f6-a7b4-cbc14725e524)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cn528_kube-system(8da125d6-0b64-44f6-a7b4-cbc14725e524)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8784211d9ca6727f1ee6238eb616be9e4f40d3e687885dee6c4076bee285c160\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cn528" podUID="8da125d6-0b64-44f6-a7b4-cbc14725e524" Sep 9 00:27:33.993712 containerd[1569]: time="2025-09-09T00:27:33.993639723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-q52gn,Uid:4429621a-a5cd-4e34-a55b-31610e55d85d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"7e556e7dee5f0a2a7319d252ca8524a558e90730d506f1932a20ab2c57bc1e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:33.994463 kubelet[2764]: E0909 00:27:33.994406 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e556e7dee5f0a2a7319d252ca8524a558e90730d506f1932a20ab2c57bc1e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:33.994776 kubelet[2764]: E0909 00:27:33.994496 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e556e7dee5f0a2a7319d252ca8524a558e90730d506f1932a20ab2c57bc1e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-q52gn" Sep 9 00:27:33.994776 kubelet[2764]: E0909 00:27:33.994729 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e556e7dee5f0a2a7319d252ca8524a558e90730d506f1932a20ab2c57bc1e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-q52gn" Sep 9 00:27:33.995560 kubelet[2764]: E0909 00:27:33.994826 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-q52gn_calico-system(4429621a-a5cd-4e34-a55b-31610e55d85d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-54d579b49d-q52gn_calico-system(4429621a-a5cd-4e34-a55b-31610e55d85d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e556e7dee5f0a2a7319d252ca8524a558e90730d506f1932a20ab2c57bc1e10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-q52gn" podUID="4429621a-a5cd-4e34-a55b-31610e55d85d" Sep 9 00:27:34.014449 containerd[1569]: time="2025-09-09T00:27:34.014369585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-p9srl,Uid:afcc42bb-db92-4dd9-85b7-d4cf431a9e03,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac25f1661d94ce1c53db449a89d1512572e635eea471b9353e411052b75b003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:34.015669 kubelet[2764]: E0909 00:27:34.015031 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac25f1661d94ce1c53db449a89d1512572e635eea471b9353e411052b75b003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:34.015669 kubelet[2764]: E0909 00:27:34.015121 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac25f1661d94ce1c53db449a89d1512572e635eea471b9353e411052b75b003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-ddcccdf47-p9srl" Sep 9 00:27:34.015669 kubelet[2764]: E0909 00:27:34.015149 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ac25f1661d94ce1c53db449a89d1512572e635eea471b9353e411052b75b003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddcccdf47-p9srl" Sep 9 00:27:34.016004 kubelet[2764]: E0909 00:27:34.015216 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ddcccdf47-p9srl_calico-apiserver(afcc42bb-db92-4dd9-85b7-d4cf431a9e03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ddcccdf47-p9srl_calico-apiserver(afcc42bb-db92-4dd9-85b7-d4cf431a9e03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ac25f1661d94ce1c53db449a89d1512572e635eea471b9353e411052b75b003\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ddcccdf47-p9srl" podUID="afcc42bb-db92-4dd9-85b7-d4cf431a9e03" Sep 9 00:27:34.052126 containerd[1569]: time="2025-09-09T00:27:34.051845955Z" level=error msg="Failed to destroy network for sandbox \"f22acc924121c5853be3db9e186462c98ea2197360c11bc57e3e69145262ddf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:34.056372 containerd[1569]: time="2025-09-09T00:27:34.056319548Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-fzvbd,Uid:734cf28b-2429-47be-8f5f-838bba2bec22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22acc924121c5853be3db9e186462c98ea2197360c11bc57e3e69145262ddf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:34.057034 kubelet[2764]: E0909 00:27:34.056980 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22acc924121c5853be3db9e186462c98ea2197360c11bc57e3e69145262ddf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:34.057103 kubelet[2764]: E0909 00:27:34.057068 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22acc924121c5853be3db9e186462c98ea2197360c11bc57e3e69145262ddf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fzvbd" Sep 9 00:27:34.057140 kubelet[2764]: E0909 00:27:34.057108 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22acc924121c5853be3db9e186462c98ea2197360c11bc57e3e69145262ddf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fzvbd" Sep 9 00:27:34.057181 systemd[1]: run-netns-cni\x2d23e2af37\x2dc422\x2dc5a3\x2ddac5\x2df97f38a9df8a.mount: 
Deactivated successfully. Sep 9 00:27:34.057271 kubelet[2764]: E0909 00:27:34.057185 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fzvbd_kube-system(734cf28b-2429-47be-8f5f-838bba2bec22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fzvbd_kube-system(734cf28b-2429-47be-8f5f-838bba2bec22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f22acc924121c5853be3db9e186462c98ea2197360c11bc57e3e69145262ddf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fzvbd" podUID="734cf28b-2429-47be-8f5f-838bba2bec22" Sep 9 00:27:34.081596 containerd[1569]: time="2025-09-09T00:27:34.081392969Z" level=error msg="Failed to destroy network for sandbox \"7d4d670c197da8d7641f27123cc9f160d0da648199547b9ea76bae002fb63d09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:34.085200 systemd[1]: run-netns-cni\x2d468c4e86\x2db146\x2ddf46\x2d6fa7\x2dcb3c63651f51.mount: Deactivated successfully. 
Sep 9 00:27:34.086283 containerd[1569]: time="2025-09-09T00:27:34.086217350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-687598f668-555cg,Uid:ce1387d1-6053-4a63-8e97-31b286a237bd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d4d670c197da8d7641f27123cc9f160d0da648199547b9ea76bae002fb63d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:34.086674 kubelet[2764]: E0909 00:27:34.086607 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d4d670c197da8d7641f27123cc9f160d0da648199547b9ea76bae002fb63d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:34.086770 kubelet[2764]: E0909 00:27:34.086695 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d4d670c197da8d7641f27123cc9f160d0da648199547b9ea76bae002fb63d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-687598f668-555cg" Sep 9 00:27:34.086770 kubelet[2764]: E0909 00:27:34.086731 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d4d670c197da8d7641f27123cc9f160d0da648199547b9ea76bae002fb63d09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-687598f668-555cg" Sep 9 00:27:34.086834 kubelet[2764]: E0909 00:27:34.086794 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-687598f668-555cg_calico-system(ce1387d1-6053-4a63-8e97-31b286a237bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-687598f668-555cg_calico-system(ce1387d1-6053-4a63-8e97-31b286a237bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d4d670c197da8d7641f27123cc9f160d0da648199547b9ea76bae002fb63d09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-687598f668-555cg" podUID="ce1387d1-6053-4a63-8e97-31b286a237bd" Sep 9 00:27:35.164759 kubelet[2764]: I0909 00:27:35.164659 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:27:35.168899 kubelet[2764]: E0909 00:27:35.168852 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:35.925499 kubelet[2764]: E0909 00:27:35.925434 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:43.339178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829374955.mount: Deactivated successfully. 
Sep 9 00:27:44.526610 kubelet[2764]: E0909 00:27:44.526457 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:44.527615 containerd[1569]: time="2025-09-09T00:27:44.527070343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-687598f668-555cg,Uid:ce1387d1-6053-4a63-8e97-31b286a237bd,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:44.527615 containerd[1569]: time="2025-09-09T00:27:44.527088107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4gtb,Uid:0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:44.528702 containerd[1569]: time="2025-09-09T00:27:44.528635670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fzvbd,Uid:734cf28b-2429-47be-8f5f-838bba2bec22,Namespace:kube-system,Attempt:0,}" Sep 9 00:27:46.526212 containerd[1569]: time="2025-09-09T00:27:46.526148233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-p9srl,Uid:afcc42bb-db92-4dd9-85b7-d4cf431a9e03,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:27:47.061787 containerd[1569]: time="2025-09-09T00:27:46.534600990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cn528,Uid:8da125d6-0b64-44f6-a7b4-cbc14725e524,Namespace:kube-system,Attempt:0,}" Sep 9 00:27:47.061787 containerd[1569]: time="2025-09-09T00:27:46.534755227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7947cbcf4b-vr8w7,Uid:9c06e30f-3ca8-4205-96e1-882cd61294b1,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:47.061897 kubelet[2764]: E0909 00:27:46.526274 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:27:47.527261 containerd[1569]: 
time="2025-09-09T00:27:47.527106705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-j5f98,Uid:0af48b9c-f66e-4da9-994e-e74d6dd7e90d,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:27:47.527261 containerd[1569]: time="2025-09-09T00:27:47.527107567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-q52gn,Uid:4429621a-a5cd-4e34-a55b-31610e55d85d,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:48.225407 containerd[1569]: time="2025-09-09T00:27:48.225326554Z" level=error msg="Failed to destroy network for sandbox \"a15903957aabfc0f2887231068a2c91ba2bb8e7c9aa5d742d7de5ecf3b691a37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:48.285798 containerd[1569]: time="2025-09-09T00:27:48.285694480Z" level=error msg="Failed to destroy network for sandbox \"8c385b01a90e47cf2ed135524921e416e2b42536dcf1c7ee1d66ad303325cc42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:48.368745 containerd[1569]: time="2025-09-09T00:27:48.368632300Z" level=error msg="Failed to destroy network for sandbox \"1c68e2dff2eb4b96118a7df7ade900c0c559cd835d2170089ca2030ce1c3ada6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:48.427071 containerd[1569]: time="2025-09-09T00:27:48.427003538Z" level=error msg="Failed to destroy network for sandbox \"b7a50c56be37b25b608fa4ae633f5e3ca07606c82f6b47bab6574142a77a8ee8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 9 00:27:48.514881 containerd[1569]: time="2025-09-09T00:27:48.514711606Z" level=error msg="Failed to destroy network for sandbox \"ec0d2c285dab88ac5812430c3f06ee440f92148043f27466eda4fd1819fb3278\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:48.571992 systemd[1]: run-netns-cni\x2dc01de5d6\x2d8564\x2d0a03\x2d24be\x2d2fd0d25f4391.mount: Deactivated successfully. Sep 9 00:27:48.572122 systemd[1]: run-netns-cni\x2d84dfb599\x2d857f\x2dbec0\x2d76db\x2d88c2d2e98f4d.mount: Deactivated successfully. Sep 9 00:27:48.572224 systemd[1]: run-netns-cni\x2d1369908c\x2dea01\x2dac09\x2d51f5\x2d1ce1575e3211.mount: Deactivated successfully. Sep 9 00:27:48.572313 systemd[1]: run-netns-cni\x2d5c26ed50\x2dffba\x2d7d3d\x2d23d1\x2d6a46f5599b11.mount: Deactivated successfully. Sep 9 00:27:48.572402 systemd[1]: run-netns-cni\x2d1152d4ca\x2d8a01\x2d77d8\x2de095\x2d695dcf86aa24.mount: Deactivated successfully. Sep 9 00:27:48.866559 containerd[1569]: time="2025-09-09T00:27:48.866473874Z" level=error msg="Failed to destroy network for sandbox \"03128c0d73749e84f81c0cc3a1b9f21a311293e2aca07e301f018ad41a6322f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:48.869729 systemd[1]: run-netns-cni\x2d651a4b0a\x2de62f\x2de5e8\x2d270a\x2df19cfb610104.mount: Deactivated successfully. 
Sep 9 00:27:49.087145 containerd[1569]: time="2025-09-09T00:27:49.086703713Z" level=error msg="Failed to destroy network for sandbox \"4d2dc6ba6c4736b611f3adffdff21417bb618443c58b50530d34c09f25339c3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.089966 systemd[1]: run-netns-cni\x2de87bab39\x2db598\x2de1b3\x2d80e5\x2dca8ad773a46e.mount: Deactivated successfully. Sep 9 00:27:49.103446 containerd[1569]: time="2025-09-09T00:27:49.103357688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-687598f668-555cg,Uid:ce1387d1-6053-4a63-8e97-31b286a237bd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15903957aabfc0f2887231068a2c91ba2bb8e7c9aa5d742d7de5ecf3b691a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.103797 kubelet[2764]: E0909 00:27:49.103722 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15903957aabfc0f2887231068a2c91ba2bb8e7c9aa5d742d7de5ecf3b691a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.104189 kubelet[2764]: E0909 00:27:49.103846 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15903957aabfc0f2887231068a2c91ba2bb8e7c9aa5d742d7de5ecf3b691a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-687598f668-555cg" Sep 9 00:27:49.104189 kubelet[2764]: E0909 00:27:49.103878 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a15903957aabfc0f2887231068a2c91ba2bb8e7c9aa5d742d7de5ecf3b691a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-687598f668-555cg" Sep 9 00:27:49.104189 kubelet[2764]: E0909 00:27:49.103958 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-687598f668-555cg_calico-system(ce1387d1-6053-4a63-8e97-31b286a237bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-687598f668-555cg_calico-system(ce1387d1-6053-4a63-8e97-31b286a237bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a15903957aabfc0f2887231068a2c91ba2bb8e7c9aa5d742d7de5ecf3b691a37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-687598f668-555cg" podUID="ce1387d1-6053-4a63-8e97-31b286a237bd" Sep 9 00:27:49.126807 containerd[1569]: time="2025-09-09T00:27:49.126531113Z" level=error msg="Failed to destroy network for sandbox \"74ca85bc86193884211aa4927fe7d1fb24b68e36d10e4e8e9592f1202f41ecf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.129659 systemd[1]: run-netns-cni\x2dffe63811\x2d0727\x2dabda\x2da518\x2ddf23e0e58913.mount: Deactivated successfully. 
Sep 9 00:27:49.159097 containerd[1569]: time="2025-09-09T00:27:49.158997116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4gtb,Uid:0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c385b01a90e47cf2ed135524921e416e2b42536dcf1c7ee1d66ad303325cc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.159481 kubelet[2764]: E0909 00:27:49.159404 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c385b01a90e47cf2ed135524921e416e2b42536dcf1c7ee1d66ad303325cc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.159569 kubelet[2764]: E0909 00:27:49.159535 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c385b01a90e47cf2ed135524921e416e2b42536dcf1c7ee1d66ad303325cc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t4gtb" Sep 9 00:27:49.159604 kubelet[2764]: E0909 00:27:49.159572 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c385b01a90e47cf2ed135524921e416e2b42536dcf1c7ee1d66ad303325cc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t4gtb" Sep 9 
00:27:49.159693 kubelet[2764]: E0909 00:27:49.159652 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t4gtb_calico-system(0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t4gtb_calico-system(0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c385b01a90e47cf2ed135524921e416e2b42536dcf1c7ee1d66ad303325cc42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t4gtb" podUID="0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf" Sep 9 00:27:49.290265 containerd[1569]: time="2025-09-09T00:27:49.290151532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fzvbd,Uid:734cf28b-2429-47be-8f5f-838bba2bec22,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c68e2dff2eb4b96118a7df7ade900c0c559cd835d2170089ca2030ce1c3ada6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.290767 kubelet[2764]: E0909 00:27:49.290674 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c68e2dff2eb4b96118a7df7ade900c0c559cd835d2170089ca2030ce1c3ada6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.290911 kubelet[2764]: E0909 00:27:49.290828 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"1c68e2dff2eb4b96118a7df7ade900c0c559cd835d2170089ca2030ce1c3ada6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fzvbd" Sep 9 00:27:49.290911 kubelet[2764]: E0909 00:27:49.290898 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c68e2dff2eb4b96118a7df7ade900c0c559cd835d2170089ca2030ce1c3ada6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fzvbd" Sep 9 00:27:49.291116 kubelet[2764]: E0909 00:27:49.291017 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fzvbd_kube-system(734cf28b-2429-47be-8f5f-838bba2bec22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fzvbd_kube-system(734cf28b-2429-47be-8f5f-838bba2bec22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c68e2dff2eb4b96118a7df7ade900c0c559cd835d2170089ca2030ce1c3ada6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fzvbd" podUID="734cf28b-2429-47be-8f5f-838bba2bec22" Sep 9 00:27:49.356285 containerd[1569]: time="2025-09-09T00:27:49.356170945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-p9srl,Uid:afcc42bb-db92-4dd9-85b7-d4cf431a9e03,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a50c56be37b25b608fa4ae633f5e3ca07606c82f6b47bab6574142a77a8ee8\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.356659 kubelet[2764]: E0909 00:27:49.356579 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a50c56be37b25b608fa4ae633f5e3ca07606c82f6b47bab6574142a77a8ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.356848 kubelet[2764]: E0909 00:27:49.356685 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a50c56be37b25b608fa4ae633f5e3ca07606c82f6b47bab6574142a77a8ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddcccdf47-p9srl" Sep 9 00:27:49.356848 kubelet[2764]: E0909 00:27:49.356719 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7a50c56be37b25b608fa4ae633f5e3ca07606c82f6b47bab6574142a77a8ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddcccdf47-p9srl" Sep 9 00:27:49.356848 kubelet[2764]: E0909 00:27:49.356786 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ddcccdf47-p9srl_calico-apiserver(afcc42bb-db92-4dd9-85b7-d4cf431a9e03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-ddcccdf47-p9srl_calico-apiserver(afcc42bb-db92-4dd9-85b7-d4cf431a9e03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7a50c56be37b25b608fa4ae633f5e3ca07606c82f6b47bab6574142a77a8ee8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ddcccdf47-p9srl" podUID="afcc42bb-db92-4dd9-85b7-d4cf431a9e03" Sep 9 00:27:49.426799 containerd[1569]: time="2025-09-09T00:27:49.426576200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cn528,Uid:8da125d6-0b64-44f6-a7b4-cbc14725e524,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0d2c285dab88ac5812430c3f06ee440f92148043f27466eda4fd1819fb3278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.427063 kubelet[2764]: E0909 00:27:49.426986 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0d2c285dab88ac5812430c3f06ee440f92148043f27466eda4fd1819fb3278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.427133 kubelet[2764]: E0909 00:27:49.427088 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0d2c285dab88ac5812430c3f06ee440f92148043f27466eda4fd1819fb3278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-cn528" Sep 9 00:27:49.427133 kubelet[2764]: E0909 00:27:49.427120 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0d2c285dab88ac5812430c3f06ee440f92148043f27466eda4fd1819fb3278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cn528" Sep 9 00:27:49.427285 kubelet[2764]: E0909 00:27:49.427218 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cn528_kube-system(8da125d6-0b64-44f6-a7b4-cbc14725e524)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cn528_kube-system(8da125d6-0b64-44f6-a7b4-cbc14725e524)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec0d2c285dab88ac5812430c3f06ee440f92148043f27466eda4fd1819fb3278\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cn528" podUID="8da125d6-0b64-44f6-a7b4-cbc14725e524" Sep 9 00:27:49.427713 containerd[1569]: time="2025-09-09T00:27:49.427641305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7947cbcf4b-vr8w7,Uid:9c06e30f-3ca8-4205-96e1-882cd61294b1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03128c0d73749e84f81c0cc3a1b9f21a311293e2aca07e301f018ad41a6322f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.428008 kubelet[2764]: E0909 00:27:49.427952 2764 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03128c0d73749e84f81c0cc3a1b9f21a311293e2aca07e301f018ad41a6322f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.428077 kubelet[2764]: E0909 00:27:49.428028 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03128c0d73749e84f81c0cc3a1b9f21a311293e2aca07e301f018ad41a6322f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7947cbcf4b-vr8w7" Sep 9 00:27:49.428077 kubelet[2764]: E0909 00:27:49.428052 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03128c0d73749e84f81c0cc3a1b9f21a311293e2aca07e301f018ad41a6322f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7947cbcf4b-vr8w7" Sep 9 00:27:49.428154 kubelet[2764]: E0909 00:27:49.428111 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7947cbcf4b-vr8w7_calico-system(9c06e30f-3ca8-4205-96e1-882cd61294b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7947cbcf4b-vr8w7_calico-system(9c06e30f-3ca8-4205-96e1-882cd61294b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03128c0d73749e84f81c0cc3a1b9f21a311293e2aca07e301f018ad41a6322f9\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7947cbcf4b-vr8w7" podUID="9c06e30f-3ca8-4205-96e1-882cd61294b1" Sep 9 00:27:49.540108 containerd[1569]: time="2025-09-09T00:27:49.539081241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:49.613404 containerd[1569]: time="2025-09-09T00:27:49.613265715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-j5f98,Uid:0af48b9c-f66e-4da9-994e-e74d6dd7e90d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2dc6ba6c4736b611f3adffdff21417bb618443c58b50530d34c09f25339c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.613739 kubelet[2764]: E0909 00:27:49.613662 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2dc6ba6c4736b611f3adffdff21417bb618443c58b50530d34c09f25339c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.613853 kubelet[2764]: E0909 00:27:49.613742 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2dc6ba6c4736b611f3adffdff21417bb618443c58b50530d34c09f25339c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddcccdf47-j5f98" Sep 9 
00:27:49.613853 kubelet[2764]: E0909 00:27:49.613778 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2dc6ba6c4736b611f3adffdff21417bb618443c58b50530d34c09f25339c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ddcccdf47-j5f98" Sep 9 00:27:49.613925 kubelet[2764]: E0909 00:27:49.613844 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ddcccdf47-j5f98_calico-apiserver(0af48b9c-f66e-4da9-994e-e74d6dd7e90d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ddcccdf47-j5f98_calico-apiserver(0af48b9c-f66e-4da9-994e-e74d6dd7e90d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d2dc6ba6c4736b611f3adffdff21417bb618443c58b50530d34c09f25339c3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ddcccdf47-j5f98" podUID="0af48b9c-f66e-4da9-994e-e74d6dd7e90d" Sep 9 00:27:49.656086 containerd[1569]: time="2025-09-09T00:27:49.655929132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-q52gn,Uid:4429621a-a5cd-4e34-a55b-31610e55d85d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ca85bc86193884211aa4927fe7d1fb24b68e36d10e4e8e9592f1202f41ecf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.656396 kubelet[2764]: E0909 00:27:49.656339 2764 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ca85bc86193884211aa4927fe7d1fb24b68e36d10e4e8e9592f1202f41ecf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:27:49.656462 kubelet[2764]: E0909 00:27:49.656427 2764 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ca85bc86193884211aa4927fe7d1fb24b68e36d10e4e8e9592f1202f41ecf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-q52gn" Sep 9 00:27:49.656523 kubelet[2764]: E0909 00:27:49.656459 2764 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74ca85bc86193884211aa4927fe7d1fb24b68e36d10e4e8e9592f1202f41ecf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-q52gn" Sep 9 00:27:49.656592 kubelet[2764]: E0909 00:27:49.656551 2764 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-q52gn_calico-system(4429621a-a5cd-4e34-a55b-31610e55d85d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-q52gn_calico-system(4429621a-a5cd-4e34-a55b-31610e55d85d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74ca85bc86193884211aa4927fe7d1fb24b68e36d10e4e8e9592f1202f41ecf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-q52gn" podUID="4429621a-a5cd-4e34-a55b-31610e55d85d" Sep 9 00:27:49.731241 containerd[1569]: time="2025-09-09T00:27:49.730993576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:27:49.774153 containerd[1569]: time="2025-09-09T00:27:49.774056189Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:49.799393 containerd[1569]: time="2025-09-09T00:27:49.799307274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:49.800016 containerd[1569]: time="2025-09-09T00:27:49.799952404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 18.465159879s" Sep 9 00:27:49.800072 containerd[1569]: time="2025-09-09T00:27:49.800016237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:27:49.925236 containerd[1569]: time="2025-09-09T00:27:49.925151413Z" level=info msg="CreateContainer within sandbox \"8ab49424b957a37e2fc1a8ccd77cfaafa3229842e29b532eda7156f1853b1b14\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:27:49.956417 containerd[1569]: time="2025-09-09T00:27:49.956345833Z" level=info msg="Container 035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3: 
CDI devices from CRI Config.CDIDevices: []" Sep 9 00:27:50.105123 containerd[1569]: time="2025-09-09T00:27:50.104947571Z" level=info msg="CreateContainer within sandbox \"8ab49424b957a37e2fc1a8ccd77cfaafa3229842e29b532eda7156f1853b1b14\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3\"" Sep 9 00:27:50.107937 containerd[1569]: time="2025-09-09T00:27:50.105713842Z" level=info msg="StartContainer for \"035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3\"" Sep 9 00:27:50.107937 containerd[1569]: time="2025-09-09T00:27:50.107271431Z" level=info msg="connecting to shim 035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3" address="unix:///run/containerd/s/bd0e3d99169cf6b9442bd0cdfe1c87a0cda3f6e1e73e4fe87b6336fb6a98f031" protocol=ttrpc version=3 Sep 9 00:27:50.197789 systemd[1]: Started cri-containerd-035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3.scope - libcontainer container 035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3. Sep 9 00:27:50.538800 containerd[1569]: time="2025-09-09T00:27:50.538738812Z" level=info msg="StartContainer for \"035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3\" returns successfully" Sep 9 00:27:50.584670 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:27:50.585374 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Sep 9 00:27:51.125537 containerd[1569]: time="2025-09-09T00:27:51.125454035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3\" id:\"e76ec8951efd7ed3fc0f63f526da3495bf874e594aadced39a9a8a5720f4caa5\" pid:4131 exit_status:1 exited_at:{seconds:1757377671 nanos:125087272}" Sep 9 00:27:51.556698 kubelet[2764]: I0909 00:27:51.555436 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5bdbf" podStartSLOduration=2.975246936 podStartE2EDuration="35.554936807s" podCreationTimestamp="2025-09-09 00:27:16 +0000 UTC" firstStartedPulling="2025-09-09 00:27:17.221210994 +0000 UTC m=+21.812269494" lastFinishedPulling="2025-09-09 00:27:49.800900865 +0000 UTC m=+54.391959365" observedRunningTime="2025-09-09 00:27:51.553137515 +0000 UTC m=+56.144196025" watchObservedRunningTime="2025-09-09 00:27:51.554936807 +0000 UTC m=+56.145995307" Sep 9 00:27:51.735473 kubelet[2764]: I0909 00:27:51.735398 2764 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce1387d1-6053-4a63-8e97-31b286a237bd-whisker-backend-key-pair\") pod \"ce1387d1-6053-4a63-8e97-31b286a237bd\" (UID: \"ce1387d1-6053-4a63-8e97-31b286a237bd\") " Sep 9 00:27:51.735473 kubelet[2764]: I0909 00:27:51.735450 2764 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9478v\" (UniqueName: \"kubernetes.io/projected/ce1387d1-6053-4a63-8e97-31b286a237bd-kube-api-access-9478v\") pod \"ce1387d1-6053-4a63-8e97-31b286a237bd\" (UID: \"ce1387d1-6053-4a63-8e97-31b286a237bd\") " Sep 9 00:27:51.735473 kubelet[2764]: I0909 00:27:51.735472 2764 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce1387d1-6053-4a63-8e97-31b286a237bd-whisker-ca-bundle\") pod 
\"ce1387d1-6053-4a63-8e97-31b286a237bd\" (UID: \"ce1387d1-6053-4a63-8e97-31b286a237bd\") " Sep 9 00:27:51.736812 kubelet[2764]: I0909 00:27:51.736767 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1387d1-6053-4a63-8e97-31b286a237bd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ce1387d1-6053-4a63-8e97-31b286a237bd" (UID: "ce1387d1-6053-4a63-8e97-31b286a237bd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:27:51.741433 kubelet[2764]: I0909 00:27:51.741371 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1387d1-6053-4a63-8e97-31b286a237bd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ce1387d1-6053-4a63-8e97-31b286a237bd" (UID: "ce1387d1-6053-4a63-8e97-31b286a237bd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:27:51.741433 kubelet[2764]: I0909 00:27:51.741416 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1387d1-6053-4a63-8e97-31b286a237bd-kube-api-access-9478v" (OuterVolumeSpecName: "kube-api-access-9478v") pod "ce1387d1-6053-4a63-8e97-31b286a237bd" (UID: "ce1387d1-6053-4a63-8e97-31b286a237bd"). InnerVolumeSpecName "kube-api-access-9478v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:27:51.742960 systemd[1]: var-lib-kubelet-pods-ce1387d1\x2d6053\x2d4a63\x2d8e97\x2d31b286a237bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9478v.mount: Deactivated successfully. Sep 9 00:27:51.743112 systemd[1]: var-lib-kubelet-pods-ce1387d1\x2d6053\x2d4a63\x2d8e97\x2d31b286a237bd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 9 00:27:51.836080 kubelet[2764]: I0909 00:27:51.835893 2764 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9478v\" (UniqueName: \"kubernetes.io/projected/ce1387d1-6053-4a63-8e97-31b286a237bd-kube-api-access-9478v\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:51.836080 kubelet[2764]: I0909 00:27:51.835944 2764 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce1387d1-6053-4a63-8e97-31b286a237bd-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:51.836080 kubelet[2764]: I0909 00:27:51.835956 2764 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce1387d1-6053-4a63-8e97-31b286a237bd-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:27:51.979209 systemd[1]: Removed slice kubepods-besteffort-podce1387d1_6053_4a63_8e97_31b286a237bd.slice - libcontainer container kubepods-besteffort-podce1387d1_6053_4a63_8e97_31b286a237bd.slice. Sep 9 00:27:52.085537 containerd[1569]: time="2025-09-09T00:27:52.085080262Z" level=info msg="TaskExit event in podsandbox handler container_id:\"035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3\" id:\"5f2804bb77e28116e08c6429f8cfae7dacbbcb15a082a80488b6ed8e3cb5d8c1\" pid:4165 exit_status:1 exited_at:{seconds:1757377672 nanos:83693032}" Sep 9 00:27:53.006589 systemd[1]: Created slice kubepods-besteffort-podf5fa6469_7a44_4ac3_83f6_d16df4a39e39.slice - libcontainer container kubepods-besteffort-podf5fa6469_7a44_4ac3_83f6_d16df4a39e39.slice. 
Sep 9 00:27:53.144046 kubelet[2764]: I0909 00:27:53.143972 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5fa6469-7a44-4ac3-83f6-d16df4a39e39-whisker-ca-bundle\") pod \"whisker-67775dcb87-njkg7\" (UID: \"f5fa6469-7a44-4ac3-83f6-d16df4a39e39\") " pod="calico-system/whisker-67775dcb87-njkg7" Sep 9 00:27:53.144046 kubelet[2764]: I0909 00:27:53.144022 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5fa6469-7a44-4ac3-83f6-d16df4a39e39-whisker-backend-key-pair\") pod \"whisker-67775dcb87-njkg7\" (UID: \"f5fa6469-7a44-4ac3-83f6-d16df4a39e39\") " pod="calico-system/whisker-67775dcb87-njkg7" Sep 9 00:27:53.144046 kubelet[2764]: I0909 00:27:53.144050 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsj9g\" (UniqueName: \"kubernetes.io/projected/f5fa6469-7a44-4ac3-83f6-d16df4a39e39-kube-api-access-dsj9g\") pod \"whisker-67775dcb87-njkg7\" (UID: \"f5fa6469-7a44-4ac3-83f6-d16df4a39e39\") " pod="calico-system/whisker-67775dcb87-njkg7" Sep 9 00:27:53.533039 kubelet[2764]: I0909 00:27:53.532959 2764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce1387d1-6053-4a63-8e97-31b286a237bd" path="/var/lib/kubelet/pods/ce1387d1-6053-4a63-8e97-31b286a237bd/volumes" Sep 9 00:27:53.610278 containerd[1569]: time="2025-09-09T00:27:53.610205520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67775dcb87-njkg7,Uid:f5fa6469-7a44-4ac3-83f6-d16df4a39e39,Namespace:calico-system,Attempt:0,}" Sep 9 00:27:54.152791 systemd-networkd[1470]: vxlan.calico: Link UP Sep 9 00:27:54.152802 systemd-networkd[1470]: vxlan.calico: Gained carrier Sep 9 00:27:55.854709 systemd-networkd[1470]: vxlan.calico: Gained IPv6LL Sep 9 00:27:56.630545 systemd-networkd[1470]: 
calica90f9d6b5e: Link UP Sep 9 00:27:56.630828 systemd-networkd[1470]: calica90f9d6b5e: Gained carrier Sep 9 00:27:56.856813 containerd[1569]: 2025-09-09 00:27:54.546 [INFO][4309] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--67775dcb87--njkg7-eth0 whisker-67775dcb87- calico-system f5fa6469-7a44-4ac3-83f6-d16df4a39e39 1011 0 2025-09-09 00:27:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67775dcb87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-67775dcb87-njkg7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calica90f9d6b5e [] [] }} ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" Namespace="calico-system" Pod="whisker-67775dcb87-njkg7" WorkloadEndpoint="localhost-k8s-whisker--67775dcb87--njkg7-" Sep 9 00:27:56.856813 containerd[1569]: 2025-09-09 00:27:54.546 [INFO][4309] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" Namespace="calico-system" Pod="whisker-67775dcb87-njkg7" WorkloadEndpoint="localhost-k8s-whisker--67775dcb87--njkg7-eth0" Sep 9 00:27:56.856813 containerd[1569]: 2025-09-09 00:27:56.339 [INFO][4406] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" HandleID="k8s-pod-network.6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" Workload="localhost-k8s-whisker--67775dcb87--njkg7-eth0" Sep 9 00:27:56.857402 containerd[1569]: 2025-09-09 00:27:56.344 [INFO][4406] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" HandleID="k8s-pod-network.6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" 
Workload="localhost-k8s-whisker--67775dcb87--njkg7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e9d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-67775dcb87-njkg7", "timestamp":"2025-09-09 00:27:56.339884684 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:27:56.857402 containerd[1569]: 2025-09-09 00:27:56.344 [INFO][4406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:27:56.857402 containerd[1569]: 2025-09-09 00:27:56.344 [INFO][4406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:27:56.857402 containerd[1569]: 2025-09-09 00:27:56.344 [INFO][4406] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:27:56.857402 containerd[1569]: 2025-09-09 00:27:56.361 [INFO][4406] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" host="localhost" Sep 9 00:27:56.857402 containerd[1569]: 2025-09-09 00:27:56.366 [INFO][4406] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:27:56.857402 containerd[1569]: 2025-09-09 00:27:56.370 [INFO][4406] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:27:56.857402 containerd[1569]: 2025-09-09 00:27:56.371 [INFO][4406] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:27:56.857402 containerd[1569]: 2025-09-09 00:27:56.373 [INFO][4406] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:27:56.857402 containerd[1569]: 2025-09-09 00:27:56.373 [INFO][4406] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" host="localhost" Sep 9 00:27:56.857752 containerd[1569]: 2025-09-09 00:27:56.374 [INFO][4406] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02 Sep 9 00:27:56.857752 containerd[1569]: 2025-09-09 00:27:56.412 [INFO][4406] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" host="localhost" Sep 9 00:27:56.857752 containerd[1569]: 2025-09-09 00:27:56.561 [INFO][4406] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" host="localhost" Sep 9 00:27:56.857752 containerd[1569]: 2025-09-09 00:27:56.561 [INFO][4406] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" host="localhost" Sep 9 00:27:56.857752 containerd[1569]: 2025-09-09 00:27:56.561 [INFO][4406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:27:56.857752 containerd[1569]: 2025-09-09 00:27:56.561 [INFO][4406] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" HandleID="k8s-pod-network.6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" Workload="localhost-k8s-whisker--67775dcb87--njkg7-eth0" Sep 9 00:27:56.857926 containerd[1569]: 2025-09-09 00:27:56.564 [INFO][4309] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" Namespace="calico-system" Pod="whisker-67775dcb87-njkg7" WorkloadEndpoint="localhost-k8s-whisker--67775dcb87--njkg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67775dcb87--njkg7-eth0", GenerateName:"whisker-67775dcb87-", Namespace:"calico-system", SelfLink:"", UID:"f5fa6469-7a44-4ac3-83f6-d16df4a39e39", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67775dcb87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-67775dcb87-njkg7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica90f9d6b5e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:27:56.857926 containerd[1569]: 2025-09-09 00:27:56.565 [INFO][4309] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" Namespace="calico-system" Pod="whisker-67775dcb87-njkg7" WorkloadEndpoint="localhost-k8s-whisker--67775dcb87--njkg7-eth0" Sep 9 00:27:56.858034 containerd[1569]: 2025-09-09 00:27:56.565 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica90f9d6b5e ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" Namespace="calico-system" Pod="whisker-67775dcb87-njkg7" WorkloadEndpoint="localhost-k8s-whisker--67775dcb87--njkg7-eth0" Sep 9 00:27:56.858034 containerd[1569]: 2025-09-09 00:27:56.629 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" Namespace="calico-system" Pod="whisker-67775dcb87-njkg7" WorkloadEndpoint="localhost-k8s-whisker--67775dcb87--njkg7-eth0" Sep 9 00:27:56.858313 containerd[1569]: 2025-09-09 00:27:56.629 [INFO][4309] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" Namespace="calico-system" Pod="whisker-67775dcb87-njkg7" WorkloadEndpoint="localhost-k8s-whisker--67775dcb87--njkg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--67775dcb87--njkg7-eth0", GenerateName:"whisker-67775dcb87-", Namespace:"calico-system", SelfLink:"", UID:"f5fa6469-7a44-4ac3-83f6-d16df4a39e39", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 52, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67775dcb87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02", Pod:"whisker-67775dcb87-njkg7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica90f9d6b5e", MAC:"d2:97:bd:a0:94:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:27:56.858396 containerd[1569]: 2025-09-09 00:27:56.851 [INFO][4309] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" Namespace="calico-system" Pod="whisker-67775dcb87-njkg7" WorkloadEndpoint="localhost-k8s-whisker--67775dcb87--njkg7-eth0" Sep 9 00:27:57.164252 containerd[1569]: time="2025-09-09T00:27:57.164160926Z" level=info msg="connecting to shim 6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02" address="unix:///run/containerd/s/758efc2a909ac4371bc7968547616853a435ecbc31cd5f71c2cac86e7a6738a0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:27:57.200290 systemd[1]: Started cri-containerd-6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02.scope - libcontainer container 6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02. 
Sep 9 00:27:57.223739 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:27:57.618589 containerd[1569]: time="2025-09-09T00:27:57.618495025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67775dcb87-njkg7,Uid:f5fa6469-7a44-4ac3-83f6-d16df4a39e39,Namespace:calico-system,Attempt:0,} returns sandbox id \"6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02\"" Sep 9 00:27:57.620210 containerd[1569]: time="2025-09-09T00:27:57.620177011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:27:58.671624 systemd-networkd[1470]: calica90f9d6b5e: Gained IPv6LL Sep 9 00:27:59.975663 containerd[1569]: time="2025-09-09T00:27:59.975583156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:59.977465 containerd[1569]: time="2025-09-09T00:27:59.977355069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:27:59.979983 containerd[1569]: time="2025-09-09T00:27:59.979900411Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:59.984603 containerd[1569]: time="2025-09-09T00:27:59.984543089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:27:59.985457 containerd[1569]: time="2025-09-09T00:27:59.985421646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.365207774s" Sep 9 00:27:59.985554 containerd[1569]: time="2025-09-09T00:27:59.985460260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:27:59.991989 containerd[1569]: time="2025-09-09T00:27:59.991304462Z" level=info msg="CreateContainer within sandbox \"6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:28:00.005919 containerd[1569]: time="2025-09-09T00:28:00.005846451Z" level=info msg="Container 3bbd405b3e4baf629d67078fa8f85a449e5090cbf949b6416a574846b192094e: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:28:00.018536 containerd[1569]: time="2025-09-09T00:28:00.018455421Z" level=info msg="CreateContainer within sandbox \"6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3bbd405b3e4baf629d67078fa8f85a449e5090cbf949b6416a574846b192094e\"" Sep 9 00:28:00.019187 containerd[1569]: time="2025-09-09T00:28:00.019158664Z" level=info msg="StartContainer for \"3bbd405b3e4baf629d67078fa8f85a449e5090cbf949b6416a574846b192094e\"" Sep 9 00:28:00.020621 containerd[1569]: time="2025-09-09T00:28:00.020490616Z" level=info msg="connecting to shim 3bbd405b3e4baf629d67078fa8f85a449e5090cbf949b6416a574846b192094e" address="unix:///run/containerd/s/758efc2a909ac4371bc7968547616853a435ecbc31cd5f71c2cac86e7a6738a0" protocol=ttrpc version=3 Sep 9 00:28:00.055735 systemd[1]: Started cri-containerd-3bbd405b3e4baf629d67078fa8f85a449e5090cbf949b6416a574846b192094e.scope - libcontainer container 3bbd405b3e4baf629d67078fa8f85a449e5090cbf949b6416a574846b192094e. 
Sep 9 00:28:00.115703 containerd[1569]: time="2025-09-09T00:28:00.115645024Z" level=info msg="StartContainer for \"3bbd405b3e4baf629d67078fa8f85a449e5090cbf949b6416a574846b192094e\" returns successfully" Sep 9 00:28:00.118609 containerd[1569]: time="2025-09-09T00:28:00.118578184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:28:00.526886 containerd[1569]: time="2025-09-09T00:28:00.526836389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-p9srl,Uid:afcc42bb-db92-4dd9-85b7-d4cf431a9e03,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:28:00.640068 systemd[1]: Started sshd@7-10.0.0.40:22-10.0.0.1:33578.service - OpenSSH per-connection server daemon (10.0.0.1:33578). Sep 9 00:28:00.758682 sshd[4540]: Accepted publickey for core from 10.0.0.1 port 33578 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:28:00.761757 sshd-session[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:28:00.772945 systemd-logind[1517]: New session 8 of user core. Sep 9 00:28:00.777785 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 9 00:28:00.782998 systemd-networkd[1470]: cali357dc70381b: Link UP Sep 9 00:28:00.784923 systemd-networkd[1470]: cali357dc70381b: Gained carrier Sep 9 00:28:00.806210 containerd[1569]: 2025-09-09 00:28:00.651 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0 calico-apiserver-ddcccdf47- calico-apiserver afcc42bb-db92-4dd9-85b7-d4cf431a9e03 903 0 2025-09-09 00:27:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ddcccdf47 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ddcccdf47-p9srl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali357dc70381b [] [] }} ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-p9srl" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-" Sep 9 00:28:00.806210 containerd[1569]: 2025-09-09 00:28:00.652 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-p9srl" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0" Sep 9 00:28:00.806210 containerd[1569]: 2025-09-09 00:28:00.725 [INFO][4542] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" HandleID="k8s-pod-network.d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Workload="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0" Sep 9 00:28:00.806461 containerd[1569]: 2025-09-09 00:28:00.725 [INFO][4542] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" HandleID="k8s-pod-network.d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Workload="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ddcccdf47-p9srl", "timestamp":"2025-09-09 00:28:00.725383631 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:28:00.806461 containerd[1569]: 2025-09-09 00:28:00.725 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:28:00.806461 containerd[1569]: 2025-09-09 00:28:00.725 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:28:00.806461 containerd[1569]: 2025-09-09 00:28:00.725 [INFO][4542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:28:00.806461 containerd[1569]: 2025-09-09 00:28:00.734 [INFO][4542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" host="localhost" Sep 9 00:28:00.806461 containerd[1569]: 2025-09-09 00:28:00.744 [INFO][4542] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:28:00.806461 containerd[1569]: 2025-09-09 00:28:00.750 [INFO][4542] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:28:00.806461 containerd[1569]: 2025-09-09 00:28:00.752 [INFO][4542] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:00.806461 containerd[1569]: 2025-09-09 00:28:00.755 [INFO][4542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:00.806461 containerd[1569]: 2025-09-09 00:28:00.755 [INFO][4542] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" host="localhost" Sep 9 00:28:00.806805 containerd[1569]: 2025-09-09 00:28:00.756 [INFO][4542] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081 Sep 9 00:28:00.806805 containerd[1569]: 2025-09-09 00:28:00.763 [INFO][4542] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" host="localhost" Sep 9 00:28:00.806805 containerd[1569]: 2025-09-09 00:28:00.770 [INFO][4542] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" host="localhost" Sep 9 00:28:00.806805 containerd[1569]: 2025-09-09 00:28:00.771 [INFO][4542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" host="localhost" Sep 9 00:28:00.806805 containerd[1569]: 2025-09-09 00:28:00.771 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:28:00.806805 containerd[1569]: 2025-09-09 00:28:00.771 [INFO][4542] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" HandleID="k8s-pod-network.d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Workload="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0" Sep 9 00:28:00.806995 containerd[1569]: 2025-09-09 00:28:00.776 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-p9srl" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0", GenerateName:"calico-apiserver-ddcccdf47-", Namespace:"calico-apiserver", SelfLink:"", UID:"afcc42bb-db92-4dd9-85b7-d4cf431a9e03", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ddcccdf47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ddcccdf47-p9srl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali357dc70381b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:00.807052 containerd[1569]: 2025-09-09 00:28:00.776 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-p9srl" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0" Sep 9 00:28:00.807052 containerd[1569]: 2025-09-09 00:28:00.776 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali357dc70381b ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-p9srl" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0" Sep 9 00:28:00.807052 containerd[1569]: 2025-09-09 00:28:00.787 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-p9srl" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0" Sep 9 00:28:00.807144 containerd[1569]: 2025-09-09 00:28:00.787 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-p9srl" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0", GenerateName:"calico-apiserver-ddcccdf47-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"afcc42bb-db92-4dd9-85b7-d4cf431a9e03", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ddcccdf47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081", Pod:"calico-apiserver-ddcccdf47-p9srl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali357dc70381b", MAC:"ce:8b:63:8d:58:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:00.807237 containerd[1569]: 2025-09-09 00:28:00.800 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-p9srl" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--p9srl-eth0" Sep 9 00:28:00.840953 containerd[1569]: time="2025-09-09T00:28:00.840844163Z" level=info msg="connecting to shim d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081" address="unix:///run/containerd/s/93e266e45e9a77a43dc249034477d9c374918e08b27407303c5e78e8641a2ad0" namespace=k8s.io protocol=ttrpc 
version=3 Sep 9 00:28:00.872871 systemd[1]: Started cri-containerd-d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081.scope - libcontainer container d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081. Sep 9 00:28:00.895376 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:28:01.001994 containerd[1569]: time="2025-09-09T00:28:01.001949710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-p9srl,Uid:afcc42bb-db92-4dd9-85b7-d4cf431a9e03,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081\"" Sep 9 00:28:01.079577 sshd[4553]: Connection closed by 10.0.0.1 port 33578 Sep 9 00:28:01.080008 sshd-session[4540]: pam_unix(sshd:session): session closed for user core Sep 9 00:28:01.085226 systemd[1]: sshd@7-10.0.0.40:22-10.0.0.1:33578.service: Deactivated successfully. Sep 9 00:28:01.088116 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:28:01.089071 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:28:01.090656 systemd-logind[1517]: Removed session 8. 
Sep 9 00:28:01.526058 kubelet[2764]: E0909 00:28:01.525907 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:01.527004 containerd[1569]: time="2025-09-09T00:28:01.526360285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-j5f98,Uid:0af48b9c-f66e-4da9-994e-e74d6dd7e90d,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:28:01.527004 containerd[1569]: time="2025-09-09T00:28:01.526694372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fzvbd,Uid:734cf28b-2429-47be-8f5f-838bba2bec22,Namespace:kube-system,Attempt:0,}" Sep 9 00:28:01.846232 systemd-networkd[1470]: cali1d7cb44b56f: Link UP Sep 9 00:28:01.846618 systemd-networkd[1470]: cali1d7cb44b56f: Gained carrier Sep 9 00:28:01.864599 containerd[1569]: 2025-09-09 00:28:01.741 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0 coredns-674b8bbfcf- kube-system 734cf28b-2429-47be-8f5f-838bba2bec22 910 0 2025-09-09 00:27:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fzvbd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1d7cb44b56f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzvbd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fzvbd-" Sep 9 00:28:01.864599 containerd[1569]: 2025-09-09 00:28:01.742 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-fzvbd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0" Sep 9 00:28:01.864599 containerd[1569]: 2025-09-09 00:28:01.778 [INFO][4654] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" HandleID="k8s-pod-network.2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" Workload="localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0" Sep 9 00:28:01.864941 containerd[1569]: 2025-09-09 00:28:01.778 [INFO][4654] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" HandleID="k8s-pod-network.2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" Workload="localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fzvbd", "timestamp":"2025-09-09 00:28:01.778142123 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:28:01.864941 containerd[1569]: 2025-09-09 00:28:01.778 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:28:01.864941 containerd[1569]: 2025-09-09 00:28:01.778 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:28:01.864941 containerd[1569]: 2025-09-09 00:28:01.778 [INFO][4654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:28:01.864941 containerd[1569]: 2025-09-09 00:28:01.787 [INFO][4654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" host="localhost" Sep 9 00:28:01.864941 containerd[1569]: 2025-09-09 00:28:01.792 [INFO][4654] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:28:01.864941 containerd[1569]: 2025-09-09 00:28:01.797 [INFO][4654] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:28:01.864941 containerd[1569]: 2025-09-09 00:28:01.800 [INFO][4654] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:01.864941 containerd[1569]: 2025-09-09 00:28:01.802 [INFO][4654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:01.864941 containerd[1569]: 2025-09-09 00:28:01.802 [INFO][4654] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" host="localhost" Sep 9 00:28:01.865273 containerd[1569]: 2025-09-09 00:28:01.804 [INFO][4654] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca Sep 9 00:28:01.865273 containerd[1569]: 2025-09-09 00:28:01.826 [INFO][4654] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" host="localhost" Sep 9 00:28:01.865273 containerd[1569]: 2025-09-09 00:28:01.837 [INFO][4654] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" host="localhost" Sep 9 00:28:01.865273 containerd[1569]: 2025-09-09 00:28:01.838 [INFO][4654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" host="localhost" Sep 9 00:28:01.865273 containerd[1569]: 2025-09-09 00:28:01.838 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:28:01.865273 containerd[1569]: 2025-09-09 00:28:01.838 [INFO][4654] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" HandleID="k8s-pod-network.2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" Workload="localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0" Sep 9 00:28:01.865445 containerd[1569]: 2025-09-09 00:28:01.841 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzvbd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"734cf28b-2429-47be-8f5f-838bba2bec22", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fzvbd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d7cb44b56f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:01.865598 containerd[1569]: 2025-09-09 00:28:01.841 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzvbd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0" Sep 9 00:28:01.865598 containerd[1569]: 2025-09-09 00:28:01.841 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d7cb44b56f ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzvbd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0" Sep 9 00:28:01.865598 containerd[1569]: 2025-09-09 00:28:01.846 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzvbd" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0" Sep 9 00:28:01.865717 containerd[1569]: 2025-09-09 00:28:01.847 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzvbd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"734cf28b-2429-47be-8f5f-838bba2bec22", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca", Pod:"coredns-674b8bbfcf-fzvbd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d7cb44b56f", MAC:"be:fb:d2:47:8c:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:01.865717 containerd[1569]: 2025-09-09 00:28:01.859 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzvbd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fzvbd-eth0" Sep 9 00:28:01.870788 systemd-networkd[1470]: cali357dc70381b: Gained IPv6LL Sep 9 00:28:02.055192 systemd-networkd[1470]: calie67652a4215: Link UP Sep 9 00:28:02.055912 systemd-networkd[1470]: calie67652a4215: Gained carrier Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.741 [INFO][4622] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0 calico-apiserver-ddcccdf47- calico-apiserver 0af48b9c-f66e-4da9-994e-e74d6dd7e90d 900 0 2025-09-09 00:27:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ddcccdf47 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ddcccdf47-j5f98 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie67652a4215 [] [] }} ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-j5f98" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.742 [INFO][4622] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-j5f98" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.783 [INFO][4653] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" HandleID="k8s-pod-network.a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Workload="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.783 [INFO][4653] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" HandleID="k8s-pod-network.a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Workload="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ddcccdf47-j5f98", "timestamp":"2025-09-09 00:28:01.783385518 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.783 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.838 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.838 [INFO][4653] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.889 [INFO][4653] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" host="localhost" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.895 [INFO][4653] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.899 [INFO][4653] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.901 [INFO][4653] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.903 [INFO][4653] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.903 [INFO][4653] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" host="localhost" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:01.904 [INFO][4653] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034 Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:02.021 [INFO][4653] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" host="localhost" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:02.048 [INFO][4653] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" host="localhost" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:02.048 [INFO][4653] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" host="localhost" Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:02.049 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:28:02.088377 containerd[1569]: 2025-09-09 00:28:02.049 [INFO][4653] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" HandleID="k8s-pod-network.a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Workload="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0" Sep 9 00:28:02.089829 containerd[1569]: 2025-09-09 00:28:02.052 [INFO][4622] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-j5f98" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0", GenerateName:"calico-apiserver-ddcccdf47-", Namespace:"calico-apiserver", SelfLink:"", UID:"0af48b9c-f66e-4da9-994e-e74d6dd7e90d", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ddcccdf47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ddcccdf47-j5f98", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie67652a4215", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:02.089829 containerd[1569]: 2025-09-09 00:28:02.052 [INFO][4622] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-j5f98" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0" Sep 9 00:28:02.089829 containerd[1569]: 2025-09-09 00:28:02.052 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie67652a4215 ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-j5f98" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0" Sep 9 00:28:02.089829 containerd[1569]: 2025-09-09 00:28:02.055 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-j5f98" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0" Sep 9 00:28:02.089829 containerd[1569]: 2025-09-09 00:28:02.056 [INFO][4622] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-j5f98" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0", GenerateName:"calico-apiserver-ddcccdf47-", Namespace:"calico-apiserver", SelfLink:"", UID:"0af48b9c-f66e-4da9-994e-e74d6dd7e90d", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ddcccdf47", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034", Pod:"calico-apiserver-ddcccdf47-j5f98", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie67652a4215", MAC:"5a:71:9d:5a:54:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:02.089829 containerd[1569]: 2025-09-09 00:28:02.084 [INFO][4622] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" Namespace="calico-apiserver" Pod="calico-apiserver-ddcccdf47-j5f98" WorkloadEndpoint="localhost-k8s-calico--apiserver--ddcccdf47--j5f98-eth0" Sep 9 00:28:02.362632 containerd[1569]: time="2025-09-09T00:28:02.362560297Z" level=info msg="connecting to shim 2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca" address="unix:///run/containerd/s/a31af955470965c118b847744993fbfcb76b5bfab4f9e7215dde6922797284a1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:02.395698 systemd[1]: Started cri-containerd-2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca.scope - libcontainer container 2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca. Sep 9 00:28:02.409875 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:28:02.525132 containerd[1569]: time="2025-09-09T00:28:02.525015975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fzvbd,Uid:734cf28b-2429-47be-8f5f-838bba2bec22,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca\"" Sep 9 00:28:02.526278 kubelet[2764]: E0909 00:28:02.526230 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:02.527532 containerd[1569]: time="2025-09-09T00:28:02.527320561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7947cbcf4b-vr8w7,Uid:9c06e30f-3ca8-4205-96e1-882cd61294b1,Namespace:calico-system,Attempt:0,}" Sep 9 00:28:02.538874 containerd[1569]: time="2025-09-09T00:28:02.538604653Z" level=info msg="CreateContainer within sandbox \"2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:28:02.581397 
containerd[1569]: time="2025-09-09T00:28:02.581330021Z" level=info msg="connecting to shim a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034" address="unix:///run/containerd/s/8998445561dac2e34c3a9e689778f13efbe20f3a966cc7cbe348c9d9a5b56057" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:02.589237 containerd[1569]: time="2025-09-09T00:28:02.589156359Z" level=info msg="Container dddac9ad26c31fed5f8440df672e594db22e9c2679cb5e112c12bc560fa1242b: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:28:02.614944 systemd[1]: Started cri-containerd-a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034.scope - libcontainer container a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034. Sep 9 00:28:02.616460 containerd[1569]: time="2025-09-09T00:28:02.615623418Z" level=info msg="CreateContainer within sandbox \"2e0c05da03efa4fb051e05b93e6b4ed6f0d6b2a951c3e4cf46c61382ab383aca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dddac9ad26c31fed5f8440df672e594db22e9c2679cb5e112c12bc560fa1242b\"" Sep 9 00:28:02.618539 containerd[1569]: time="2025-09-09T00:28:02.617777667Z" level=info msg="StartContainer for \"dddac9ad26c31fed5f8440df672e594db22e9c2679cb5e112c12bc560fa1242b\"" Sep 9 00:28:02.621588 containerd[1569]: time="2025-09-09T00:28:02.621480921Z" level=info msg="connecting to shim dddac9ad26c31fed5f8440df672e594db22e9c2679cb5e112c12bc560fa1242b" address="unix:///run/containerd/s/a31af955470965c118b847744993fbfcb76b5bfab4f9e7215dde6922797284a1" protocol=ttrpc version=3 Sep 9 00:28:02.642050 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:28:02.646841 systemd[1]: Started cri-containerd-dddac9ad26c31fed5f8440df672e594db22e9c2679cb5e112c12bc560fa1242b.scope - libcontainer container dddac9ad26c31fed5f8440df672e594db22e9c2679cb5e112c12bc560fa1242b. 
Sep 9 00:28:02.705132 containerd[1569]: time="2025-09-09T00:28:02.705060410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ddcccdf47-j5f98,Uid:0af48b9c-f66e-4da9-994e-e74d6dd7e90d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034\"" Sep 9 00:28:02.775003 containerd[1569]: time="2025-09-09T00:28:02.774951742Z" level=info msg="StartContainer for \"dddac9ad26c31fed5f8440df672e594db22e9c2679cb5e112c12bc560fa1242b\" returns successfully" Sep 9 00:28:02.842749 systemd-networkd[1470]: calib589851fa36: Link UP Sep 9 00:28:02.843144 systemd-networkd[1470]: calib589851fa36: Gained carrier Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.633 [INFO][4736] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0 calico-kube-controllers-7947cbcf4b- calico-system 9c06e30f-3ca8-4205-96e1-882cd61294b1 895 0 2025-09-09 00:27:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7947cbcf4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7947cbcf4b-vr8w7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib589851fa36 [] [] }} ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Namespace="calico-system" Pod="calico-kube-controllers-7947cbcf4b-vr8w7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.633 [INFO][4736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Namespace="calico-system" 
Pod="calico-kube-controllers-7947cbcf4b-vr8w7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.771 [INFO][4802] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" HandleID="k8s-pod-network.d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Workload="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.771 [INFO][4802] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" HandleID="k8s-pod-network.d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Workload="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a3d40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7947cbcf4b-vr8w7", "timestamp":"2025-09-09 00:28:02.771146795 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.771 [INFO][4802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.771 [INFO][4802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.772 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.784 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" host="localhost" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.791 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.798 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.800 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.804 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.805 [INFO][4802] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" host="localhost" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.807 [INFO][4802] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9 Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.816 [INFO][4802] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" host="localhost" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.830 [INFO][4802] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" host="localhost" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.830 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" host="localhost" Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.830 [INFO][4802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:28:02.865433 containerd[1569]: 2025-09-09 00:28:02.830 [INFO][4802] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" HandleID="k8s-pod-network.d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Workload="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0" Sep 9 00:28:02.866398 containerd[1569]: 2025-09-09 00:28:02.837 [INFO][4736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Namespace="calico-system" Pod="calico-kube-controllers-7947cbcf4b-vr8w7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0", GenerateName:"calico-kube-controllers-7947cbcf4b-", Namespace:"calico-system", SelfLink:"", UID:"9c06e30f-3ca8-4205-96e1-882cd61294b1", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7947cbcf4b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7947cbcf4b-vr8w7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib589851fa36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:02.866398 containerd[1569]: 2025-09-09 00:28:02.837 [INFO][4736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Namespace="calico-system" Pod="calico-kube-controllers-7947cbcf4b-vr8w7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0" Sep 9 00:28:02.866398 containerd[1569]: 2025-09-09 00:28:02.837 [INFO][4736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib589851fa36 ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Namespace="calico-system" Pod="calico-kube-controllers-7947cbcf4b-vr8w7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0" Sep 9 00:28:02.866398 containerd[1569]: 2025-09-09 00:28:02.844 [INFO][4736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Namespace="calico-system" Pod="calico-kube-controllers-7947cbcf4b-vr8w7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0" Sep 9 00:28:02.866398 containerd[1569]: 2025-09-09 
00:28:02.845 [INFO][4736] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Namespace="calico-system" Pod="calico-kube-controllers-7947cbcf4b-vr8w7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0", GenerateName:"calico-kube-controllers-7947cbcf4b-", Namespace:"calico-system", SelfLink:"", UID:"9c06e30f-3ca8-4205-96e1-882cd61294b1", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7947cbcf4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9", Pod:"calico-kube-controllers-7947cbcf4b-vr8w7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib589851fa36", MAC:"fe:52:13:a4:31:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:02.866398 containerd[1569]: 2025-09-09 
00:28:02.860 [INFO][4736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" Namespace="calico-system" Pod="calico-kube-controllers-7947cbcf4b-vr8w7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7947cbcf4b--vr8w7-eth0" Sep 9 00:28:02.936097 containerd[1569]: time="2025-09-09T00:28:02.936044590Z" level=info msg="connecting to shim d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9" address="unix:///run/containerd/s/e09f31d6966a43d8d22e71e942529191a8cc6caae31acb8d0a71ac7134f65691" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:02.972801 systemd[1]: Started cri-containerd-d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9.scope - libcontainer container d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9. Sep 9 00:28:02.993237 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:28:03.001667 kubelet[2764]: E0909 00:28:03.001630 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:03.160598 containerd[1569]: time="2025-09-09T00:28:03.151800855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7947cbcf4b-vr8w7,Uid:9c06e30f-3ca8-4205-96e1-882cd61294b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9\"" Sep 9 00:28:03.366826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1770103494.mount: Deactivated successfully. 
Sep 9 00:28:03.532187 containerd[1569]: time="2025-09-09T00:28:03.531903318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-q52gn,Uid:4429621a-a5cd-4e34-a55b-31610e55d85d,Namespace:calico-system,Attempt:0,}" Sep 9 00:28:03.533113 containerd[1569]: time="2025-09-09T00:28:03.533062207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4gtb,Uid:0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf,Namespace:calico-system,Attempt:0,}" Sep 9 00:28:03.638434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227102239.mount: Deactivated successfully. Sep 9 00:28:03.770976 containerd[1569]: time="2025-09-09T00:28:03.770906450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:03.773533 containerd[1569]: time="2025-09-09T00:28:03.773056047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:28:03.775651 containerd[1569]: time="2025-09-09T00:28:03.775599196Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:03.778874 containerd[1569]: time="2025-09-09T00:28:03.778832531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:03.779655 containerd[1569]: time="2025-09-09T00:28:03.779625713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.660971623s" Sep 9 00:28:03.779837 containerd[1569]: time="2025-09-09T00:28:03.779815515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:28:03.782997 containerd[1569]: time="2025-09-09T00:28:03.782624109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:28:03.788386 containerd[1569]: time="2025-09-09T00:28:03.788293768Z" level=info msg="CreateContainer within sandbox \"6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:28:03.803950 containerd[1569]: time="2025-09-09T00:28:03.803902265Z" level=info msg="Container d56cdcb75d55bf354c6f0cbfe0cc5ed1c8c99597890d5fba0ffb0f004b2f4e79: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:28:03.844383 containerd[1569]: time="2025-09-09T00:28:03.844328866Z" level=info msg="CreateContainer within sandbox \"6fcc843d2b83a85c9b758cafbfd0c4eec8b260782fdb6f85100d26b0f6916d02\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d56cdcb75d55bf354c6f0cbfe0cc5ed1c8c99597890d5fba0ffb0f004b2f4e79\"" Sep 9 00:28:03.846950 containerd[1569]: time="2025-09-09T00:28:03.845387986Z" level=info msg="StartContainer for \"d56cdcb75d55bf354c6f0cbfe0cc5ed1c8c99597890d5fba0ffb0f004b2f4e79\"" Sep 9 00:28:03.846950 containerd[1569]: time="2025-09-09T00:28:03.846591220Z" level=info msg="connecting to shim d56cdcb75d55bf354c6f0cbfe0cc5ed1c8c99597890d5fba0ffb0f004b2f4e79" address="unix:///run/containerd/s/758efc2a909ac4371bc7968547616853a435ecbc31cd5f71c2cac86e7a6738a0" protocol=ttrpc version=3 Sep 9 00:28:03.854743 systemd-networkd[1470]: cali1d7cb44b56f: Gained IPv6LL Sep 9 00:28:03.877850 systemd[1]: Started 
cri-containerd-d56cdcb75d55bf354c6f0cbfe0cc5ed1c8c99597890d5fba0ffb0f004b2f4e79.scope - libcontainer container d56cdcb75d55bf354c6f0cbfe0cc5ed1c8c99597890d5fba0ffb0f004b2f4e79. Sep 9 00:28:03.917101 systemd-networkd[1470]: cali925e260f81d: Link UP Sep 9 00:28:03.917604 systemd-networkd[1470]: cali925e260f81d: Gained carrier Sep 9 00:28:03.918671 systemd-networkd[1470]: calie67652a4215: Gained IPv6LL Sep 9 00:28:03.944876 kubelet[2764]: I0909 00:28:03.944791 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fzvbd" podStartSLOduration=63.944763111 podStartE2EDuration="1m3.944763111s" podCreationTimestamp="2025-09-09 00:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:28:03.117795736 +0000 UTC m=+67.708854236" watchObservedRunningTime="2025-09-09 00:28:03.944763111 +0000 UTC m=+68.535821611" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.795 [INFO][4899] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--q52gn-eth0 goldmane-54d579b49d- calico-system 4429621a-a5cd-4e34-a55b-31610e55d85d 904 0 2025-09-09 00:27:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-q52gn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali925e260f81d [] [] }} ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Namespace="calico-system" Pod="goldmane-54d579b49d-q52gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--q52gn-" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.796 [INFO][4899] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Namespace="calico-system" Pod="goldmane-54d579b49d-q52gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--q52gn-eth0" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.833 [INFO][4940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" HandleID="k8s-pod-network.ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Workload="localhost-k8s-goldmane--54d579b49d--q52gn-eth0" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.834 [INFO][4940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" HandleID="k8s-pod-network.ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Workload="localhost-k8s-goldmane--54d579b49d--q52gn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae0c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-q52gn", "timestamp":"2025-09-09 00:28:03.833936314 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.834 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.834 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.834 [INFO][4940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.843 [INFO][4940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" host="localhost" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.856 [INFO][4940] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.862 [INFO][4940] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.864 [INFO][4940] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.869 [INFO][4940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.869 [INFO][4940] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" host="localhost" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.871 [INFO][4940] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.880 [INFO][4940] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" host="localhost" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.908 [INFO][4940] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" host="localhost" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.908 [INFO][4940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" host="localhost" Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.908 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:28:03.953388 containerd[1569]: 2025-09-09 00:28:03.908 [INFO][4940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" HandleID="k8s-pod-network.ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Workload="localhost-k8s-goldmane--54d579b49d--q52gn-eth0" Sep 9 00:28:03.954434 containerd[1569]: 2025-09-09 00:28:03.913 [INFO][4899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Namespace="calico-system" Pod="goldmane-54d579b49d-q52gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--q52gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--q52gn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4429621a-a5cd-4e34-a55b-31610e55d85d", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-q52gn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali925e260f81d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:03.954434 containerd[1569]: 2025-09-09 00:28:03.913 [INFO][4899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Namespace="calico-system" Pod="goldmane-54d579b49d-q52gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--q52gn-eth0" Sep 9 00:28:03.954434 containerd[1569]: 2025-09-09 00:28:03.913 [INFO][4899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali925e260f81d ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Namespace="calico-system" Pod="goldmane-54d579b49d-q52gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--q52gn-eth0" Sep 9 00:28:03.954434 containerd[1569]: 2025-09-09 00:28:03.917 [INFO][4899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Namespace="calico-system" Pod="goldmane-54d579b49d-q52gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--q52gn-eth0" Sep 9 00:28:03.954434 containerd[1569]: 2025-09-09 00:28:03.918 [INFO][4899] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Namespace="calico-system" Pod="goldmane-54d579b49d-q52gn" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--q52gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--q52gn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4429621a-a5cd-4e34-a55b-31610e55d85d", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d", Pod:"goldmane-54d579b49d-q52gn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali925e260f81d", MAC:"8e:66:af:4f:f1:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:03.954434 containerd[1569]: 2025-09-09 00:28:03.948 [INFO][4899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" Namespace="calico-system" Pod="goldmane-54d579b49d-q52gn" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--q52gn-eth0" Sep 9 00:28:03.963998 containerd[1569]: time="2025-09-09T00:28:03.963910705Z" level=info msg="StartContainer for 
\"d56cdcb75d55bf354c6f0cbfe0cc5ed1c8c99597890d5fba0ffb0f004b2f4e79\" returns successfully" Sep 9 00:28:03.999797 containerd[1569]: time="2025-09-09T00:28:03.999740494Z" level=info msg="connecting to shim ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d" address="unix:///run/containerd/s/fce264a86fe639f87c4ac65bcf53547bec822e1a6649de5081e13e1f454bb717" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:04.013584 kubelet[2764]: E0909 00:28:04.013542 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:04.021128 systemd-networkd[1470]: cali27f47a0a068: Link UP Sep 9 00:28:04.022384 systemd-networkd[1470]: cali27f47a0a068: Gained carrier Sep 9 00:28:04.044324 kubelet[2764]: I0909 00:28:04.042721 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-67775dcb87-njkg7" podStartSLOduration=5.881083988 podStartE2EDuration="12.042702187s" podCreationTimestamp="2025-09-09 00:27:52 +0000 UTC" firstStartedPulling="2025-09-09 00:27:57.61987328 +0000 UTC m=+62.210931780" lastFinishedPulling="2025-09-09 00:28:03.781491479 +0000 UTC m=+68.372549979" observedRunningTime="2025-09-09 00:28:04.040446708 +0000 UTC m=+68.631505208" watchObservedRunningTime="2025-09-09 00:28:04.042702187 +0000 UTC m=+68.633760687" Sep 9 00:28:04.044046 systemd[1]: Started cri-containerd-ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d.scope - libcontainer container ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d. 
Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.790 [INFO][4906] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t4gtb-eth0 csi-node-driver- calico-system 0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf 778 0 2025-09-09 00:27:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t4gtb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali27f47a0a068 [] [] }} ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" Namespace="calico-system" Pod="csi-node-driver-t4gtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4gtb-" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.790 [INFO][4906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" Namespace="calico-system" Pod="csi-node-driver-t4gtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4gtb-eth0" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.834 [INFO][4933] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" HandleID="k8s-pod-network.28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" Workload="localhost-k8s-csi--node--driver--t4gtb-eth0" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.834 [INFO][4933] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" HandleID="k8s-pod-network.28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" 
Workload="localhost-k8s-csi--node--driver--t4gtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t4gtb", "timestamp":"2025-09-09 00:28:03.834141284 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.834 [INFO][4933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.908 [INFO][4933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.908 [INFO][4933] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.946 [INFO][4933] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" host="localhost" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.959 [INFO][4933] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.968 [INFO][4933] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.972 [INFO][4933] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.976 [INFO][4933] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.976 [INFO][4933] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" host="localhost" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.979 [INFO][4933] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.984 [INFO][4933] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" host="localhost" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.995 [INFO][4933] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" host="localhost" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.995 [INFO][4933] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" host="localhost" Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.995 [INFO][4933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:28:04.066628 containerd[1569]: 2025-09-09 00:28:03.995 [INFO][4933] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" HandleID="k8s-pod-network.28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" Workload="localhost-k8s-csi--node--driver--t4gtb-eth0" Sep 9 00:28:04.067709 containerd[1569]: 2025-09-09 00:28:04.011 [INFO][4906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" Namespace="calico-system" Pod="csi-node-driver-t4gtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4gtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t4gtb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t4gtb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27f47a0a068", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:04.067709 containerd[1569]: 2025-09-09 00:28:04.011 [INFO][4906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" Namespace="calico-system" Pod="csi-node-driver-t4gtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4gtb-eth0" Sep 9 00:28:04.067709 containerd[1569]: 2025-09-09 00:28:04.011 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27f47a0a068 ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" Namespace="calico-system" Pod="csi-node-driver-t4gtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4gtb-eth0" Sep 9 00:28:04.067709 containerd[1569]: 2025-09-09 00:28:04.023 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" Namespace="calico-system" Pod="csi-node-driver-t4gtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4gtb-eth0" Sep 9 00:28:04.067709 containerd[1569]: 2025-09-09 00:28:04.023 [INFO][4906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" Namespace="calico-system" Pod="csi-node-driver-t4gtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4gtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t4gtb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 16, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c", Pod:"csi-node-driver-t4gtb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27f47a0a068", MAC:"9a:0d:60:ea:be:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:04.067709 containerd[1569]: 2025-09-09 00:28:04.049 [INFO][4906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" Namespace="calico-system" Pod="csi-node-driver-t4gtb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t4gtb-eth0" Sep 9 00:28:04.081730 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:28:04.139879 containerd[1569]: time="2025-09-09T00:28:04.139815242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-q52gn,Uid:4429621a-a5cd-4e34-a55b-31610e55d85d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d\"" Sep 9 00:28:04.151491 containerd[1569]: 
time="2025-09-09T00:28:04.151403276Z" level=info msg="connecting to shim 28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c" address="unix:///run/containerd/s/e46c76bbd3d2713b0aa4aeaa2faa88a77aced9e33bb993084dd0a631a6560fa0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:04.177692 systemd-networkd[1470]: calib589851fa36: Gained IPv6LL Sep 9 00:28:04.181680 systemd[1]: Started cri-containerd-28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c.scope - libcontainer container 28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c. Sep 9 00:28:04.197910 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:28:04.362955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262558508.mount: Deactivated successfully. Sep 9 00:28:04.526122 kubelet[2764]: E0909 00:28:04.526084 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:04.526286 kubelet[2764]: E0909 00:28:04.526267 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:04.526816 containerd[1569]: time="2025-09-09T00:28:04.526747862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cn528,Uid:8da125d6-0b64-44f6-a7b4-cbc14725e524,Namespace:kube-system,Attempt:0,}" Sep 9 00:28:04.629238 containerd[1569]: time="2025-09-09T00:28:04.629096075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t4gtb,Uid:0a6cadb9-36e6-4cbb-bf0b-2c80c499a1bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c\"" Sep 9 00:28:04.763830 systemd-networkd[1470]: calic42c07ffbdf: Link UP Sep 9 00:28:04.764066 systemd-networkd[1470]: 
calic42c07ffbdf: Gained carrier Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.686 [INFO][5102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--cn528-eth0 coredns-674b8bbfcf- kube-system 8da125d6-0b64-44f6-a7b4-cbc14725e524 896 0 2025-09-09 00:27:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-cn528 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic42c07ffbdf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn528" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cn528-" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.686 [INFO][5102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn528" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cn528-eth0" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.714 [INFO][5116] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" HandleID="k8s-pod-network.f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Workload="localhost-k8s-coredns--674b8bbfcf--cn528-eth0" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.714 [INFO][5116] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" HandleID="k8s-pod-network.f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Workload="localhost-k8s-coredns--674b8bbfcf--cn528-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7940), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-cn528", "timestamp":"2025-09-09 00:28:04.714479198 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.714 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.714 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.714 [INFO][5116] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.722 [INFO][5116] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" host="localhost" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.727 [INFO][5116] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.733 [INFO][5116] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.735 [INFO][5116] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.737 [INFO][5116] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.737 [INFO][5116] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" host="localhost" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.739 [INFO][5116] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.743 [INFO][5116] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" host="localhost" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.756 [INFO][5116] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" host="localhost" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.756 [INFO][5116] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" host="localhost" Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.756 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:28:04.784862 containerd[1569]: 2025-09-09 00:28:04.756 [INFO][5116] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" HandleID="k8s-pod-network.f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Workload="localhost-k8s-coredns--674b8bbfcf--cn528-eth0" Sep 9 00:28:04.785911 containerd[1569]: 2025-09-09 00:28:04.761 [INFO][5102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn528" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cn528-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cn528-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8da125d6-0b64-44f6-a7b4-cbc14725e524", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-cn528", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic42c07ffbdf", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:04.785911 containerd[1569]: 2025-09-09 00:28:04.761 [INFO][5102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn528" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cn528-eth0" Sep 9 00:28:04.785911 containerd[1569]: 2025-09-09 00:28:04.761 [INFO][5102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic42c07ffbdf ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn528" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cn528-eth0" Sep 9 00:28:04.785911 containerd[1569]: 2025-09-09 00:28:04.764 [INFO][5102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn528" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cn528-eth0" Sep 9 00:28:04.785911 containerd[1569]: 2025-09-09 00:28:04.765 [INFO][5102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn528" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cn528-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cn528-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8da125d6-0b64-44f6-a7b4-cbc14725e524", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 27, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d", Pod:"coredns-674b8bbfcf-cn528", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic42c07ffbdf", MAC:"4e:11:03:12:fd:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:28:04.785911 containerd[1569]: 2025-09-09 00:28:04.780 [INFO][5102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn528" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cn528-eth0" Sep 9 00:28:04.820137 containerd[1569]: time="2025-09-09T00:28:04.820075511Z" level=info msg="connecting to shim f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d" address="unix:///run/containerd/s/f16d6b492744076576f04336b9e7ec6bb5c6a9b6e42f3ed4b6b2d8ce3790ce45" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:28:04.853881 systemd[1]: Started cri-containerd-f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d.scope - libcontainer container f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d. Sep 9 00:28:04.881933 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:28:04.920870 containerd[1569]: time="2025-09-09T00:28:04.920702864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cn528,Uid:8da125d6-0b64-44f6-a7b4-cbc14725e524,Namespace:kube-system,Attempt:0,} returns sandbox id \"f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d\"" Sep 9 00:28:04.922227 kubelet[2764]: E0909 00:28:04.921970 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:04.935926 containerd[1569]: time="2025-09-09T00:28:04.935852414Z" level=info msg="CreateContainer within sandbox \"f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:28:04.959578 containerd[1569]: time="2025-09-09T00:28:04.959446647Z" level=info msg="Container 790a7b369ec342d5ec523c23e7a8ee1698c4b8e00920769d488e6214dc0b04e1: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:28:04.969545 containerd[1569]: time="2025-09-09T00:28:04.969440512Z" level=info 
msg="CreateContainer within sandbox \"f46328bb9185a44cd5a608325cc999d128c2444aac4406d1219b91268975553d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"790a7b369ec342d5ec523c23e7a8ee1698c4b8e00920769d488e6214dc0b04e1\"" Sep 9 00:28:04.970481 containerd[1569]: time="2025-09-09T00:28:04.970405912Z" level=info msg="StartContainer for \"790a7b369ec342d5ec523c23e7a8ee1698c4b8e00920769d488e6214dc0b04e1\"" Sep 9 00:28:04.972807 containerd[1569]: time="2025-09-09T00:28:04.972766280Z" level=info msg="connecting to shim 790a7b369ec342d5ec523c23e7a8ee1698c4b8e00920769d488e6214dc0b04e1" address="unix:///run/containerd/s/f16d6b492744076576f04336b9e7ec6bb5c6a9b6e42f3ed4b6b2d8ce3790ce45" protocol=ttrpc version=3 Sep 9 00:28:05.002839 systemd[1]: Started cri-containerd-790a7b369ec342d5ec523c23e7a8ee1698c4b8e00920769d488e6214dc0b04e1.scope - libcontainer container 790a7b369ec342d5ec523c23e7a8ee1698c4b8e00920769d488e6214dc0b04e1. Sep 9 00:28:05.023939 kubelet[2764]: E0909 00:28:05.023879 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:05.070756 systemd-networkd[1470]: cali925e260f81d: Gained IPv6LL Sep 9 00:28:05.314748 containerd[1569]: time="2025-09-09T00:28:05.314698434Z" level=info msg="StartContainer for \"790a7b369ec342d5ec523c23e7a8ee1698c4b8e00920769d488e6214dc0b04e1\" returns successfully" Sep 9 00:28:05.582689 systemd-networkd[1470]: cali27f47a0a068: Gained IPv6LL Sep 9 00:28:05.903886 systemd-networkd[1470]: calic42c07ffbdf: Gained IPv6LL Sep 9 00:28:06.026657 kubelet[2764]: E0909 00:28:06.026606 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:06.092100 systemd[1]: Started sshd@8-10.0.0.40:22-10.0.0.1:33594.service - OpenSSH per-connection server daemon (10.0.0.1:33594). 
Sep 9 00:28:06.193414 kubelet[2764]: I0909 00:28:06.192937 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cn528" podStartSLOduration=66.19291319 podStartE2EDuration="1m6.19291319s" podCreationTimestamp="2025-09-09 00:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:28:06.085680239 +0000 UTC m=+70.676738739" watchObservedRunningTime="2025-09-09 00:28:06.19291319 +0000 UTC m=+70.783971690" Sep 9 00:28:06.212077 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 33594 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:28:06.215278 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:28:06.226597 systemd-logind[1517]: New session 9 of user core. Sep 9 00:28:06.231775 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:28:06.393814 sshd[5228]: Connection closed by 10.0.0.1 port 33594 Sep 9 00:28:06.394280 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Sep 9 00:28:06.400276 systemd[1]: sshd@8-10.0.0.40:22-10.0.0.1:33594.service: Deactivated successfully. Sep 9 00:28:06.402838 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:28:06.404058 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:28:06.405648 systemd-logind[1517]: Removed session 9. 
Sep 9 00:28:07.029456 kubelet[2764]: E0909 00:28:07.029241 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:08.032444 kubelet[2764]: E0909 00:28:08.032381 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:09.126021 containerd[1569]: time="2025-09-09T00:28:09.125920426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:09.190988 containerd[1569]: time="2025-09-09T00:28:09.190871118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:28:09.272796 containerd[1569]: time="2025-09-09T00:28:09.272700336Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:09.331784 containerd[1569]: time="2025-09-09T00:28:09.331695462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:09.332682 containerd[1569]: time="2025-09-09T00:28:09.332624489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.549962207s" Sep 9 00:28:09.332682 containerd[1569]: time="2025-09-09T00:28:09.332670927Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:28:09.333680 containerd[1569]: time="2025-09-09T00:28:09.333653215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:28:09.479973 containerd[1569]: time="2025-09-09T00:28:09.479871938Z" level=info msg="CreateContainer within sandbox \"d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:28:09.649988 containerd[1569]: time="2025-09-09T00:28:09.649932551Z" level=info msg="Container baa43dfd4d44a355c8155744cc77d1e523d3be743daa351aa6990340f3b7eeb5: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:28:09.677552 containerd[1569]: time="2025-09-09T00:28:09.677340014Z" level=info msg="CreateContainer within sandbox \"d4df13e989d7e1710694b367b6fe04e938f4ef0652aadb4bb44e0a3a8ed6a081\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"baa43dfd4d44a355c8155744cc77d1e523d3be743daa351aa6990340f3b7eeb5\"" Sep 9 00:28:09.679662 containerd[1569]: time="2025-09-09T00:28:09.679565607Z" level=info msg="StartContainer for \"baa43dfd4d44a355c8155744cc77d1e523d3be743daa351aa6990340f3b7eeb5\"" Sep 9 00:28:09.684080 containerd[1569]: time="2025-09-09T00:28:09.683986576Z" level=info msg="connecting to shim baa43dfd4d44a355c8155744cc77d1e523d3be743daa351aa6990340f3b7eeb5" address="unix:///run/containerd/s/93e266e45e9a77a43dc249034477d9c374918e08b27407303c5e78e8641a2ad0" protocol=ttrpc version=3 Sep 9 00:28:09.780000 systemd[1]: Started cri-containerd-baa43dfd4d44a355c8155744cc77d1e523d3be743daa351aa6990340f3b7eeb5.scope - libcontainer container baa43dfd4d44a355c8155744cc77d1e523d3be743daa351aa6990340f3b7eeb5. 
Sep 9 00:28:09.953831 containerd[1569]: time="2025-09-09T00:28:09.953773398Z" level=info msg="StartContainer for \"baa43dfd4d44a355c8155744cc77d1e523d3be743daa351aa6990340f3b7eeb5\" returns successfully" Sep 9 00:28:10.086271 kubelet[2764]: I0909 00:28:10.086098 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ddcccdf47-p9srl" podStartSLOduration=48.756826994 podStartE2EDuration="57.086046741s" podCreationTimestamp="2025-09-09 00:27:13 +0000 UTC" firstStartedPulling="2025-09-09 00:28:01.004302409 +0000 UTC m=+65.595360909" lastFinishedPulling="2025-09-09 00:28:09.333522126 +0000 UTC m=+73.924580656" observedRunningTime="2025-09-09 00:28:10.084457789 +0000 UTC m=+74.675516289" watchObservedRunningTime="2025-09-09 00:28:10.086046741 +0000 UTC m=+74.677105241" Sep 9 00:28:10.346959 containerd[1569]: time="2025-09-09T00:28:10.346784972Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:10.350406 containerd[1569]: time="2025-09-09T00:28:10.350353879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:28:10.357223 containerd[1569]: time="2025-09-09T00:28:10.357152526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 1.023464244s" Sep 9 00:28:10.357223 containerd[1569]: time="2025-09-09T00:28:10.357212921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:28:10.358616 containerd[1569]: 
time="2025-09-09T00:28:10.358536618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:28:10.364593 containerd[1569]: time="2025-09-09T00:28:10.364546335Z" level=info msg="CreateContainer within sandbox \"a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:28:10.393540 containerd[1569]: time="2025-09-09T00:28:10.392768418Z" level=info msg="Container f8fadbe87a44c7d9b6a0763dfe94fc6e6db32441f79b00b705e51ac2df9f5756: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:28:10.407998 containerd[1569]: time="2025-09-09T00:28:10.407933734Z" level=info msg="CreateContainer within sandbox \"a277ac33c167b64e6168d7f7f72665468090847cd73982d67465c1ae55179034\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f8fadbe87a44c7d9b6a0763dfe94fc6e6db32441f79b00b705e51ac2df9f5756\"" Sep 9 00:28:10.408850 containerd[1569]: time="2025-09-09T00:28:10.408803808Z" level=info msg="StartContainer for \"f8fadbe87a44c7d9b6a0763dfe94fc6e6db32441f79b00b705e51ac2df9f5756\"" Sep 9 00:28:10.410232 containerd[1569]: time="2025-09-09T00:28:10.410189173Z" level=info msg="connecting to shim f8fadbe87a44c7d9b6a0763dfe94fc6e6db32441f79b00b705e51ac2df9f5756" address="unix:///run/containerd/s/8998445561dac2e34c3a9e689778f13efbe20f3a966cc7cbe348c9d9a5b56057" protocol=ttrpc version=3 Sep 9 00:28:10.440914 systemd[1]: Started cri-containerd-f8fadbe87a44c7d9b6a0763dfe94fc6e6db32441f79b00b705e51ac2df9f5756.scope - libcontainer container f8fadbe87a44c7d9b6a0763dfe94fc6e6db32441f79b00b705e51ac2df9f5756. 
Sep 9 00:28:10.527426 kubelet[2764]: E0909 00:28:10.526556 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:10.532771 containerd[1569]: time="2025-09-09T00:28:10.532707646Z" level=info msg="StartContainer for \"f8fadbe87a44c7d9b6a0763dfe94fc6e6db32441f79b00b705e51ac2df9f5756\" returns successfully" Sep 9 00:28:11.042862 kubelet[2764]: I0909 00:28:11.042819 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:28:11.237247 kubelet[2764]: I0909 00:28:11.236917 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ddcccdf47-j5f98" podStartSLOduration=50.585505338 podStartE2EDuration="58.236882343s" podCreationTimestamp="2025-09-09 00:27:13 +0000 UTC" firstStartedPulling="2025-09-09 00:28:02.706961836 +0000 UTC m=+67.298020336" lastFinishedPulling="2025-09-09 00:28:10.358338821 +0000 UTC m=+74.949397341" observedRunningTime="2025-09-09 00:28:11.233294313 +0000 UTC m=+75.824352813" watchObservedRunningTime="2025-09-09 00:28:11.236882343 +0000 UTC m=+75.827940844" Sep 9 00:28:11.419237 systemd[1]: Started sshd@9-10.0.0.40:22-10.0.0.1:44984.service - OpenSSH per-connection server daemon (10.0.0.1:44984). Sep 9 00:28:11.500862 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 44984 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:28:11.516339 sshd-session[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:28:11.521698 systemd-logind[1517]: New session 10 of user core. Sep 9 00:28:11.529693 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 9 00:28:11.995084 sshd[5332]: Connection closed by 10.0.0.1 port 44984 Sep 9 00:28:11.995463 sshd-session[5329]: pam_unix(sshd:session): session closed for user core Sep 9 00:28:11.999751 systemd[1]: sshd@9-10.0.0.40:22-10.0.0.1:44984.service: Deactivated successfully. Sep 9 00:28:12.002010 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:28:12.003139 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:28:12.004520 systemd-logind[1517]: Removed session 10. Sep 9 00:28:14.220409 kubelet[2764]: I0909 00:28:14.220345 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:28:15.036120 containerd[1569]: time="2025-09-09T00:28:15.036023343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:15.125522 containerd[1569]: time="2025-09-09T00:28:15.125426606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 00:28:15.163353 containerd[1569]: time="2025-09-09T00:28:15.163271845Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:15.196066 containerd[1569]: time="2025-09-09T00:28:15.195965684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:28:15.196934 containerd[1569]: time="2025-09-09T00:28:15.196878877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.838295521s" Sep 9 00:28:15.197010 containerd[1569]: time="2025-09-09T00:28:15.196938070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 00:28:15.198452 containerd[1569]: time="2025-09-09T00:28:15.198170921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:28:15.339956 containerd[1569]: time="2025-09-09T00:28:15.339789694Z" level=info msg="CreateContainer within sandbox \"d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:28:15.539900 containerd[1569]: time="2025-09-09T00:28:15.538992891Z" level=info msg="Container db90649040f0a1de878925bdf86c21c9ce2e9ebdd222280804268bbfa3abdf98: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:28:17.008273 systemd[1]: Started sshd@10-10.0.0.40:22-10.0.0.1:44988.service - OpenSSH per-connection server daemon (10.0.0.1:44988). Sep 9 00:28:17.378852 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 44988 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:28:17.383733 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:28:17.391131 systemd-logind[1517]: New session 11 of user core. Sep 9 00:28:17.404792 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 9 00:28:17.441260 containerd[1569]: time="2025-09-09T00:28:17.441216208Z" level=info msg="CreateContainer within sandbox \"d634bf59a29b51fbd732ea8022d685c318edc481cca9d533e15fef5a16247ae9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"db90649040f0a1de878925bdf86c21c9ce2e9ebdd222280804268bbfa3abdf98\"" Sep 9 00:28:17.441938 containerd[1569]: time="2025-09-09T00:28:17.441904263Z" level=info msg="StartContainer for \"db90649040f0a1de878925bdf86c21c9ce2e9ebdd222280804268bbfa3abdf98\"" Sep 9 00:28:17.443447 containerd[1569]: time="2025-09-09T00:28:17.443414649Z" level=info msg="connecting to shim db90649040f0a1de878925bdf86c21c9ce2e9ebdd222280804268bbfa3abdf98" address="unix:///run/containerd/s/e09f31d6966a43d8d22e71e942529191a8cc6caae31acb8d0a71ac7134f65691" protocol=ttrpc version=3 Sep 9 00:28:17.468958 systemd[1]: Started cri-containerd-db90649040f0a1de878925bdf86c21c9ce2e9ebdd222280804268bbfa3abdf98.scope - libcontainer container db90649040f0a1de878925bdf86c21c9ce2e9ebdd222280804268bbfa3abdf98. Sep 9 00:28:17.531935 kubelet[2764]: E0909 00:28:17.531869 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:28:17.692677 containerd[1569]: time="2025-09-09T00:28:17.692562262Z" level=info msg="StartContainer for \"db90649040f0a1de878925bdf86c21c9ce2e9ebdd222280804268bbfa3abdf98\" returns successfully" Sep 9 00:28:17.778104 sshd[5372]: Connection closed by 10.0.0.1 port 44988 Sep 9 00:28:17.780841 sshd-session[5368]: pam_unix(sshd:session): session closed for user core Sep 9 00:28:17.788782 systemd[1]: sshd@10-10.0.0.40:22-10.0.0.1:44988.service: Deactivated successfully. Sep 9 00:28:17.793406 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:28:17.797028 systemd-logind[1517]: Session 11 logged out. Waiting for processes to exit. 
Sep 9 00:28:17.800756 systemd-logind[1517]: Removed session 11. Sep 9 00:28:18.195820 containerd[1569]: time="2025-09-09T00:28:18.195756096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db90649040f0a1de878925bdf86c21c9ce2e9ebdd222280804268bbfa3abdf98\" id:\"0460addbb02f077b2800681181c08f8b50135098b4219d354bed924699fde08a\" pid:5443 exited_at:{seconds:1757377698 nanos:195351718}" Sep 9 00:28:18.272722 kubelet[2764]: I0909 00:28:18.272640 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7947cbcf4b-vr8w7" podStartSLOduration=49.228714113 podStartE2EDuration="1m1.272615914s" podCreationTimestamp="2025-09-09 00:27:17 +0000 UTC" firstStartedPulling="2025-09-09 00:28:03.154136268 +0000 UTC m=+67.745194768" lastFinishedPulling="2025-09-09 00:28:15.198038069 +0000 UTC m=+79.789096569" observedRunningTime="2025-09-09 00:28:18.27222876 +0000 UTC m=+82.863287270" watchObservedRunningTime="2025-09-09 00:28:18.272615914 +0000 UTC m=+82.863674414" Sep 9 00:28:21.095432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956286365.mount: Deactivated successfully. Sep 9 00:28:22.178332 containerd[1569]: time="2025-09-09T00:28:22.177776195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3\" id:\"fed80b686d707dc1b9d4bc856ad368eadf048c5735cb3935a0664fbbed2f0d28\" pid:5474 exited_at:{seconds:1757377702 nanos:177069446}" Sep 9 00:28:23.025815 systemd[1]: Started sshd@11-10.0.0.40:22-10.0.0.1:59428.service - OpenSSH per-connection server daemon (10.0.0.1:59428). Sep 9 00:28:23.121482 sshd[5488]: Accepted publickey for core from 10.0.0.1 port 59428 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:28:23.123957 sshd-session[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:28:23.134616 systemd-logind[1517]: New session 12 of user core. 
Sep 9 00:28:23.141770 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:28:23.619542 sshd[5491]: Connection closed by 10.0.0.1 port 59428 Sep 9 00:28:23.619832 sshd-session[5488]: pam_unix(sshd:session): session closed for user core Sep 9 00:28:23.630140 systemd[1]: sshd@11-10.0.0.40:22-10.0.0.1:59428.service: Deactivated successfully. Sep 9 00:28:23.632626 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:28:23.633625 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:28:23.637352 systemd[1]: Started sshd@12-10.0.0.40:22-10.0.0.1:59436.service - OpenSSH per-connection server daemon (10.0.0.1:59436). Sep 9 00:28:23.638182 systemd-logind[1517]: Removed session 12. Sep 9 00:28:23.784093 sshd[5510]: Accepted publickey for core from 10.0.0.1 port 59436 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA Sep 9 00:28:23.786224 sshd-session[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:28:23.791413 systemd-logind[1517]: New session 13 of user core. Sep 9 00:28:23.802751 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 9 00:28:23.941989 containerd[1569]: time="2025-09-09T00:28:23.941436530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:28:24.027733 containerd[1569]: time="2025-09-09T00:28:24.027630275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 9 00:28:24.085114 containerd[1569]: time="2025-09-09T00:28:24.085034432Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:28:24.143732 containerd[1569]: time="2025-09-09T00:28:24.143055036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:28:24.144475 containerd[1569]: time="2025-09-09T00:28:24.144410664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 8.946183476s"
Sep 9 00:28:24.144475 containerd[1569]: time="2025-09-09T00:28:24.144458104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 9 00:28:24.145639 containerd[1569]: time="2025-09-09T00:28:24.145591351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 9 00:28:24.176009 sshd[5513]: Connection closed by 10.0.0.1 port 59436
Sep 9 00:28:24.176455 sshd-session[5510]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:24.187790 systemd[1]: sshd@12-10.0.0.40:22-10.0.0.1:59436.service: Deactivated successfully.
Sep 9 00:28:24.190366 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 00:28:24.191591 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit.
Sep 9 00:28:24.196066 systemd[1]: Started sshd@13-10.0.0.40:22-10.0.0.1:59442.service - OpenSSH per-connection server daemon (10.0.0.1:59442).
Sep 9 00:28:24.197366 systemd-logind[1517]: Removed session 13.
Sep 9 00:28:24.210062 containerd[1569]: time="2025-09-09T00:28:24.209997310Z" level=info msg="CreateContainer within sandbox \"ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 00:28:24.270811 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 59442 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:24.272811 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:24.278307 systemd-logind[1517]: New session 14 of user core.
Sep 9 00:28:24.284725 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 00:28:24.416605 sshd[5527]: Connection closed by 10.0.0.1 port 59442
Sep 9 00:28:24.417026 sshd-session[5524]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:24.422978 systemd[1]: sshd@13-10.0.0.40:22-10.0.0.1:59442.service: Deactivated successfully.
Sep 9 00:28:24.425422 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 00:28:24.426400 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit.
Sep 9 00:28:24.427694 systemd-logind[1517]: Removed session 14.
Sep 9 00:28:24.721113 containerd[1569]: time="2025-09-09T00:28:24.721040904Z" level=info msg="Container dec7ae1d349921eb51135e6e8c94d558da1dfc2156c520a7cd576cf2f0b19afd: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:28:24.757716 containerd[1569]: time="2025-09-09T00:28:24.757650581Z" level=info msg="CreateContainer within sandbox \"ea34b6c5f61e386acdef6711de363f2429b8ed492a2dbc9d809eb52e8ea1202d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dec7ae1d349921eb51135e6e8c94d558da1dfc2156c520a7cd576cf2f0b19afd\""
Sep 9 00:28:24.758516 containerd[1569]: time="2025-09-09T00:28:24.758400953Z" level=info msg="StartContainer for \"dec7ae1d349921eb51135e6e8c94d558da1dfc2156c520a7cd576cf2f0b19afd\""
Sep 9 00:28:24.762015 containerd[1569]: time="2025-09-09T00:28:24.761961917Z" level=info msg="connecting to shim dec7ae1d349921eb51135e6e8c94d558da1dfc2156c520a7cd576cf2f0b19afd" address="unix:///run/containerd/s/fce264a86fe639f87c4ac65bcf53547bec822e1a6649de5081e13e1f454bb717" protocol=ttrpc version=3
Sep 9 00:28:24.788774 systemd[1]: Started cri-containerd-dec7ae1d349921eb51135e6e8c94d558da1dfc2156c520a7cd576cf2f0b19afd.scope - libcontainer container dec7ae1d349921eb51135e6e8c94d558da1dfc2156c520a7cd576cf2f0b19afd.
Sep 9 00:28:24.993284 containerd[1569]: time="2025-09-09T00:28:24.993148147Z" level=info msg="StartContainer for \"dec7ae1d349921eb51135e6e8c94d558da1dfc2156c520a7cd576cf2f0b19afd\" returns successfully"
Sep 9 00:28:25.207097 kubelet[2764]: I0909 00:28:25.206984 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-q52gn" podStartSLOduration=49.202944417 podStartE2EDuration="1m9.206956145s" podCreationTimestamp="2025-09-09 00:27:16 +0000 UTC" firstStartedPulling="2025-09-09 00:28:04.141363443 +0000 UTC m=+68.732421943" lastFinishedPulling="2025-09-09 00:28:24.14537517 +0000 UTC m=+88.736433671" observedRunningTime="2025-09-09 00:28:25.206928072 +0000 UTC m=+89.797986582" watchObservedRunningTime="2025-09-09 00:28:25.206956145 +0000 UTC m=+89.798014655"
Sep 9 00:28:25.247210 containerd[1569]: time="2025-09-09T00:28:25.246946971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dec7ae1d349921eb51135e6e8c94d558da1dfc2156c520a7cd576cf2f0b19afd\" id:\"77af24175fd70af2ac6255a5f4f8da94e999f1468cf167d16e7b65619af601be\" pid:5589 exit_status:1 exited_at:{seconds:1757377705 nanos:245559253}"
Sep 9 00:28:25.526943 kubelet[2764]: E0909 00:28:25.526792 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:28:26.237531 containerd[1569]: time="2025-09-09T00:28:26.237460478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dec7ae1d349921eb51135e6e8c94d558da1dfc2156c520a7cd576cf2f0b19afd\" id:\"8acf0800e4334d5a2f25192ecdc9e7bf9357960ffad711030fd5da7480e0a765\" pid:5615 exit_status:1 exited_at:{seconds:1757377706 nanos:237033400}"
Sep 9 00:28:27.998115 containerd[1569]: time="2025-09-09T00:28:27.998034954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:28:27.999747 containerd[1569]: time="2025-09-09T00:28:27.999706659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 9 00:28:28.001303 containerd[1569]: time="2025-09-09T00:28:28.001254940Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:28:28.005183 containerd[1569]: time="2025-09-09T00:28:28.005117981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:28:28.005919 containerd[1569]: time="2025-09-09T00:28:28.005867850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.860237707s"
Sep 9 00:28:28.005919 containerd[1569]: time="2025-09-09T00:28:28.005906323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 9 00:28:28.022389 containerd[1569]: time="2025-09-09T00:28:28.022224108Z" level=info msg="CreateContainer within sandbox \"28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 9 00:28:28.048192 containerd[1569]: time="2025-09-09T00:28:28.048119480Z" level=info msg="Container 3a7fb8e1684790394c79b57484dc031b9941372a977fe5b7b96e5c993aa36721: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:28:28.071696 containerd[1569]: time="2025-09-09T00:28:28.071625780Z" level=info msg="CreateContainer within sandbox \"28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3a7fb8e1684790394c79b57484dc031b9941372a977fe5b7b96e5c993aa36721\""
Sep 9 00:28:28.072342 containerd[1569]: time="2025-09-09T00:28:28.072310396Z" level=info msg="StartContainer for \"3a7fb8e1684790394c79b57484dc031b9941372a977fe5b7b96e5c993aa36721\""
Sep 9 00:28:28.074178 containerd[1569]: time="2025-09-09T00:28:28.074131803Z" level=info msg="connecting to shim 3a7fb8e1684790394c79b57484dc031b9941372a977fe5b7b96e5c993aa36721" address="unix:///run/containerd/s/e46c76bbd3d2713b0aa4aeaa2faa88a77aced9e33bb993084dd0a631a6560fa0" protocol=ttrpc version=3
Sep 9 00:28:28.101826 systemd[1]: Started cri-containerd-3a7fb8e1684790394c79b57484dc031b9941372a977fe5b7b96e5c993aa36721.scope - libcontainer container 3a7fb8e1684790394c79b57484dc031b9941372a977fe5b7b96e5c993aa36721.
Sep 9 00:28:28.157321 containerd[1569]: time="2025-09-09T00:28:28.157270469Z" level=info msg="StartContainer for \"3a7fb8e1684790394c79b57484dc031b9941372a977fe5b7b96e5c993aa36721\" returns successfully"
Sep 9 00:28:28.159416 containerd[1569]: time="2025-09-09T00:28:28.159374421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 9 00:28:29.435895 systemd[1]: Started sshd@14-10.0.0.40:22-10.0.0.1:59456.service - OpenSSH per-connection server daemon (10.0.0.1:59456).
Sep 9 00:28:29.543956 sshd[5663]: Accepted publickey for core from 10.0.0.1 port 59456 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:29.546408 sshd-session[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:29.552932 systemd-logind[1517]: New session 15 of user core.
Sep 9 00:28:29.561856 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 00:28:29.850652 sshd[5666]: Connection closed by 10.0.0.1 port 59456
Sep 9 00:28:29.851004 sshd-session[5663]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:29.857352 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit.
Sep 9 00:28:29.857789 systemd[1]: sshd@14-10.0.0.40:22-10.0.0.1:59456.service: Deactivated successfully.
Sep 9 00:28:29.860652 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 00:28:29.862260 systemd-logind[1517]: Removed session 15.
Sep 9 00:28:29.940067 containerd[1569]: time="2025-09-09T00:28:29.939970744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:28:29.941115 containerd[1569]: time="2025-09-09T00:28:29.941077258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 9 00:28:29.943328 containerd[1569]: time="2025-09-09T00:28:29.943258515Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:28:29.945417 containerd[1569]: time="2025-09-09T00:28:29.945361125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:28:29.945953 containerd[1569]: time="2025-09-09T00:28:29.945917757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.786503031s"
Sep 9 00:28:29.946005 containerd[1569]: time="2025-09-09T00:28:29.945956661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 9 00:28:29.951859 containerd[1569]: time="2025-09-09T00:28:29.951780061Z" level=info msg="CreateContainer within sandbox \"28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 00:28:29.965475 containerd[1569]: time="2025-09-09T00:28:29.965415875Z" level=info msg="Container bec48225a51e7e8b4480d654455417a73930b094ef8c7bc294a8a31a839d1a11: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:28:29.979373 containerd[1569]: time="2025-09-09T00:28:29.979308033Z" level=info msg="CreateContainer within sandbox \"28d736cd61fede0e5c8c3fcea9b02ce03cea4e00abb0219a2f3f56fed62c228c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bec48225a51e7e8b4480d654455417a73930b094ef8c7bc294a8a31a839d1a11\""
Sep 9 00:28:29.980236 containerd[1569]: time="2025-09-09T00:28:29.980186376Z" level=info msg="StartContainer for \"bec48225a51e7e8b4480d654455417a73930b094ef8c7bc294a8a31a839d1a11\""
Sep 9 00:28:29.982070 containerd[1569]: time="2025-09-09T00:28:29.982043189Z" level=info msg="connecting to shim bec48225a51e7e8b4480d654455417a73930b094ef8c7bc294a8a31a839d1a11" address="unix:///run/containerd/s/e46c76bbd3d2713b0aa4aeaa2faa88a77aced9e33bb993084dd0a631a6560fa0" protocol=ttrpc version=3
Sep 9 00:28:30.017021 systemd[1]: Started cri-containerd-bec48225a51e7e8b4480d654455417a73930b094ef8c7bc294a8a31a839d1a11.scope - libcontainer container bec48225a51e7e8b4480d654455417a73930b094ef8c7bc294a8a31a839d1a11.
Sep 9 00:28:30.071538 containerd[1569]: time="2025-09-09T00:28:30.071445594Z" level=info msg="StartContainer for \"bec48225a51e7e8b4480d654455417a73930b094ef8c7bc294a8a31a839d1a11\" returns successfully"
Sep 9 00:28:30.651260 kubelet[2764]: I0909 00:28:30.651203 2764 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 00:28:30.652428 kubelet[2764]: I0909 00:28:30.652390 2764 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 00:28:34.870262 systemd[1]: Started sshd@15-10.0.0.40:22-10.0.0.1:41668.service - OpenSSH per-connection server daemon (10.0.0.1:41668).
Sep 9 00:28:34.960374 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 41668 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:34.962245 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:34.966978 systemd-logind[1517]: New session 16 of user core.
Sep 9 00:28:34.975669 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 00:28:35.221579 sshd[5732]: Connection closed by 10.0.0.1 port 41668
Sep 9 00:28:35.222674 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:35.227339 systemd[1]: sshd@15-10.0.0.40:22-10.0.0.1:41668.service: Deactivated successfully.
Sep 9 00:28:35.229469 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 00:28:35.230358 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit.
Sep 9 00:28:35.232221 systemd-logind[1517]: Removed session 16.
Sep 9 00:28:40.246450 systemd[1]: Started sshd@16-10.0.0.40:22-10.0.0.1:39448.service - OpenSSH per-connection server daemon (10.0.0.1:39448).
Sep 9 00:28:40.308128 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 39448 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:40.309882 sshd-session[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:40.314500 systemd-logind[1517]: New session 17 of user core.
Sep 9 00:28:40.318645 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 00:28:40.452578 sshd[5748]: Connection closed by 10.0.0.1 port 39448
Sep 9 00:28:40.452975 sshd-session[5745]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:40.458121 systemd[1]: sshd@16-10.0.0.40:22-10.0.0.1:39448.service: Deactivated successfully.
Sep 9 00:28:40.460609 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 00:28:40.461566 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit.
Sep 9 00:28:40.463249 systemd-logind[1517]: Removed session 17.
Sep 9 00:28:45.469572 systemd[1]: Started sshd@17-10.0.0.40:22-10.0.0.1:39458.service - OpenSSH per-connection server daemon (10.0.0.1:39458).
Sep 9 00:28:45.534655 sshd[5763]: Accepted publickey for core from 10.0.0.1 port 39458 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:45.536459 sshd-session[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:45.541096 systemd-logind[1517]: New session 18 of user core.
Sep 9 00:28:45.549671 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 00:28:45.755169 sshd[5766]: Connection closed by 10.0.0.1 port 39458
Sep 9 00:28:45.755574 sshd-session[5763]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:45.760894 systemd[1]: sshd@17-10.0.0.40:22-10.0.0.1:39458.service: Deactivated successfully.
Sep 9 00:28:45.763336 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:28:45.764142 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:28:45.765553 systemd-logind[1517]: Removed session 18.
Sep 9 00:28:48.205886 containerd[1569]: time="2025-09-09T00:28:48.205839817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db90649040f0a1de878925bdf86c21c9ce2e9ebdd222280804268bbfa3abdf98\" id:\"9ad06400430ae5f44c2750bcf00ccb4e6e4d82cc970fecaaf71b4e4c7823440f\" pid:5790 exited_at:{seconds:1757377728 nanos:205473325}"
Sep 9 00:28:50.770466 systemd[1]: Started sshd@18-10.0.0.40:22-10.0.0.1:37720.service - OpenSSH per-connection server daemon (10.0.0.1:37720).
Sep 9 00:28:50.838681 sshd[5801]: Accepted publickey for core from 10.0.0.1 port 37720 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:50.840714 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:50.846851 systemd-logind[1517]: New session 19 of user core.
Sep 9 00:28:50.856778 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 00:28:51.005102 sshd[5804]: Connection closed by 10.0.0.1 port 37720
Sep 9 00:28:51.005919 sshd-session[5801]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:51.018410 systemd[1]: sshd@18-10.0.0.40:22-10.0.0.1:37720.service: Deactivated successfully.
Sep 9 00:28:51.021274 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:28:51.024100 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:28:51.028317 systemd[1]: Started sshd@19-10.0.0.40:22-10.0.0.1:37724.service - OpenSSH per-connection server daemon (10.0.0.1:37724).
Sep 9 00:28:51.029390 systemd-logind[1517]: Removed session 19.
Sep 9 00:28:51.104525 sshd[5817]: Accepted publickey for core from 10.0.0.1 port 37724 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:51.106479 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:51.112704 systemd-logind[1517]: New session 20 of user core.
Sep 9 00:28:51.130796 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 00:28:51.574129 sshd[5820]: Connection closed by 10.0.0.1 port 37724
Sep 9 00:28:51.575875 sshd-session[5817]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:51.585895 systemd[1]: sshd@19-10.0.0.40:22-10.0.0.1:37724.service: Deactivated successfully.
Sep 9 00:28:51.588347 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:28:51.589539 systemd-logind[1517]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:28:51.593648 systemd[1]: Started sshd@20-10.0.0.40:22-10.0.0.1:37736.service - OpenSSH per-connection server daemon (10.0.0.1:37736).
Sep 9 00:28:51.594598 systemd-logind[1517]: Removed session 20.
Sep 9 00:28:51.685240 sshd[5832]: Accepted publickey for core from 10.0.0.1 port 37736 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:51.687827 sshd-session[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:51.694207 systemd-logind[1517]: New session 21 of user core.
Sep 9 00:28:51.709906 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 00:28:52.123481 containerd[1569]: time="2025-09-09T00:28:52.123417433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"035d1a4c02090aaed4acba8a8f374954bb45be928d4083f3de9a68e66bf043f3\" id:\"cbc5da56b3dc966ef371c4dd2cd7c1e4fcaa1c0451aac40688c20d67a73ffa65\" pid:5857 exited_at:{seconds:1757377732 nanos:122962015}"
Sep 9 00:28:52.425497 sshd[5835]: Connection closed by 10.0.0.1 port 37736
Sep 9 00:28:52.426745 sshd-session[5832]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:52.437060 systemd[1]: sshd@20-10.0.0.40:22-10.0.0.1:37736.service: Deactivated successfully.
Sep 9 00:28:52.439406 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 00:28:52.440911 systemd-logind[1517]: Session 21 logged out. Waiting for processes to exit.
Sep 9 00:28:52.444446 systemd[1]: Started sshd@21-10.0.0.40:22-10.0.0.1:37752.service - OpenSSH per-connection server daemon (10.0.0.1:37752).
Sep 9 00:28:52.445404 systemd-logind[1517]: Removed session 21.
Sep 9 00:28:52.514672 sshd[5878]: Accepted publickey for core from 10.0.0.1 port 37752 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:52.517429 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:52.524902 systemd-logind[1517]: New session 22 of user core.
Sep 9 00:28:52.532899 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 00:28:53.101158 sshd[5881]: Connection closed by 10.0.0.1 port 37752
Sep 9 00:28:53.101585 sshd-session[5878]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:53.115251 systemd[1]: sshd@21-10.0.0.40:22-10.0.0.1:37752.service: Deactivated successfully.
Sep 9 00:28:53.117968 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 00:28:53.119354 systemd-logind[1517]: Session 22 logged out. Waiting for processes to exit.
Sep 9 00:28:53.122734 systemd[1]: Started sshd@22-10.0.0.40:22-10.0.0.1:37756.service - OpenSSH per-connection server daemon (10.0.0.1:37756).
Sep 9 00:28:53.124074 systemd-logind[1517]: Removed session 22.
Sep 9 00:28:53.185034 sshd[5893]: Accepted publickey for core from 10.0.0.1 port 37756 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:53.186994 sshd-session[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:53.192581 systemd-logind[1517]: New session 23 of user core.
Sep 9 00:28:53.203832 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 00:28:53.421787 sshd[5896]: Connection closed by 10.0.0.1 port 37756
Sep 9 00:28:53.422143 sshd-session[5893]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:53.429437 systemd-logind[1517]: Session 23 logged out. Waiting for processes to exit.
Sep 9 00:28:53.430737 systemd[1]: sshd@22-10.0.0.40:22-10.0.0.1:37756.service: Deactivated successfully.
Sep 9 00:28:53.433955 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 00:28:53.436523 systemd-logind[1517]: Removed session 23.
Sep 9 00:28:56.249680 containerd[1569]: time="2025-09-09T00:28:56.249615217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dec7ae1d349921eb51135e6e8c94d558da1dfc2156c520a7cd576cf2f0b19afd\" id:\"25bca2a5d317f41c1e5e1780f1e798ab925d6c90042309bb676a450f63851550\" pid:5923 exited_at:{seconds:1757377736 nanos:249169587}"
Sep 9 00:28:56.374644 kubelet[2764]: I0909 00:28:56.374572 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t4gtb" podStartSLOduration=75.058175523 podStartE2EDuration="1m40.374554131s" podCreationTimestamp="2025-09-09 00:27:16 +0000 UTC" firstStartedPulling="2025-09-09 00:28:04.630324266 +0000 UTC m=+69.221382766" lastFinishedPulling="2025-09-09 00:28:29.946702874 +0000 UTC m=+94.537761374" observedRunningTime="2025-09-09 00:28:30.189810159 +0000 UTC m=+94.780868689" watchObservedRunningTime="2025-09-09 00:28:56.374554131 +0000 UTC m=+120.965612631"
Sep 9 00:28:58.438367 systemd[1]: Started sshd@23-10.0.0.40:22-10.0.0.1:37764.service - OpenSSH per-connection server daemon (10.0.0.1:37764).
Sep 9 00:28:58.497897 sshd[5938]: Accepted publickey for core from 10.0.0.1 port 37764 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:28:58.500012 sshd-session[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:28:58.504972 systemd-logind[1517]: New session 24 of user core.
Sep 9 00:28:58.514668 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 00:28:58.644926 sshd[5941]: Connection closed by 10.0.0.1 port 37764
Sep 9 00:28:58.645127 sshd-session[5938]: pam_unix(sshd:session): session closed for user core
Sep 9 00:28:58.651745 systemd[1]: sshd@23-10.0.0.40:22-10.0.0.1:37764.service: Deactivated successfully.
Sep 9 00:28:58.654849 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 00:28:58.656748 systemd-logind[1517]: Session 24 logged out. Waiting for processes to exit.
Sep 9 00:28:58.658584 systemd-logind[1517]: Removed session 24.
Sep 9 00:28:59.422497 containerd[1569]: time="2025-09-09T00:28:59.422445956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db90649040f0a1de878925bdf86c21c9ce2e9ebdd222280804268bbfa3abdf98\" id:\"87c658695c1e9251ef1f246a7f28d9edf319a142dd101c82a24c637b6683faaa\" pid:5965 exited_at:{seconds:1757377739 nanos:421943128}"
Sep 9 00:29:03.659924 systemd[1]: Started sshd@24-10.0.0.40:22-10.0.0.1:33154.service - OpenSSH per-connection server daemon (10.0.0.1:33154).
Sep 9 00:29:03.722013 sshd[5979]: Accepted publickey for core from 10.0.0.1 port 33154 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:29:03.723938 sshd-session[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:29:03.728345 systemd-logind[1517]: New session 25 of user core.
Sep 9 00:29:03.735651 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 00:29:03.850698 sshd[5982]: Connection closed by 10.0.0.1 port 33154
Sep 9 00:29:03.851162 sshd-session[5979]: pam_unix(sshd:session): session closed for user core
Sep 9 00:29:03.855714 systemd[1]: sshd@24-10.0.0.40:22-10.0.0.1:33154.service: Deactivated successfully.
Sep 9 00:29:03.858220 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 00:29:03.859246 systemd-logind[1517]: Session 25 logged out. Waiting for processes to exit.
Sep 9 00:29:03.860475 systemd-logind[1517]: Removed session 25.
Sep 9 00:29:04.526387 kubelet[2764]: E0909 00:29:04.526342 2764 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:29:08.865222 systemd[1]: Started sshd@25-10.0.0.40:22-10.0.0.1:33162.service - OpenSSH per-connection server daemon (10.0.0.1:33162).
Sep 9 00:29:08.944548 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 33162 ssh2: RSA SHA256:bPnLNrqsOVznMenI9efnEoSwwVCualUnx9uITn7hqbA
Sep 9 00:29:08.946610 sshd-session[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:29:08.952776 systemd-logind[1517]: New session 26 of user core.
Sep 9 00:29:08.963765 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 00:29:09.339914 sshd[5999]: Connection closed by 10.0.0.1 port 33162
Sep 9 00:29:09.340494 sshd-session[5996]: pam_unix(sshd:session): session closed for user core
Sep 9 00:29:09.347354 systemd[1]: sshd@25-10.0.0.40:22-10.0.0.1:33162.service: Deactivated successfully.
Sep 9 00:29:09.350475 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 00:29:09.353344 systemd-logind[1517]: Session 26 logged out. Waiting for processes to exit.
Sep 9 00:29:09.355145 systemd-logind[1517]: Removed session 26.