Sep 16 04:53:33.900361 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025
Sep 16 04:53:33.900385 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:53:33.900398 kernel: BIOS-provided physical RAM map:
Sep 16 04:53:33.900405 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 16 04:53:33.900411 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 16 04:53:33.900418 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 16 04:53:33.900426 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 16 04:53:33.900433 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 16 04:53:33.900442 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 16 04:53:33.900449 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 16 04:53:33.900456 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 16 04:53:33.900465 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 16 04:53:33.900485 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 16 04:53:33.900492 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 16 04:53:33.900501 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 16 04:53:33.900516 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 16 04:53:33.900542 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 16 04:53:33.900561 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 16 04:53:33.900583 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 16 04:53:33.900593 kernel: NX (Execute Disable) protection: active
Sep 16 04:53:33.900600 kernel: APIC: Static calls initialized
Sep 16 04:53:33.900607 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Sep 16 04:53:33.900614 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Sep 16 04:53:33.900621 kernel: extended physical RAM map:
Sep 16 04:53:33.900629 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 16 04:53:33.900636 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 16 04:53:33.900643 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 16 04:53:33.900661 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 16 04:53:33.900678 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Sep 16 04:53:33.900694 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Sep 16 04:53:33.900701 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Sep 16 04:53:33.900709 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Sep 16 04:53:33.900716 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Sep 16 04:53:33.900723 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 16 04:53:33.900730 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 16 04:53:33.900737 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 16 04:53:33.900744 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 16 04:53:33.900752 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 16 04:53:33.900762 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 16 04:53:33.900769 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 16 04:53:33.900780 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 16 04:53:33.900802 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 16 04:53:33.900820 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 16 04:53:33.900827 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 16 04:53:33.900838 kernel: efi: EFI v2.7 by EDK II
Sep 16 04:53:33.900845 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 16 04:53:33.900853 kernel: random: crng init done
Sep 16 04:53:33.900860 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 16 04:53:33.900868 kernel: secureboot: Secure boot enabled
Sep 16 04:53:33.900875 kernel: SMBIOS 2.8 present.
Sep 16 04:53:33.900882 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 16 04:53:33.900890 kernel: DMI: Memory slots populated: 1/1
Sep 16 04:53:33.900897 kernel: Hypervisor detected: KVM
Sep 16 04:53:33.900905 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 16 04:53:33.900912 kernel: kvm-clock: using sched offset of 6202770653 cycles
Sep 16 04:53:33.900923 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 16 04:53:33.900937 kernel: tsc: Detected 2794.748 MHz processor
Sep 16 04:53:33.900946 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 16 04:53:33.900953 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 16 04:53:33.900961 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 16 04:53:33.900969 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 16 04:53:33.900980 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 16 04:53:33.900990 kernel: Using GB pages for direct mapping
Sep 16 04:53:33.900999 kernel: ACPI: Early table checksum verification disabled
Sep 16 04:53:33.901009 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 16 04:53:33.901017 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 16 04:53:33.901025 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:33.901033 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:33.901040 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 16 04:53:33.901048 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:33.901056 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:33.901063 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:33.901071 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 16 04:53:33.901081 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 16 04:53:33.901089 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 16 04:53:33.901096 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 16 04:53:33.901104 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 16 04:53:33.901112 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 16 04:53:33.901119 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 16 04:53:33.901127 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 16 04:53:33.901134 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 16 04:53:33.901142 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 16 04:53:33.901152 kernel: No NUMA configuration found
Sep 16 04:53:33.901160 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 16 04:53:33.901167 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 16 04:53:33.901175 kernel: Zone ranges:
Sep 16 04:53:33.901183 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 16 04:53:33.901190 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 16 04:53:33.901198 kernel: Normal empty
Sep 16 04:53:33.901205 kernel: Device empty
Sep 16 04:53:33.901213 kernel: Movable zone start for each node
Sep 16 04:53:33.901222 kernel: Early memory node ranges
Sep 16 04:53:33.901230 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 16 04:53:33.901238 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 16 04:53:33.901245 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 16 04:53:33.901253 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 16 04:53:33.901260 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 16 04:53:33.901268 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 16 04:53:33.901275 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 16 04:53:33.901283 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 16 04:53:33.901293 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 16 04:53:33.901301 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 16 04:53:33.901308 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 16 04:53:33.901316 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 16 04:53:33.901324 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 16 04:53:33.901331 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 16 04:53:33.901339 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 16 04:53:33.901346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 16 04:53:33.901354 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 16 04:53:33.901366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 16 04:53:33.901374 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 16 04:53:33.901381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 16 04:53:33.901389 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 16 04:53:33.901396 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 16 04:53:33.901404 kernel: TSC deadline timer available
Sep 16 04:53:33.901411 kernel: CPU topo: Max. logical packages: 1
Sep 16 04:53:33.901419 kernel: CPU topo: Max. logical dies: 1
Sep 16 04:53:33.901426 kernel: CPU topo: Max. dies per package: 1
Sep 16 04:53:33.901443 kernel: CPU topo: Max. threads per core: 1
Sep 16 04:53:33.901451 kernel: CPU topo: Num. cores per package: 4
Sep 16 04:53:33.901459 kernel: CPU topo: Num. threads per package: 4
Sep 16 04:53:33.901469 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 16 04:53:33.901479 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 16 04:53:33.901487 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 16 04:53:33.901495 kernel: kvm-guest: setup PV sched yield
Sep 16 04:53:33.901503 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 16 04:53:33.901513 kernel: Booting paravirtualized kernel on KVM
Sep 16 04:53:33.901521 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 16 04:53:33.901529 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 16 04:53:33.901537 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 16 04:53:33.901545 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 16 04:53:33.901553 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 16 04:53:33.901560 kernel: kvm-guest: PV spinlocks enabled
Sep 16 04:53:33.901568 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 16 04:53:33.901577 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:53:33.901588 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 16 04:53:33.901596 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 16 04:53:33.901604 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 16 04:53:33.901612 kernel: Fallback order for Node 0: 0
Sep 16 04:53:33.901620 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 16 04:53:33.901628 kernel: Policy zone: DMA32
Sep 16 04:53:33.901635 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 16 04:53:33.901643 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 16 04:53:33.901654 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 16 04:53:33.901662 kernel: ftrace: allocated 157 pages with 5 groups
Sep 16 04:53:33.901670 kernel: Dynamic Preempt: voluntary
Sep 16 04:53:33.901677 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 16 04:53:33.901686 kernel: rcu: RCU event tracing is enabled.
Sep 16 04:53:33.901694 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 16 04:53:33.901702 kernel: Trampoline variant of Tasks RCU enabled.
Sep 16 04:53:33.901710 kernel: Rude variant of Tasks RCU enabled.
Sep 16 04:53:33.901718 kernel: Tracing variant of Tasks RCU enabled.
Sep 16 04:53:33.901726 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 16 04:53:33.901737 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 16 04:53:33.901745 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:53:33.901753 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:53:33.901764 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 16 04:53:33.901772 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 16 04:53:33.901780 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 16 04:53:33.901801 kernel: Console: colour dummy device 80x25
Sep 16 04:53:33.901809 kernel: printk: legacy console [ttyS0] enabled
Sep 16 04:53:33.901817 kernel: ACPI: Core revision 20240827
Sep 16 04:53:33.901828 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 16 04:53:33.901836 kernel: APIC: Switch to symmetric I/O mode setup
Sep 16 04:53:33.901844 kernel: x2apic enabled
Sep 16 04:53:33.901852 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 16 04:53:33.901860 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 16 04:53:33.901868 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 16 04:53:33.901876 kernel: kvm-guest: setup PV IPIs
Sep 16 04:53:33.901883 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 16 04:53:33.901891 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 16 04:53:33.901902 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 16 04:53:33.901910 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 16 04:53:33.901918 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 16 04:53:33.901926 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 16 04:53:33.901943 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 16 04:53:33.901951 kernel: Spectre V2 : Mitigation: Retpolines
Sep 16 04:53:33.901959 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 16 04:53:33.901967 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 16 04:53:33.901975 kernel: active return thunk: retbleed_return_thunk
Sep 16 04:53:33.901986 kernel: RETBleed: Mitigation: untrained return thunk
Sep 16 04:53:33.901994 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 16 04:53:33.902002 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 16 04:53:33.902010 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 16 04:53:33.902018 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 16 04:53:33.902026 kernel: active return thunk: srso_return_thunk
Sep 16 04:53:33.902038 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 16 04:53:33.902046 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 16 04:53:33.902062 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 16 04:53:33.902076 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 16 04:53:33.902093 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 16 04:53:33.902104 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 16 04:53:33.902112 kernel: Freeing SMP alternatives memory: 32K
Sep 16 04:53:33.902119 kernel: pid_max: default: 32768 minimum: 301
Sep 16 04:53:33.902127 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 16 04:53:33.902135 kernel: landlock: Up and running.
Sep 16 04:53:33.902143 kernel: SELinux: Initializing.
Sep 16 04:53:33.902154 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 16 04:53:33.902162 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 16 04:53:33.902170 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 16 04:53:33.902178 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 16 04:53:33.902186 kernel: ... version: 0
Sep 16 04:53:33.902196 kernel: ... bit width: 48
Sep 16 04:53:33.902204 kernel: ... generic registers: 6
Sep 16 04:53:33.902212 kernel: ... value mask: 0000ffffffffffff
Sep 16 04:53:33.902220 kernel: ... max period: 00007fffffffffff
Sep 16 04:53:33.902231 kernel: ... fixed-purpose events: 0
Sep 16 04:53:33.902239 kernel: ... event mask: 000000000000003f
Sep 16 04:53:33.902247 kernel: signal: max sigframe size: 1776
Sep 16 04:53:33.902254 kernel: rcu: Hierarchical SRCU implementation.
Sep 16 04:53:33.902263 kernel: rcu: Max phase no-delay instances is 400.
Sep 16 04:53:33.902271 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 16 04:53:33.902279 kernel: smp: Bringing up secondary CPUs ...
Sep 16 04:53:33.902286 kernel: smpboot: x86: Booting SMP configuration:
Sep 16 04:53:33.902294 kernel: .... node #0, CPUs: #1 #2 #3
Sep 16 04:53:33.902305 kernel: smp: Brought up 1 node, 4 CPUs
Sep 16 04:53:33.902313 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 16 04:53:33.902321 kernel: Memory: 2409224K/2552216K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 137064K reserved, 0K cma-reserved)
Sep 16 04:53:33.902329 kernel: devtmpfs: initialized
Sep 16 04:53:33.902337 kernel: x86/mm: Memory block size: 128MB
Sep 16 04:53:33.902345 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 16 04:53:33.902353 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 16 04:53:33.902361 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 16 04:53:33.902369 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 16 04:53:33.902390 kernel: pinctrl core: initialized pinctrl subsystem
Sep 16 04:53:33.902398 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 16 04:53:33.902414 kernel: audit: initializing netlink subsys (disabled)
Sep 16 04:53:33.902423 kernel: audit: type=2000 audit(1757998412.125:1): state=initialized audit_enabled=0 res=1
Sep 16 04:53:33.902431 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 16 04:53:33.902439 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 16 04:53:33.902447 kernel: cpuidle: using governor menu
Sep 16 04:53:33.902455 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 16 04:53:33.902463 kernel: dca service started, version 1.12.1
Sep 16 04:53:33.902475 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 16 04:53:33.902483 kernel: PCI: Using configuration type 1 for base access
Sep 16 04:53:33.902491 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 16 04:53:33.902499 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 16 04:53:33.902507 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 16 04:53:33.902515 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 16 04:53:33.902523 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 16 04:53:33.902531 kernel: ACPI: Added _OSI(Module Device)
Sep 16 04:53:33.902539 kernel: ACPI: Added _OSI(Processor Device)
Sep 16 04:53:33.902549 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 16 04:53:33.902557 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 16 04:53:33.902565 kernel: ACPI: Interpreter enabled
Sep 16 04:53:33.902573 kernel: ACPI: PM: (supports S0 S5)
Sep 16 04:53:33.902581 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 16 04:53:33.902589 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 16 04:53:33.902597 kernel: PCI: Using E820 reservations for host bridge windows
Sep 16 04:53:33.902605 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 16 04:53:33.902613 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 16 04:53:33.902858 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 16 04:53:33.902998 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 16 04:53:33.903119 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 16 04:53:33.903129 kernel: PCI host bridge to bus 0000:00
Sep 16 04:53:33.903264 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 16 04:53:33.903376 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 16 04:53:33.903493 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 16 04:53:33.903602 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 16 04:53:33.903712 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 16 04:53:33.903840 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 16 04:53:33.904028 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 16 04:53:33.904249 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 16 04:53:33.904398 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 16 04:53:33.904531 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 16 04:53:33.904659 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 16 04:53:33.904778 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 16 04:53:33.904919 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 16 04:53:33.905069 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 16 04:53:33.905207 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 16 04:53:33.905401 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 16 04:53:33.905527 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 16 04:53:33.905664 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 16 04:53:33.905810 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 16 04:53:33.905961 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 16 04:53:33.906098 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 16 04:53:33.906237 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 16 04:53:33.906366 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 16 04:53:33.906485 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 16 04:53:33.906604 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 16 04:53:33.906732 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 16 04:53:33.906907 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 16 04:53:33.907057 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 16 04:53:33.907198 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 16 04:53:33.907326 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 16 04:53:33.907454 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 16 04:53:33.907593 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 16 04:53:33.907715 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 16 04:53:33.907725 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 16 04:53:33.907733 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 16 04:53:33.907742 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 16 04:53:33.907754 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 16 04:53:33.907762 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 16 04:53:33.907770 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 16 04:53:33.907778 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 16 04:53:33.907800 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 16 04:53:33.907819 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 16 04:53:33.907827 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 16 04:53:33.907835 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 16 04:53:33.907843 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 16 04:53:33.907854 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 16 04:53:33.907862 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 16 04:53:33.907870 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 16 04:53:33.907878 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 16 04:53:33.907886 kernel: iommu: Default domain type: Translated
Sep 16 04:53:33.907894 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 16 04:53:33.907902 kernel: efivars: Registered efivars operations
Sep 16 04:53:33.907909 kernel: PCI: Using ACPI for IRQ routing
Sep 16 04:53:33.907917 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 16 04:53:33.907927 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 16 04:53:33.907943 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff]
Sep 16 04:53:33.907951 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff]
Sep 16 04:53:33.907959 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 16 04:53:33.907967 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 16 04:53:33.908092 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 16 04:53:33.908213 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 16 04:53:33.908332 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 16 04:53:33.908342 kernel: vgaarb: loaded
Sep 16 04:53:33.908354 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 16 04:53:33.908362 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 16 04:53:33.908370 kernel: clocksource: Switched to clocksource kvm-clock
Sep 16 04:53:33.908378 kernel: VFS: Disk quotas dquot_6.6.0
Sep 16 04:53:33.908386 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 16 04:53:33.908394 kernel: pnp: PnP ACPI init
Sep 16 04:53:33.908604 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 16 04:53:33.908615 kernel: pnp: PnP ACPI: found 6 devices
Sep 16 04:53:33.908628 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 16 04:53:33.908636 kernel: NET: Registered PF_INET protocol family
Sep 16 04:53:33.908644 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 16 04:53:33.908652 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 16 04:53:33.908660 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 16 04:53:33.908668 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 16 04:53:33.908675 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 16 04:53:33.908683 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 16 04:53:33.908691 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 16 04:53:33.908702 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 16 04:53:33.908710 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 16 04:53:33.908717 kernel: NET: Registered PF_XDP protocol family
Sep 16 04:53:33.908863 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 16 04:53:33.908996 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 16 04:53:33.909108 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 16 04:53:33.909219 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 16 04:53:33.909330 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 16 04:53:33.909446 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 16 04:53:33.909570 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 16 04:53:33.909683 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 16 04:53:33.909693 kernel: PCI: CLS 0 bytes, default 64
Sep 16 04:53:33.909701 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 16 04:53:33.909709 kernel: Initialise system trusted keyrings
Sep 16 04:53:33.909717 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 16 04:53:33.909725 kernel: Key type asymmetric registered
Sep 16 04:53:33.909737 kernel: Asymmetric key parser 'x509' registered
Sep 16 04:53:33.909760 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 16 04:53:33.909771 kernel: io scheduler mq-deadline registered
Sep 16 04:53:33.909780 kernel: io scheduler kyber registered
Sep 16 04:53:33.909800 kernel: io scheduler bfq registered
Sep 16 04:53:33.909819 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 16 04:53:33.909828 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 16 04:53:33.909836 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 16 04:53:33.909844 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 16 04:53:33.909856 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 16 04:53:33.909864 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 16 04:53:33.909873 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 16 04:53:33.909881 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 16 04:53:33.909889 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 16 04:53:33.910037 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 16 04:53:33.910049 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 16 04:53:33.910163 kernel: rtc_cmos 00:04: registered as rtc0
Sep 16 04:53:33.910282 kernel: rtc_cmos 00:04: setting system clock to 2025-09-16T04:53:33 UTC (1757998413)
Sep 16 04:53:33.910396 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 16 04:53:33.910406 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 16 04:53:33.910414 kernel: efifb: probing for efifb
Sep 16 04:53:33.910422 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 16 04:53:33.910431 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 16 04:53:33.910439 kernel: efifb: scrolling: redraw
Sep 16 04:53:33.910447 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 16 04:53:33.910456 kernel: Console: switching to colour frame buffer device 160x50
Sep 16 04:53:33.910467 kernel: fb0: EFI VGA frame buffer device
Sep 16 04:53:33.910478 kernel: pstore: Using crash dump compression: deflate
Sep 16 04:53:33.910486 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 16 04:53:33.910495 kernel: NET: Registered PF_INET6 protocol family
Sep 16 04:53:33.910503 kernel: Segment Routing with IPv6
Sep 16 04:53:33.910511 kernel: In-situ OAM (IOAM) with IPv6
Sep 16 04:53:33.910522 kernel: NET: Registered PF_PACKET protocol family
Sep 16 04:53:33.910530 kernel: Key type dns_resolver registered
Sep 16 04:53:33.910538 kernel: IPI shorthand broadcast: enabled
Sep 16 04:53:33.910546 kernel: sched_clock: Marking stable (3206002173, 142079086)->(3373422060, -25340801)
Sep 16 04:53:33.910554 kernel: registered taskstats version 1
Sep 16 04:53:33.910562 kernel: Loading compiled-in X.509 certificates
Sep 16 04:53:33.910571 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf'
Sep 16 04:53:33.910579 kernel: Demotion targets for Node 0: null
Sep 16 04:53:33.910587 kernel: Key type .fscrypt registered
Sep 16 04:53:33.910598 kernel: Key type fscrypt-provisioning registered
Sep 16 04:53:33.910606 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 16 04:53:33.910614 kernel: ima: Allocated hash algorithm: sha1
Sep 16 04:53:33.910622 kernel: ima: No architecture policies found
Sep 16 04:53:33.910630 kernel: clk: Disabling unused clocks
Sep 16 04:53:33.910638 kernel: Warning: unable to open an initial console.
Sep 16 04:53:33.910647 kernel: Freeing unused kernel image (initmem) memory: 54096K
Sep 16 04:53:33.910655 kernel: Write protecting the kernel read-only data: 24576k
Sep 16 04:53:33.910666 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 16 04:53:33.910674 kernel: Run /init as init process
Sep 16 04:53:33.910682 kernel: with arguments:
Sep 16 04:53:33.910690 kernel: /init
Sep 16 04:53:33.910698 kernel: with environment:
Sep 16 04:53:33.910706 kernel: HOME=/
Sep 16 04:53:33.910717 kernel: TERM=linux
Sep 16 04:53:33.910725 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 16 04:53:33.910734 systemd[1]: Successfully made /usr/ read-only.
Sep 16 04:53:33.910748 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:53:33.910758 systemd[1]: Detected virtualization kvm.
Sep 16 04:53:33.910766 systemd[1]: Detected architecture x86-64.
Sep 16 04:53:33.910775 systemd[1]: Running in initrd.
Sep 16 04:53:33.910783 systemd[1]: No hostname configured, using default hostname.
Sep 16 04:53:33.910809 systemd[1]: Hostname set to .
Sep 16 04:53:33.910828 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 04:53:33.910843 systemd[1]: Queued start job for default target initrd.target.
Sep 16 04:53:33.910852 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:53:33.910861 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:53:33.910870 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 16 04:53:33.910879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:53:33.910888 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 16 04:53:33.910898 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 16 04:53:33.910910 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 16 04:53:33.910919 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 16 04:53:33.910928 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:53:33.910945 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:53:33.910954 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:53:33.910963 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:53:33.910972 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:53:33.910980 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:53:33.910991 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:53:33.911000 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:53:33.911009 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 16 04:53:33.911018 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 16 04:53:33.911027 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:53:33.911035 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:53:33.911044 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:53:33.911053 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:53:33.911061 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 16 04:53:33.911073 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:53:33.911081 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 16 04:53:33.911090 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 16 04:53:33.911099 systemd[1]: Starting systemd-fsck-usr.service...
Sep 16 04:53:33.911108 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:53:33.911117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:53:33.911126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:53:33.911134 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 16 04:53:33.911146 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:53:33.911155 systemd[1]: Finished systemd-fsck-usr.service.
Sep 16 04:53:33.911164 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 16 04:53:33.911205 systemd-journald[221]: Collecting audit messages is disabled.
Sep 16 04:53:33.911228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:53:33.911237 systemd-journald[221]: Journal started
Sep 16 04:53:33.911257 systemd-journald[221]: Runtime Journal (/run/log/journal/cc44cbd52dc54668b181da8d6acb6961) is 6M, max 48.2M, 42.2M free.
Sep 16 04:53:33.902521 systemd-modules-load[223]: Inserted module 'overlay'
Sep 16 04:53:33.914118 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:53:33.917659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:53:33.920489 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 16 04:53:33.923646 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:53:33.928025 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:53:33.960821 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 16 04:53:33.962394 systemd-modules-load[223]: Inserted module 'br_netfilter'
Sep 16 04:53:33.963329 kernel: Bridge firewalling registered
Sep 16 04:53:33.966027 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:53:33.967533 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:53:33.978366 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:53:33.978459 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 16 04:53:33.984653 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:53:33.987370 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:53:33.989826 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 16 04:53:33.991099 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:53:33.995124 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:53:34.019134 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:53:34.039021 systemd-resolved[262]: Positive Trust Anchors:
Sep 16 04:53:34.039040 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:53:34.039069 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:53:34.041658 systemd-resolved[262]: Defaulting to hostname 'linux'.
Sep 16 04:53:34.047487 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:53:34.048706 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:53:34.129865 kernel: SCSI subsystem initialized
Sep 16 04:53:34.139821 kernel: Loading iSCSI transport class v2.0-870.
Sep 16 04:53:34.149826 kernel: iscsi: registered transport (tcp)
Sep 16 04:53:34.174054 kernel: iscsi: registered transport (qla4xxx)
Sep 16 04:53:34.174133 kernel: QLogic iSCSI HBA Driver
Sep 16 04:53:34.201671 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:53:34.231915 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:53:34.233200 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:53:34.304182 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:53:34.307359 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 16 04:53:34.364836 kernel: raid6: avx2x4 gen() 29938 MB/s
Sep 16 04:53:34.381821 kernel: raid6: avx2x2 gen() 30981 MB/s
Sep 16 04:53:34.398861 kernel: raid6: avx2x1 gen() 24767 MB/s
Sep 16 04:53:34.398893 kernel: raid6: using algorithm avx2x2 gen() 30981 MB/s
Sep 16 04:53:34.416878 kernel: raid6: .... xor() 19762 MB/s, rmw enabled
Sep 16 04:53:34.416903 kernel: raid6: using avx2x2 recovery algorithm
Sep 16 04:53:34.439841 kernel: xor: automatically using best checksumming function avx
Sep 16 04:53:34.607841 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 16 04:53:34.616304 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:53:34.620114 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:53:34.660952 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 16 04:53:34.666639 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:53:34.670699 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 16 04:53:34.702651 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Sep 16 04:53:34.735136 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:53:34.738998 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:53:34.832471 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:53:35.123564 kernel: cryptd: max_cpu_qlen set to 1000
Sep 16 04:53:35.127806 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 16 04:53:35.130384 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 16 04:53:35.131825 kernel: AES CTR mode by8 optimization enabled
Sep 16 04:53:35.136113 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 16 04:53:35.144817 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 16 04:53:35.144842 kernel: GPT:9289727 != 19775487
Sep 16 04:53:35.144861 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 16 04:53:35.144881 kernel: GPT:9289727 != 19775487
Sep 16 04:53:35.144924 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 16 04:53:35.144955 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 16 04:53:35.137529 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:53:35.150221 kernel: libata version 3.00 loaded.
Sep 16 04:53:35.150242 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 16 04:53:35.137707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:53:35.145300 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:53:35.154222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:53:35.164108 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:53:35.183291 kernel: ahci 0000:00:1f.2: version 3.0
Sep 16 04:53:35.183597 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 16 04:53:35.187668 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 16 04:53:35.187967 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 16 04:53:35.188263 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 16 04:53:35.198817 kernel: scsi host0: ahci
Sep 16 04:53:35.199049 kernel: scsi host1: ahci
Sep 16 04:53:35.199829 kernel: scsi host2: ahci
Sep 16 04:53:35.200088 kernel: scsi host3: ahci
Sep 16 04:53:35.201810 kernel: scsi host4: ahci
Sep 16 04:53:35.202004 kernel: scsi host5: ahci
Sep 16 04:53:35.203663 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 16 04:53:35.210587 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Sep 16 04:53:35.210609 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Sep 16 04:53:35.210620 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Sep 16 04:53:35.210631 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Sep 16 04:53:35.210641 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Sep 16 04:53:35.210652 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Sep 16 04:53:35.233010 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 16 04:53:35.242496 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 16 04:53:35.250824 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 16 04:53:35.250921 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 16 04:53:35.254143 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 16 04:53:35.258389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:53:35.258472 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:53:35.261556 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:53:35.278812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:53:35.281442 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:53:35.290180 disk-uuid[635]: Primary Header is updated.
Sep 16 04:53:35.290180 disk-uuid[635]: Secondary Entries is updated.
Sep 16 04:53:35.290180 disk-uuid[635]: Secondary Header is updated.
Sep 16 04:53:35.294883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 16 04:53:35.305718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:53:35.521152 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 16 04:53:35.521239 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 16 04:53:35.521252 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 16 04:53:35.521262 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 16 04:53:35.522825 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 16 04:53:35.522911 kernel: ata3.00: LPM support broken, forcing max_power
Sep 16 04:53:35.523937 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 16 04:53:35.523958 kernel: ata3.00: applying bridge limits
Sep 16 04:53:35.524820 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 16 04:53:35.525861 kernel: ata3.00: LPM support broken, forcing max_power
Sep 16 04:53:35.525954 kernel: ata3.00: configured for UDMA/100
Sep 16 04:53:35.526824 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 16 04:53:35.575372 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 16 04:53:35.575638 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 16 04:53:35.589815 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 16 04:53:35.879202 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:53:35.881254 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:53:35.896519 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:53:35.899061 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:53:35.902284 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 16 04:53:35.936261 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:53:36.302821 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 16 04:53:36.303704 disk-uuid[638]: The operation has completed successfully.
Sep 16 04:53:36.332669 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 16 04:53:36.332824 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 16 04:53:36.371979 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 16 04:53:36.398734 sh[670]: Success
Sep 16 04:53:36.417567 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 16 04:53:36.417651 kernel: device-mapper: uevent: version 1.0.3
Sep 16 04:53:36.417676 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 16 04:53:36.426823 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 16 04:53:36.462315 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 16 04:53:36.466483 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 16 04:53:36.492560 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 16 04:53:36.497623 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (682)
Sep 16 04:53:36.497649 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e
Sep 16 04:53:36.497660 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:53:36.503163 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 16 04:53:36.503185 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 16 04:53:36.504777 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 16 04:53:36.507206 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:53:36.509415 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 16 04:53:36.510344 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 16 04:53:36.513960 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 16 04:53:36.550936 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (714)
Sep 16 04:53:36.551001 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:53:36.552066 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:53:36.555386 kernel: BTRFS info (device vda6): turning on async discard
Sep 16 04:53:36.555423 kernel: BTRFS info (device vda6): enabling free space tree
Sep 16 04:53:36.560813 kernel: BTRFS info (device vda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:53:36.561895 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 16 04:53:36.564011 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 16 04:53:36.688064 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:53:36.693154 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:53:36.693956 ignition[758]: Ignition 2.22.0
Sep 16 04:53:36.693969 ignition[758]: Stage: fetch-offline
Sep 16 04:53:36.694007 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:36.694016 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:53:36.694110 ignition[758]: parsed url from cmdline: ""
Sep 16 04:53:36.694115 ignition[758]: no config URL provided
Sep 16 04:53:36.694122 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 04:53:36.694132 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Sep 16 04:53:36.694162 ignition[758]: op(1): [started] loading QEMU firmware config module
Sep 16 04:53:36.694168 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 16 04:53:36.702759 ignition[758]: op(1): [finished] loading QEMU firmware config module
Sep 16 04:53:36.742528 ignition[758]: parsing config with SHA512: 9e26d0aac69b727577bc1ec783425259be2c335cc108466f8f673ed3d950f08432468335b53b087fcc7d0e7c9c5651e223752ef46c51f16cce5fb7abe3bdde05
Sep 16 04:53:36.744162 systemd-networkd[857]: lo: Link UP
Sep 16 04:53:36.744175 systemd-networkd[857]: lo: Gained carrier
Sep 16 04:53:36.746070 systemd-networkd[857]: Enumeration completed
Sep 16 04:53:36.746539 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:53:36.746545 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:53:36.748274 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:53:36.748941 systemd-networkd[857]: eth0: Link UP
Sep 16 04:53:36.749078 systemd-networkd[857]: eth0: Gained carrier
Sep 16 04:53:36.749088 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:53:36.750694 systemd[1]: Reached target network.target - Network.
Sep 16 04:53:36.755479 ignition[758]: fetch-offline: fetch-offline passed
Sep 16 04:53:36.755040 unknown[758]: fetched base config from "system"
Sep 16 04:53:36.755548 ignition[758]: Ignition finished successfully
Sep 16 04:53:36.755048 unknown[758]: fetched user config from "qemu"
Sep 16 04:53:36.760965 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:53:36.762395 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 16 04:53:36.762858 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 16 04:53:36.763326 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 16 04:53:36.836732 ignition[866]: Ignition 2.22.0
Sep 16 04:53:36.836746 ignition[866]: Stage: kargs
Sep 16 04:53:36.836911 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:36.836922 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:53:36.837598 ignition[866]: kargs: kargs passed
Sep 16 04:53:36.837641 ignition[866]: Ignition finished successfully
Sep 16 04:53:36.841912 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 16 04:53:36.845031 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 16 04:53:36.910918 ignition[875]: Ignition 2.22.0
Sep 16 04:53:36.910932 ignition[875]: Stage: disks
Sep 16 04:53:36.911114 ignition[875]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:36.911125 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:53:36.912203 ignition[875]: disks: disks passed
Sep 16 04:53:36.912254 ignition[875]: Ignition finished successfully
Sep 16 04:53:36.915860 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 16 04:53:36.916160 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 16 04:53:36.916423 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 16 04:53:36.916730 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:53:36.917065 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:53:36.917383 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:53:36.918686 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 16 04:53:36.951379 systemd-fsck[886]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 16 04:53:36.958878 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 16 04:53:36.960033 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 16 04:53:37.080818 kernel: EXT4-fs (vda9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none.
Sep 16 04:53:37.081344 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 16 04:53:37.082036 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:53:37.085516 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:53:37.086421 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 16 04:53:37.088174 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 16 04:53:37.088217 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 16 04:53:37.088242 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:53:37.111635 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 16 04:53:37.115222 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 16 04:53:37.119920 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894)
Sep 16 04:53:37.119941 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:53:37.119953 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:53:37.121910 kernel: BTRFS info (device vda6): turning on async discard
Sep 16 04:53:37.121929 kernel: BTRFS info (device vda6): enabling free space tree
Sep 16 04:53:37.124240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:53:37.155761 initrd-setup-root[918]: cut: /sysroot/etc/passwd: No such file or directory
Sep 16 04:53:37.160818 initrd-setup-root[925]: cut: /sysroot/etc/group: No such file or directory
Sep 16 04:53:37.166413 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory
Sep 16 04:53:37.171178 initrd-setup-root[939]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 16 04:53:37.264338 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 16 04:53:37.266667 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 16 04:53:37.267419 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 16 04:53:37.293833 kernel: BTRFS info (device vda6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:53:37.315998 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 16 04:53:37.386415 ignition[1008]: INFO : Ignition 2.22.0
Sep 16 04:53:37.386415 ignition[1008]: INFO : Stage: mount
Sep 16 04:53:37.388474 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:37.388474 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:53:37.388474 ignition[1008]: INFO : mount: mount passed
Sep 16 04:53:37.388474 ignition[1008]: INFO : Ignition finished successfully
Sep 16 04:53:37.394559 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 16 04:53:37.396937 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 16 04:53:37.496406 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 16 04:53:37.498317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:53:37.525840 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1020)
Sep 16 04:53:37.525906 kernel: BTRFS info (device vda6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:53:37.527608 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:53:37.532201 kernel: BTRFS info (device vda6): turning on async discard
Sep 16 04:53:37.532235 kernel: BTRFS info (device vda6): enabling free space tree
Sep 16 04:53:37.534546 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:53:37.596375 ignition[1037]: INFO : Ignition 2.22.0
Sep 16 04:53:37.596375 ignition[1037]: INFO : Stage: files
Sep 16 04:53:37.598405 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:37.598405 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:53:37.598405 ignition[1037]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 04:53:37.601678 ignition[1037]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 04:53:37.601678 ignition[1037]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 04:53:37.606926 ignition[1037]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 04:53:37.609180 ignition[1037]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 04:53:37.611047 unknown[1037]: wrote ssh authorized keys file for user: core
Sep 16 04:53:37.612251 ignition[1037]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 04:53:37.614713 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 16 04:53:37.616689 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 16 04:53:37.663040 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 04:53:38.026382 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 16 04:53:38.026382 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 04:53:38.030154 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 04:53:38.030154 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:53:38.030154 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:53:38.030154 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:53:38.030154 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:53:38.030154 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:53:38.030154 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:53:38.042635 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:53:38.042635 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:53:38.042635 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:53:38.042635 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:53:38.042635 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:53:38.042635 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 16 04:53:38.336195 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 16 04:53:38.368938 systemd-networkd[857]: eth0: Gained IPv6LL
Sep 16 04:53:38.753964 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 16 04:53:38.753964 ignition[1037]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 16 04:53:38.757727 ignition[1037]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:53:38.843579 ignition[1037]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:53:38.843579 ignition[1037]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 16 04:53:38.843579 ignition[1037]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 16 04:53:38.843579 ignition[1037]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 16 04:53:38.850904 ignition[1037]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 16 04:53:38.850904 ignition[1037]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 16 04:53:38.850904 ignition[1037]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 16 04:53:38.864186 ignition[1037]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 16 04:53:38.867994 ignition[1037]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 16 04:53:38.869668 ignition[1037]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 16 04:53:38.869668 ignition[1037]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 04:53:38.872368 ignition[1037]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 04:53:38.872368 ignition[1037]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:53:38.872368 ignition[1037]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 04:53:38.872368 ignition[1037]: INFO : files: files passed
Sep 16 04:53:38.872368 ignition[1037]: INFO : Ignition finished successfully
Sep 16 04:53:38.876814 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 04:53:38.878127 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 04:53:38.880832 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 04:53:38.893091 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 04:53:38.893219 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 04:53:38.896960 initrd-setup-root-after-ignition[1066]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 16 04:53:38.901572 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:53:38.901572 initrd-setup-root-after-ignition[1068]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:53:38.906265 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 04:53:38.904277 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:53:38.906441 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 04:53:38.911348 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 04:53:38.962095 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 16 04:53:38.962269 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 16 04:53:38.964516 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 04:53:38.965495 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 04:53:38.967323 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 04:53:38.968248 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 04:53:39.005209 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:53:39.007716 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 04:53:39.033440 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:53:39.035612 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:53:39.035768 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 04:53:39.037878 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 04:53:39.038015 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:53:39.072551 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 04:53:39.072704 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 04:53:39.076057 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 04:53:39.076199 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:53:39.076546 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 04:53:39.076907 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:53:39.077343 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 04:53:39.077679 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:53:39.078170 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 04:53:39.078470 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 04:53:39.078769 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 04:53:39.079222 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 04:53:39.079340 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:53:39.092608 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:53:39.093131 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:53:39.093398 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 04:53:39.098422 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:53:39.099375 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 04:53:39.099487 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:53:39.102050 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 04:53:39.102164 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:53:39.104301 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 04:53:39.106113 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 04:53:39.110868 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:53:39.111030 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 04:53:39.113416 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 04:53:39.113704 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 04:53:39.113821 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:53:39.116607 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 04:53:39.116692 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:53:39.118194 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 04:53:39.118313 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:53:39.119894 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 04:53:39.119993 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 04:53:39.121228 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 04:53:39.123367 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 04:53:39.123521 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:53:39.137168 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 04:53:39.138831 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 04:53:39.139042 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:53:39.141120 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 04:53:39.141270 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:53:39.148021 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 04:53:39.148171 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 04:53:39.156770 ignition[1092]: INFO : Ignition 2.22.0
Sep 16 04:53:39.156770 ignition[1092]: INFO : Stage: umount
Sep 16 04:53:39.156770 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:53:39.156770 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 16 04:53:39.156770 ignition[1092]: INFO : umount: umount passed
Sep 16 04:53:39.156770 ignition[1092]: INFO : Ignition finished successfully
Sep 16 04:53:39.160123 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 04:53:39.160257 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 04:53:39.162526 systemd[1]: Stopped target network.target - Network.
Sep 16 04:53:39.163689 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 04:53:39.163758 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 04:53:39.164661 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 04:53:39.164711 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 04:53:39.165127 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 04:53:39.165177 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 04:53:39.165432 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 04:53:39.165473 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 04:53:39.165878 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 04:53:39.171303 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 04:53:39.175370 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 04:53:39.181682 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 04:53:39.181827 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 04:53:39.186585 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 04:53:39.186944 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 04:53:39.187094 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 04:53:39.191199 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 04:53:39.192220 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 04:53:39.194467 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 04:53:39.194534 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:53:39.195702 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 04:53:39.197379 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 04:53:39.197432 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:53:39.197880 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 04:53:39.197935 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:53:39.203114 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 04:53:39.203166 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:53:39.203257 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 04:53:39.203300 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:53:39.207249 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:53:39.209061 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 04:53:39.209125 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:53:39.221027 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 04:53:39.222161 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 04:53:39.224252 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 04:53:39.224428 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:53:39.228036 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 04:53:39.228101 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:53:39.228354 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 04:53:39.228395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:53:39.228639 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 04:53:39.228685 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:53:39.229486 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 04:53:39.229534 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:53:39.236914 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 04:53:39.236973 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:53:39.238940 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 04:53:39.241016 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 04:53:39.241073 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:53:39.245323 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 04:53:39.245370 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:53:39.250262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:53:39.250313 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:53:39.254679 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 16 04:53:39.254745 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 16 04:53:39.254826 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:53:39.268076 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 04:53:39.268197 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 04:53:39.755630 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 04:53:39.755819 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 04:53:39.756271 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 04:53:39.758361 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 04:53:39.758427 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 04:53:39.759516 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 04:53:39.787710 systemd[1]: Switching root.
Sep 16 04:53:39.829699 systemd-journald[221]: Journal stopped
Sep 16 04:53:41.799079 systemd-journald[221]: Received SIGTERM from PID 1 (systemd).
Sep 16 04:53:41.799168 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 04:53:41.799187 kernel: SELinux: policy capability open_perms=1
Sep 16 04:53:41.799203 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 04:53:41.799217 kernel: SELinux: policy capability always_check_network=0
Sep 16 04:53:41.799236 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 04:53:41.799251 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 04:53:41.799265 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 04:53:41.799282 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 04:53:41.799310 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 04:53:41.799325 kernel: audit: type=1403 audit(1757998420.746:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 04:53:41.799341 systemd[1]: Successfully loaded SELinux policy in 67.335ms.
Sep 16 04:53:41.799366 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.794ms.
Sep 16 04:53:41.799382 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:53:41.799401 systemd[1]: Detected virtualization kvm.
Sep 16 04:53:41.799416 systemd[1]: Detected architecture x86-64.
Sep 16 04:53:41.799431 systemd[1]: Detected first boot.
Sep 16 04:53:41.799446 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 04:53:41.799461 zram_generator::config[1138]: No configuration found.
Sep 16 04:53:41.799478 kernel: Guest personality initialized and is inactive
Sep 16 04:53:41.799494 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 16 04:53:41.799509 kernel: Initialized host personality
Sep 16 04:53:41.799527 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 04:53:41.799542 systemd[1]: Populated /etc with preset unit settings.
Sep 16 04:53:41.799559 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 04:53:41.799572 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 04:53:41.799590 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 04:53:41.799602 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:53:41.799614 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 04:53:41.799632 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 04:53:41.799649 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 04:53:41.799669 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 04:53:41.799685 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 04:53:41.799700 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 04:53:41.799727 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 04:53:41.799748 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 04:53:41.799763 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:53:41.799779 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:53:41.799855 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 04:53:41.799874 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 04:53:41.799889 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 04:53:41.799904 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:53:41.799918 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 16 04:53:41.799932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:53:41.799952 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:53:41.799966 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 04:53:41.799981 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 04:53:41.799998 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:53:41.800012 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 04:53:41.800027 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:53:41.800043 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:53:41.800058 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:53:41.800073 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:53:41.800089 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 04:53:41.800104 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 04:53:41.800119 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 04:53:41.800138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:53:41.800153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:53:41.800169 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:53:41.800184 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 04:53:41.800199 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 04:53:41.800215 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 04:53:41.800230 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 04:53:41.800246 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:41.800260 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 04:53:41.800279 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 04:53:41.800293 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 04:53:41.800309 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 04:53:41.800323 systemd[1]: Reached target machines.target - Containers.
Sep 16 04:53:41.800339 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 04:53:41.800355 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:53:41.800374 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:53:41.800391 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 04:53:41.800407 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:53:41.800427 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:53:41.800447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:53:41.800463 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 04:53:41.800478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:53:41.800494 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 16 04:53:41.800509 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 16 04:53:41.800523 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 16 04:53:41.800537 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 16 04:53:41.800556 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 16 04:53:41.800572 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:53:41.800587 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:53:41.800630 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:53:41.800647 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:53:41.800664 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 16 04:53:41.800679 kernel: fuse: init (API version 7.41)
Sep 16 04:53:41.800693 kernel: ACPI: bus type drm_connector registered
Sep 16 04:53:41.800719 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 16 04:53:41.800741 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:53:41.800765 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 16 04:53:41.800782 kernel: loop: module loaded
Sep 16 04:53:41.800818 systemd[1]: Stopped verity-setup.service.
Sep 16 04:53:41.800836 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:41.800852 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 16 04:53:41.800867 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 16 04:53:41.800881 systemd[1]: Mounted media.mount - External Media Directory.
Sep 16 04:53:41.800896 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 16 04:53:41.800911 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 16 04:53:41.800931 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 16 04:53:41.801000 systemd-journald[1216]: Collecting audit messages is disabled.
Sep 16 04:53:41.801035 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 16 04:53:41.801052 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:53:41.801068 systemd-journald[1216]: Journal started
Sep 16 04:53:41.801098 systemd-journald[1216]: Runtime Journal (/run/log/journal/cc44cbd52dc54668b181da8d6acb6961) is 6M, max 48.2M, 42.2M free.
Sep 16 04:53:41.510773 systemd[1]: Queued start job for default target multi-user.target.
Sep 16 04:53:41.537067 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 16 04:53:41.537616 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 16 04:53:41.804836 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:53:41.806084 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 16 04:53:41.806321 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 16 04:53:41.807915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:53:41.808169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:53:41.809613 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:53:41.809871 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:53:41.811279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:53:41.811502 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:53:41.813167 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 16 04:53:41.813412 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 16 04:53:41.814822 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:53:41.815045 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:53:41.816489 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:53:41.818039 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:53:41.819610 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 16 04:53:41.821182 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 16 04:53:41.836921 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:53:41.839814 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 16 04:53:41.842048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 16 04:53:41.843323 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 16 04:53:41.843354 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:53:41.845302 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 16 04:53:41.865911 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 16 04:53:41.867087 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:53:41.868677 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 16 04:53:41.871950 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 16 04:53:41.873267 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:53:41.874518 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 16 04:53:41.876025 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:53:41.877113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:53:41.881910 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 16 04:53:41.897099 systemd-journald[1216]: Time spent on flushing to /var/log/journal/cc44cbd52dc54668b181da8d6acb6961 is 22.416ms for 1043 entries.
Sep 16 04:53:41.897099 systemd-journald[1216]: System Journal (/var/log/journal/cc44cbd52dc54668b181da8d6acb6961) is 8M, max 195.6M, 187.6M free.
Sep 16 04:53:41.929320 systemd-journald[1216]: Received client request to flush runtime journal.
Sep 16 04:53:41.929373 kernel: loop0: detected capacity change from 0 to 128016
Sep 16 04:53:41.885011 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 04:53:41.888065 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:53:41.889776 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 16 04:53:41.892125 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 16 04:53:41.898550 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 16 04:53:41.900033 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 16 04:53:41.910423 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 16 04:53:41.914689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:53:41.932090 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 04:53:41.940434 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 04:53:41.943377 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:53:41.947822 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 04:53:41.956422 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 16 04:53:41.968255 kernel: loop1: detected capacity change from 0 to 221472
Sep 16 04:53:41.973596 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 16 04:53:41.973614 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 16 04:53:41.979303 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:53:42.006816 kernel: loop2: detected capacity change from 0 to 110984
Sep 16 04:53:42.034898 kernel: loop3: detected capacity change from 0 to 128016
Sep 16 04:53:42.047854 kernel: loop4: detected capacity change from 0 to 221472
Sep 16 04:53:42.065132 kernel: loop5: detected capacity change from 0 to 110984
Sep 16 04:53:42.074949 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 16 04:53:42.075527 (sd-merge)[1280]: Merged extensions into '/usr'.
Sep 16 04:53:42.081634 systemd[1]: Reload requested from client PID 1257 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 16 04:53:42.081655 systemd[1]: Reloading...
Sep 16 04:53:42.184844 zram_generator::config[1306]: No configuration found.
Sep 16 04:53:42.344930 ldconfig[1252]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 16 04:53:42.414540 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 04:53:42.415024 systemd[1]: Reloading finished in 332 ms.
Sep 16 04:53:42.465659 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 16 04:53:42.467446 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 16 04:53:42.485184 systemd[1]: Starting ensure-sysext.service...
Sep 16 04:53:42.583312 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:53:42.594941 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)...
Sep 16 04:53:42.594962 systemd[1]: Reloading...
Sep 16 04:53:42.640116 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 16 04:53:42.640163 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 16 04:53:42.640542 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 16 04:53:42.640866 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 16 04:53:42.641976 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 16 04:53:42.642394 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Sep 16 04:53:42.642498 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Sep 16 04:53:42.651252 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:53:42.651269 systemd-tmpfiles[1344]: Skipping /boot
Sep 16 04:53:42.656821 zram_generator::config[1369]: No configuration found.
Sep 16 04:53:42.667812 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:53:42.667830 systemd-tmpfiles[1344]: Skipping /boot
Sep 16 04:53:42.856387 systemd[1]: Reloading finished in 261 ms.
Sep 16 04:53:42.903890 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:42.904081 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:53:42.905432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:53:42.908048 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:53:42.910277 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:53:42.911522 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:53:42.911740 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:53:42.911861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:42.914513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:42.914751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:53:42.914981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:53:42.915108 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:53:42.915245 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:42.918779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:53:42.919040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:53:42.920859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:53:42.921099 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:53:42.922692 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:53:42.923087 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:53:42.929484 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:42.929759 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:53:42.931153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:53:42.936272 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:53:42.938283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:53:42.941511 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:53:42.941817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:53:42.941937 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:53:42.942101 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:53:42.948962 systemd[1]: Finished ensure-sysext.service.
Sep 16 04:53:42.950620 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:53:42.950920 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:53:42.952606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:53:42.952859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:53:42.954250 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:53:42.954502 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:53:42.956018 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:53:42.956238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:53:42.961131 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:53:42.961247 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:53:42.990409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:53:42.994867 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:53:42.997808 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 16 04:53:43.008164 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 16 04:53:43.013178 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:53:43.014980 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 16 04:53:43.019932 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 16 04:53:43.034632 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 16 04:53:43.049010 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 16 04:53:43.056566 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 16 04:53:43.078852 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 16 04:53:43.080443 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 16 04:53:43.092597 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 16 04:53:43.382482 augenrules[1458]: No rules
Sep 16 04:53:43.385177 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:53:43.385487 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:53:43.598850 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 16 04:53:43.601015 systemd-resolved[1427]: Positive Trust Anchors:
Sep 16 04:53:43.601030 systemd-resolved[1427]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:53:43.601060 systemd-resolved[1427]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:53:43.707525 systemd[1]: Reached target time-set.target - System Time Set.
Sep 16 04:53:43.709561 systemd-resolved[1427]: Defaulting to hostname 'linux'.
Sep 16 04:53:43.711148 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:53:43.712264 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:53:43.934363 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 16 04:53:43.937609 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:53:43.940246 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 16 04:53:43.974432 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 16 04:53:43.994580 systemd-udevd[1467]: Using default interface naming scheme 'v255'.
Sep 16 04:53:44.019806 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:53:44.021359 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:53:44.022589 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 16 04:53:44.023950 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 16 04:53:44.025189 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 16 04:53:44.026524 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 16 04:53:44.027758 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 16 04:53:44.029034 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 16 04:53:44.030264 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 16 04:53:44.030305 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:53:44.031209 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:53:44.033065 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 16 04:53:44.036937 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 16 04:53:44.041725 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 16 04:53:44.043150 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 16 04:53:44.044364 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 16 04:53:44.049213 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 16 04:53:44.050745 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 16 04:53:44.055114 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:53:44.056942 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 16 04:53:44.064429 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:53:44.066186 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:53:44.067173 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:53:44.067202 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 16 04:53:44.071003 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 16 04:53:44.074272 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 16 04:53:44.078023 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 16 04:53:44.082116 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 16 04:53:44.083145 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 16 04:53:44.108130 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 16 04:53:44.109656 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 16 04:53:44.115152 jq[1499]: false
Sep 16 04:53:44.111252 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 16 04:53:44.114090 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 16 04:53:44.122573 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 16 04:53:44.127906 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 16 04:53:44.129902 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 16 04:53:44.130415 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 16 04:53:44.131246 systemd[1]: Starting update-engine.service - Update Engine...
Sep 16 04:53:44.134542 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 16 04:53:44.137827 google_oslogin_nss_cache[1501]: oslogin_cache_refresh[1501]: Refreshing passwd entry cache
Sep 16 04:53:44.137835 oslogin_cache_refresh[1501]: Refreshing passwd entry cache
Sep 16 04:53:44.141294 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 16 04:53:44.142179 oslogin_cache_refresh[1501]: Failure getting users, quitting
Sep 16 04:53:44.145855 google_oslogin_nss_cache[1501]: oslogin_cache_refresh[1501]: Failure getting users, quitting
Sep 16 04:53:44.145855 google_oslogin_nss_cache[1501]: oslogin_cache_refresh[1501]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 04:53:44.145855 google_oslogin_nss_cache[1501]: oslogin_cache_refresh[1501]: Refreshing group entry cache
Sep 16 04:53:44.145855 google_oslogin_nss_cache[1501]: oslogin_cache_refresh[1501]: Failure getting groups, quitting
Sep 16 04:53:44.145855 google_oslogin_nss_cache[1501]: oslogin_cache_refresh[1501]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 04:53:44.144360 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 16 04:53:44.142195 oslogin_cache_refresh[1501]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 04:53:44.144621 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 16 04:53:44.142241 oslogin_cache_refresh[1501]: Refreshing group entry cache
Sep 16 04:53:44.144989 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 16 04:53:44.142720 oslogin_cache_refresh[1501]: Failure getting groups, quitting
Sep 16 04:53:44.145237 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 16 04:53:44.142731 oslogin_cache_refresh[1501]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 04:53:44.147499 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 16 04:53:44.156149 extend-filesystems[1500]: Found /dev/vda6
Sep 16 04:53:44.154666 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 16 04:53:44.162095 systemd[1]: motdgen.service: Deactivated successfully.
Sep 16 04:53:44.162416 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 16 04:53:44.165812 extend-filesystems[1500]: Found /dev/vda9
Sep 16 04:53:44.171652 extend-filesystems[1500]: Checking size of /dev/vda9
Sep 16 04:53:44.178051 jq[1514]: true
Sep 16 04:53:44.180015 tar[1519]: linux-amd64/helm
Sep 16 04:53:44.181341 update_engine[1512]: I20250916 04:53:44.181255 1512 main.cc:92] Flatcar Update Engine starting
Sep 16 04:53:44.189463 extend-filesystems[1500]: Resized partition /dev/vda9
Sep 16 04:53:44.197996 extend-filesystems[1541]: resize2fs 1.47.3 (8-Jul-2025)
Sep 16 04:53:44.205174 jq[1535]: true
Sep 16 04:53:44.209589 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 16 04:53:44.222967 dbus-daemon[1497]: [system] SELinux support is enabled
Sep 16 04:53:44.239014 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 16 04:53:44.242373 update_engine[1512]: I20250916 04:53:44.242228 1512 update_check_scheduler.cc:74] Next update check in 8m1s
Sep 16 04:53:44.243397 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 16 04:53:44.243439 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 16 04:53:44.245746 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 16 04:53:44.245767 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 16 04:53:44.252480 systemd[1]: Started update-engine.service - Update Engine.
Sep 16 04:53:44.255955 systemd-logind[1508]: New seat seat0.
Sep 16 04:53:44.263415 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 16 04:53:44.275924 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 16 04:53:44.274365 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 16 04:53:46.210463 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 16 04:53:44.283248 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 16 04:53:44.356576 systemd-networkd[1494]: lo: Link UP
Sep 16 04:53:44.356581 systemd-networkd[1494]: lo: Gained carrier
Sep 16 04:53:44.360317 systemd-networkd[1494]: Enumeration completed
Sep 16 04:53:44.360803 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:53:44.361686 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:53:44.361691 systemd-networkd[1494]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:53:46.213837 extend-filesystems[1541]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 16 04:53:46.213837 extend-filesystems[1541]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 16 04:53:46.213837 extend-filesystems[1541]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 16 04:53:44.363288 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:53:46.221193 extend-filesystems[1500]: Resized filesystem in /dev/vda9
Sep 16 04:53:44.363331 systemd-networkd[1494]: eth0: Link UP
Sep 16 04:53:44.363677 systemd[1]: Reached target network.target - Network.
Sep 16 04:53:44.364276 systemd-networkd[1494]: eth0: Gained carrier
Sep 16 04:53:44.364293 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:53:44.366405 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 16 04:53:44.369227 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 16 04:53:44.372860 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 16 04:53:44.382200 systemd-networkd[1494]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 16 04:53:46.227444 bash[1563]: Updated "/home/core/.ssh/authorized_keys"
Sep 16 04:53:44.387292 systemd-timesyncd[1428]: Network configuration changed, trying to establish connection.
Sep 16 04:53:46.182098 systemd-resolved[1427]: Clock change detected. Flushing caches.
Sep 16 04:53:46.192633 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 16 04:53:46.209491 systemd-timesyncd[1428]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 16 04:53:46.209567 systemd-timesyncd[1428]: Initial clock synchronization to Tue 2025-09-16 04:53:46.181652 UTC.
Sep 16 04:53:46.213257 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 16 04:53:46.213560 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 16 04:53:46.213892 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 16 04:53:46.228923 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 16 04:53:46.231641 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 16 04:53:46.253118 sshd_keygen[1540]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 16 04:53:46.269056 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 16 04:53:46.269439 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 16 04:53:46.269688 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 16 04:53:46.281905 kernel: mousedev: PS/2 mouse device common for all mice
Sep 16 04:53:46.292236 kernel: ACPI: button: Power Button [PWRF]
Sep 16 04:53:46.359769 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 16 04:53:46.364610 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 16 04:53:46.377366 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 16 04:53:46.398296 systemd[1]: issuegen.service: Deactivated successfully.
Sep 16 04:53:46.399075 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 16 04:53:46.403625 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 16 04:53:46.468380 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 16 04:53:46.473307 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 16 04:53:46.479265 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 16 04:53:46.482195 systemd[1]: Reached target getty.target - Login Prompts.
Sep 16 04:53:46.489219 kernel: kvm_amd: TSC scaling supported
Sep 16 04:53:46.489274 kernel: kvm_amd: Nested Virtualization enabled
Sep 16 04:53:46.489289 kernel: kvm_amd: Nested Paging enabled
Sep 16 04:53:46.489302 kernel: kvm_amd: LBR virtualization supported
Sep 16 04:53:46.528055 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 16 04:53:46.528145 kernel: kvm_amd: Virtual GIF supported
Sep 16 04:53:46.542809 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 16 04:53:46.550298 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 16 04:53:46.558027 systemd-logind[1508]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 16 04:53:46.558986 systemd-logind[1508]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 16 04:53:46.571092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:53:46.578802 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:53:46.579130 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:53:46.584613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:53:46.594568 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 16 04:53:46.691061 kernel: EDAC MC: Ver: 3.0.0
Sep 16 04:53:46.708479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:53:46.712496 containerd[1572]: time="2025-09-16T04:53:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 16 04:53:46.713625 containerd[1572]: time="2025-09-16T04:53:46.713590552Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 16 04:53:46.727354 containerd[1572]: time="2025-09-16T04:53:46.727292418Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.287µs"
Sep 16 04:53:46.727354 containerd[1572]: time="2025-09-16T04:53:46.727349555Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 16 04:53:46.727454 containerd[1572]: time="2025-09-16T04:53:46.727378339Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 16 04:53:46.727651 containerd[1572]: time="2025-09-16T04:53:46.727625963Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 16 04:53:46.727651 containerd[1572]: time="2025-09-16T04:53:46.727647474Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 16 04:53:46.727704 containerd[1572]: time="2025-09-16T04:53:46.727675967Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 04:53:46.727793 containerd[1572]: time="2025-09-16T04:53:46.727771226Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 04:53:46.727793 containerd[1572]: time="2025-09-16T04:53:46.727787536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 04:53:46.728200 containerd[1572]: time="2025-09-16T04:53:46.728168902Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 04:53:46.728200 containerd[1572]: time="2025-09-16T04:53:46.728188398Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 04:53:46.728248 containerd[1572]: time="2025-09-16T04:53:46.728199699Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 04:53:46.728248 containerd[1572]: time="2025-09-16T04:53:46.728209388Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 16 04:53:46.728352 containerd[1572]: time="2025-09-16T04:53:46.728331276Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 16 04:53:46.728632 containerd[1572]: time="2025-09-16T04:53:46.728597666Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 04:53:46.728668 containerd[1572]: time="2025-09-16T04:53:46.728636729Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 04:53:46.728668 containerd[1572]: time="2025-09-16T04:53:46.728648531Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 16 04:53:46.728726 containerd[1572]: time="2025-09-16T04:53:46.728707582Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 16 04:53:46.729013 containerd[1572]: time="2025-09-16T04:53:46.728989571Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 16 04:53:46.729098 containerd[1572]: time="2025-09-16T04:53:46.729080491Z" level=info msg="metadata content store policy set" policy=shared
Sep 16 04:53:46.734665 containerd[1572]: time="2025-09-16T04:53:46.734603549Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 16 04:53:46.734719 containerd[1572]: time="2025-09-16T04:53:46.734682387Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 16 04:53:46.734719 containerd[1572]: time="2025-09-16T04:53:46.734713024Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 16 04:53:46.734768 containerd[1572]: time="2025-09-16T04:53:46.734725968Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 16 04:53:46.734768 containerd[1572]: time="2025-09-16T04:53:46.734739364Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 16 04:53:46.734768 containerd[1572]: time="2025-09-16T04:53:46.734749783Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 16 04:53:46.734768 containerd[1572]: time="2025-09-16T04:53:46.734768328Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 16 04:53:46.734849 containerd[1572]: time="2025-09-16T04:53:46.734783957Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 16 04:53:46.734849 containerd[1572]: time="2025-09-16T04:53:46.734799216Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 16 04:53:46.734849 containerd[1572]: time="2025-09-16T04:53:46.734813192Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 16 04:53:46.734849 containerd[1572]: time="2025-09-16T04:53:46.734826317Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 16 04:53:46.734849 containerd[1572]: time="2025-09-16T04:53:46.734841986Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 16 04:53:46.735072 containerd[1572]: time="2025-09-16T04:53:46.735031972Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 16 04:53:46.735112 containerd[1572]: time="2025-09-16T04:53:46.735091915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 16 04:53:46.735134 containerd[1572]: time="2025-09-16T04:53:46.735118925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 16 04:53:46.735155 containerd[1572]: time="2025-09-16T04:53:46.735135236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 16 04:53:46.735155 containerd[1572]: time="2025-09-16T04:53:46.735149723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 16 04:53:46.735192 containerd[1572]: time="2025-09-16T04:53:46.735163729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 16 04:53:46.735192 containerd[1572]: time="2025-09-16T04:53:46.735179309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 16 04:53:46.735240 containerd[1572]: time="2025-09-16T04:53:46.735196882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 16 04:53:46.735240 containerd[1572]: time="2025-09-16T04:53:46.735211178Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:53:46.735240 containerd[1572]: time="2025-09-16T04:53:46.735226176Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:53:46.735301 containerd[1572]: time="2025-09-16T04:53:46.735240043Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:53:46.735374 containerd[1572]: time="2025-09-16T04:53:46.735342565Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:53:46.735405 containerd[1572]: time="2025-09-16T04:53:46.735377110Z" level=info msg="Start snapshots syncer" Sep 16 04:53:46.735443 containerd[1572]: time="2025-09-16T04:53:46.735416994Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:53:46.735798 containerd[1572]: time="2025-09-16T04:53:46.735750570Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:53:46.735996 containerd[1572]: time="2025-09-16T04:53:46.735823156Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:53:46.737451 containerd[1572]: time="2025-09-16T04:53:46.737409160Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:53:46.737824 containerd[1572]: time="2025-09-16T04:53:46.737797358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:53:46.737851 containerd[1572]: time="2025-09-16T04:53:46.737828847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:53:46.737851 containerd[1572]: time="2025-09-16T04:53:46.737843285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:53:46.737899 containerd[1572]: time="2025-09-16T04:53:46.737872259Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:53:46.737899 containerd[1572]: time="2025-09-16T04:53:46.737888870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:53:46.737943 containerd[1572]: time="2025-09-16T04:53:46.737900742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:53:46.737943 containerd[1572]: time="2025-09-16T04:53:46.737915560Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:53:46.738081 containerd[1572]: time="2025-09-16T04:53:46.738055973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:53:46.738081 containerd[1572]: time="2025-09-16T04:53:46.738071683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:53:46.738122 containerd[1572]: time="2025-09-16T04:53:46.738082343Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:53:46.738122 containerd[1572]: time="2025-09-16T04:53:46.738115265Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:53:46.738159 containerd[1572]: time="2025-09-16T04:53:46.738131906Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:53:46.738159 containerd[1572]: time="2025-09-16T04:53:46.738140933Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:53:46.738159 containerd[1572]: time="2025-09-16T04:53:46.738149459Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:53:46.738159 containerd[1572]: time="2025-09-16T04:53:46.738156762Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:53:46.738238 containerd[1572]: time="2025-09-16T04:53:46.738166030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:53:46.738238 containerd[1572]: time="2025-09-16T04:53:46.738230721Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:53:46.738275 containerd[1572]: time="2025-09-16T04:53:46.738255528Z" level=info msg="runtime interface created" Sep 16 04:53:46.738275 containerd[1572]: time="2025-09-16T04:53:46.738263022Z" level=info msg="created NRI interface" Sep 16 04:53:46.738311 containerd[1572]: time="2025-09-16T04:53:46.738287167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:53:46.738311 containerd[1572]: time="2025-09-16T04:53:46.738307505Z" level=info msg="Connect containerd service" Sep 16 04:53:46.738354 containerd[1572]: time="2025-09-16T04:53:46.738341048Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:53:46.739385 
containerd[1572]: time="2025-09-16T04:53:46.739358316Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:53:46.759472 tar[1519]: linux-amd64/LICENSE Sep 16 04:53:46.759472 tar[1519]: linux-amd64/README.md Sep 16 04:53:46.778196 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:53:46.850834 containerd[1572]: time="2025-09-16T04:53:46.850778198Z" level=info msg="Start subscribing containerd event" Sep 16 04:53:46.851027 containerd[1572]: time="2025-09-16T04:53:46.850889757Z" level=info msg="Start recovering state" Sep 16 04:53:46.851063 containerd[1572]: time="2025-09-16T04:53:46.851046391Z" level=info msg="Start event monitor" Sep 16 04:53:46.851111 containerd[1572]: time="2025-09-16T04:53:46.851052673Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 16 04:53:46.851111 containerd[1572]: time="2025-09-16T04:53:46.851064174Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:53:46.851152 containerd[1572]: time="2025-09-16T04:53:46.851131651Z" level=info msg="Start streaming server" Sep 16 04:53:46.851152 containerd[1572]: time="2025-09-16T04:53:46.851108187Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:53:46.851188 containerd[1572]: time="2025-09-16T04:53:46.851154383Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:53:46.851188 containerd[1572]: time="2025-09-16T04:53:46.851170433Z" level=info msg="runtime interface starting up..." Sep 16 04:53:46.851188 containerd[1572]: time="2025-09-16T04:53:46.851180612Z" level=info msg="starting plugins..." 
Sep 16 04:53:46.851245 containerd[1572]: time="2025-09-16T04:53:46.851220076Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 16 04:53:46.851856 systemd[1]: Started containerd.service - containerd container runtime.
Sep 16 04:53:46.855906 containerd[1572]: time="2025-09-16T04:53:46.855878593Z" level=info msg="containerd successfully booted in 0.143868s"
Sep 16 04:53:47.586110 systemd-networkd[1494]: eth0: Gained IPv6LL
Sep 16 04:53:47.589543 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 16 04:53:47.591566 systemd[1]: Reached target network-online.target - Network is Online.
Sep 16 04:53:47.594515 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 16 04:53:47.597346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:53:47.626401 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 16 04:53:47.645387 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 16 04:53:47.645682 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 16 04:53:47.647282 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 16 04:53:47.654514 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 16 04:53:49.011803 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 16 04:53:49.014479 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:50474.service - OpenSSH per-connection server daemon (10.0.0.1:50474).
Sep 16 04:53:49.154349 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 50474 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:53:49.156285 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:49.163460 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 16 04:53:49.165768 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 16 04:53:49.173913 systemd-logind[1508]: New session 1 of user core.
Sep 16 04:53:49.189518 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 16 04:53:49.205012 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 16 04:53:49.221767 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 16 04:53:49.222065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:53:49.223847 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 16 04:53:49.226223 systemd-logind[1508]: New session c1 of user core.
Sep 16 04:53:49.226837 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:53:49.438070 systemd[1681]: Queued start job for default target default.target.
Sep 16 04:53:49.452696 systemd[1681]: Created slice app.slice - User Application Slice.
Sep 16 04:53:49.452742 systemd[1681]: Reached target paths.target - Paths.
Sep 16 04:53:49.452828 systemd[1681]: Reached target timers.target - Timers.
Sep 16 04:53:49.455473 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 16 04:53:49.476990 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 16 04:53:49.477193 systemd[1681]: Reached target sockets.target - Sockets.
Sep 16 04:53:49.477273 systemd[1681]: Reached target basic.target - Basic System.
Sep 16 04:53:49.477445 systemd[1681]: Reached target default.target - Main User Target.
Sep 16 04:53:49.477499 systemd[1681]: Startup finished in 243ms.
Sep 16 04:53:49.478694 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 16 04:53:49.491086 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 16 04:53:49.492693 systemd[1]: Startup finished in 3.311s (kernel) + 7.054s (initrd) + 7.017s (userspace) = 17.383s.
Sep 16 04:53:49.557752 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:50486.service - OpenSSH per-connection server daemon (10.0.0.1:50486).
Sep 16 04:53:49.621151 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 50486 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:53:49.624467 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:49.633258 systemd-logind[1508]: New session 2 of user core.
Sep 16 04:53:49.708218 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 16 04:53:49.763223 sshd[1709]: Connection closed by 10.0.0.1 port 50486
Sep 16 04:53:49.763622 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Sep 16 04:53:49.777745 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:50486.service: Deactivated successfully.
Sep 16 04:53:49.779802 systemd[1]: session-2.scope: Deactivated successfully.
Sep 16 04:53:49.780702 systemd-logind[1508]: Session 2 logged out. Waiting for processes to exit.
Sep 16 04:53:49.784042 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:50500.service - OpenSSH per-connection server daemon (10.0.0.1:50500).
Sep 16 04:53:49.784696 systemd-logind[1508]: Removed session 2.
Sep 16 04:53:49.875553 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 50500 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:53:49.876957 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:49.881926 systemd-logind[1508]: New session 3 of user core.
Sep 16 04:53:49.890989 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 16 04:53:49.940617 sshd[1719]: Connection closed by 10.0.0.1 port 50500
Sep 16 04:53:49.942820 sshd-session[1716]: pam_unix(sshd:session): session closed for user core
Sep 16 04:53:49.947837 kubelet[1686]: E0916 04:53:49.947752 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:53:49.956041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:53:49.956272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:53:49.956684 systemd[1]: kubelet.service: Consumed 2.113s CPU time, 265.5M memory peak.
Sep 16 04:53:49.957231 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:50500.service: Deactivated successfully.
Sep 16 04:53:49.959328 systemd[1]: session-3.scope: Deactivated successfully.
Sep 16 04:53:49.960262 systemd-logind[1508]: Session 3 logged out. Waiting for processes to exit.
Sep 16 04:53:49.964997 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:47092.service - OpenSSH per-connection server daemon (10.0.0.1:47092).
Sep 16 04:53:49.965606 systemd-logind[1508]: Removed session 3.
Sep 16 04:53:50.022491 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 47092 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:53:50.024377 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:50.028919 systemd-logind[1508]: New session 4 of user core.
Sep 16 04:53:50.045002 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 16 04:53:50.099714 sshd[1729]: Connection closed by 10.0.0.1 port 47092
Sep 16 04:53:50.100165 sshd-session[1726]: pam_unix(sshd:session): session closed for user core
Sep 16 04:53:50.114845 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:47092.service: Deactivated successfully.
Sep 16 04:53:50.116977 systemd[1]: session-4.scope: Deactivated successfully.
Sep 16 04:53:50.117699 systemd-logind[1508]: Session 4 logged out. Waiting for processes to exit.
Sep 16 04:53:50.120555 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:47104.service - OpenSSH per-connection server daemon (10.0.0.1:47104).
Sep 16 04:53:50.121395 systemd-logind[1508]: Removed session 4.
Sep 16 04:53:50.179154 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 47104 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:53:50.180663 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:50.184847 systemd-logind[1508]: New session 5 of user core.
Sep 16 04:53:50.195007 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 16 04:53:50.253538 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 16 04:53:50.253873 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:53:50.269196 sudo[1739]: pam_unix(sudo:session): session closed for user root
Sep 16 04:53:50.270558 sshd[1738]: Connection closed by 10.0.0.1 port 47104
Sep 16 04:53:50.271000 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Sep 16 04:53:50.286518 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:47104.service: Deactivated successfully.
Sep 16 04:53:50.288245 systemd[1]: session-5.scope: Deactivated successfully.
Sep 16 04:53:50.288979 systemd-logind[1508]: Session 5 logged out. Waiting for processes to exit.
Sep 16 04:53:50.291794 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:47112.service - OpenSSH per-connection server daemon (10.0.0.1:47112).
Sep 16 04:53:50.292412 systemd-logind[1508]: Removed session 5.
Sep 16 04:53:50.344176 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 47112 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:53:50.345520 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:50.349784 systemd-logind[1508]: New session 6 of user core.
Sep 16 04:53:50.359980 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 16 04:53:50.412763 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 16 04:53:50.413087 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:53:50.655352 sudo[1750]: pam_unix(sudo:session): session closed for user root
Sep 16 04:53:50.663590 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 16 04:53:50.663947 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:53:50.674162 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:53:50.715913 augenrules[1772]: No rules
Sep 16 04:53:50.717703 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:53:50.718012 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:53:50.719100 sudo[1749]: pam_unix(sudo:session): session closed for user root
Sep 16 04:53:50.720668 sshd[1748]: Connection closed by 10.0.0.1 port 47112
Sep 16 04:53:50.721021 sshd-session[1745]: pam_unix(sshd:session): session closed for user core
Sep 16 04:53:50.733375 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:47112.service: Deactivated successfully.
Sep 16 04:53:50.735088 systemd[1]: session-6.scope: Deactivated successfully.
Sep 16 04:53:50.735965 systemd-logind[1508]: Session 6 logged out. Waiting for processes to exit.
Sep 16 04:53:50.738613 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:47120.service - OpenSSH per-connection server daemon (10.0.0.1:47120).
Sep 16 04:53:50.739459 systemd-logind[1508]: Removed session 6.
Sep 16 04:53:50.790414 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 47120 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:53:50.792101 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:53:50.796443 systemd-logind[1508]: New session 7 of user core.
Sep 16 04:53:50.805987 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 16 04:53:50.863700 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 16 04:53:50.864111 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:53:51.416892 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 16 04:53:51.443398 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 16 04:53:51.837334 dockerd[1806]: time="2025-09-16T04:53:51.837258568Z" level=info msg="Starting up"
Sep 16 04:53:51.838227 dockerd[1806]: time="2025-09-16T04:53:51.838196087Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 16 04:53:51.882156 dockerd[1806]: time="2025-09-16T04:53:51.882100533Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 16 04:53:52.699693 dockerd[1806]: time="2025-09-16T04:53:52.699613001Z" level=info msg="Loading containers: start."
Sep 16 04:53:52.712895 kernel: Initializing XFRM netlink socket
Sep 16 04:53:53.163403 systemd-networkd[1494]: docker0: Link UP
Sep 16 04:53:53.385218 dockerd[1806]: time="2025-09-16T04:53:53.385115092Z" level=info msg="Loading containers: done."
Sep 16 04:53:53.406567 dockerd[1806]: time="2025-09-16T04:53:53.406477564Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 16 04:53:53.406790 dockerd[1806]: time="2025-09-16T04:53:53.406603019Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 16 04:53:53.406790 dockerd[1806]: time="2025-09-16T04:53:53.406722904Z" level=info msg="Initializing buildkit"
Sep 16 04:53:53.455965 dockerd[1806]: time="2025-09-16T04:53:53.455805321Z" level=info msg="Completed buildkit initialization"
Sep 16 04:53:53.462190 dockerd[1806]: time="2025-09-16T04:53:53.462111678Z" level=info msg="Daemon has completed initialization"
Sep 16 04:53:53.462332 dockerd[1806]: time="2025-09-16T04:53:53.462268833Z" level=info msg="API listen on /run/docker.sock"
Sep 16 04:53:53.462482 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 16 04:53:54.309784 containerd[1572]: time="2025-09-16T04:53:54.309716289Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 16 04:53:55.632686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522036197.mount: Deactivated successfully.
Sep 16 04:53:56.963155 containerd[1572]: time="2025-09-16T04:53:56.963080103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:56.963849 containerd[1572]: time="2025-09-16T04:53:56.963809612Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124"
Sep 16 04:53:56.965032 containerd[1572]: time="2025-09-16T04:53:56.965001627Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:56.967792 containerd[1572]: time="2025-09-16T04:53:56.967742487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:56.968728 containerd[1572]: time="2025-09-16T04:53:56.968691948Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.658912991s"
Sep 16 04:53:56.968728 containerd[1572]: time="2025-09-16T04:53:56.968727985Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 16 04:53:56.969353 containerd[1572]: time="2025-09-16T04:53:56.969322761Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 16 04:53:58.428949 containerd[1572]: time="2025-09-16T04:53:58.428834446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:58.429708 containerd[1572]: time="2025-09-16T04:53:58.429653472Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632"
Sep 16 04:53:58.431150 containerd[1572]: time="2025-09-16T04:53:58.431089675Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:58.434136 containerd[1572]: time="2025-09-16T04:53:58.434088519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:53:58.435165 containerd[1572]: time="2025-09-16T04:53:58.435065892Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.46570465s"
Sep 16 04:53:58.435165 containerd[1572]: time="2025-09-16T04:53:58.435167443Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 16 04:53:58.435819 containerd[1572]: time="2025-09-16T04:53:58.435790671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 16 04:54:00.007778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:54:00.009906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:54:00.520008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:54:00.538212 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:54:00.710993 kubelet[2093]: E0916 04:54:00.710872 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:54:00.717755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:54:00.718020 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:54:00.718539 systemd[1]: kubelet.service: Consumed 316ms CPU time, 110.6M memory peak.
Sep 16 04:54:01.207311 containerd[1572]: time="2025-09-16T04:54:01.207200845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:01.209205 containerd[1572]: time="2025-09-16T04:54:01.209144480Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698"
Sep 16 04:54:01.210409 containerd[1572]: time="2025-09-16T04:54:01.210369006Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:01.215441 containerd[1572]: time="2025-09-16T04:54:01.215399320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:01.216926 containerd[1572]: time="2025-09-16T04:54:01.216798043Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 2.780893318s"
Sep 16 04:54:01.216998 containerd[1572]: time="2025-09-16T04:54:01.216932335Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 16 04:54:01.217741 containerd[1572]: time="2025-09-16T04:54:01.217696488Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 16 04:54:03.210967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357789954.mount: Deactivated successfully.
Sep 16 04:54:03.798115 containerd[1572]: time="2025-09-16T04:54:03.798050262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:03.798903 containerd[1572]: time="2025-09-16T04:54:03.798880299Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252"
Sep 16 04:54:03.800381 containerd[1572]: time="2025-09-16T04:54:03.800325068Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:03.802259 containerd[1572]: time="2025-09-16T04:54:03.802227175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:03.802723 containerd[1572]: time="2025-09-16T04:54:03.802669044Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag
\"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.584932331s" Sep 16 04:54:03.802791 containerd[1572]: time="2025-09-16T04:54:03.802727033Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 16 04:54:03.803349 containerd[1572]: time="2025-09-16T04:54:03.803319484Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:54:04.864633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123279630.mount: Deactivated successfully. Sep 16 04:54:06.409609 containerd[1572]: time="2025-09-16T04:54:06.409485435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:06.411348 containerd[1572]: time="2025-09-16T04:54:06.411306240Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 16 04:54:06.413981 containerd[1572]: time="2025-09-16T04:54:06.413940480Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:06.416578 containerd[1572]: time="2025-09-16T04:54:06.416534114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:06.417640 containerd[1572]: time="2025-09-16T04:54:06.417580336Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.614225456s" Sep 16 04:54:06.417640 containerd[1572]: time="2025-09-16T04:54:06.417622145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 16 04:54:06.418253 containerd[1572]: time="2025-09-16T04:54:06.418173328Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:54:07.242580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121579232.mount: Deactivated successfully. Sep 16 04:54:07.251653 containerd[1572]: time="2025-09-16T04:54:07.251589732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:54:07.254444 containerd[1572]: time="2025-09-16T04:54:07.254395143Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 16 04:54:07.257744 containerd[1572]: time="2025-09-16T04:54:07.257671277Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:54:07.261048 containerd[1572]: time="2025-09-16T04:54:07.261004568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:54:07.261825 containerd[1572]: time="2025-09-16T04:54:07.261757480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 843.558202ms" Sep 16 04:54:07.261825 containerd[1572]: time="2025-09-16T04:54:07.261797314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 04:54:07.262482 containerd[1572]: time="2025-09-16T04:54:07.262381340Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 16 04:54:08.580405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4221515206.mount: Deactivated successfully. Sep 16 04:54:10.757994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 04:54:10.760422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:54:11.054087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:54:11.075248 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 04:54:11.579433 kubelet[2231]: E0916 04:54:11.579299 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 04:54:11.583730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 04:54:11.583947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 04:54:11.584379 systemd[1]: kubelet.service: Consumed 236ms CPU time, 111.2M memory peak. 
Sep 16 04:54:11.877607 containerd[1572]: time="2025-09-16T04:54:11.877455862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:11.878280 containerd[1572]: time="2025-09-16T04:54:11.878227739Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 16 04:54:11.879649 containerd[1572]: time="2025-09-16T04:54:11.879603680Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:11.882851 containerd[1572]: time="2025-09-16T04:54:11.882815713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:11.883989 containerd[1572]: time="2025-09-16T04:54:11.883930123Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.62146181s" Sep 16 04:54:11.883989 containerd[1572]: time="2025-09-16T04:54:11.883978324Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 16 04:54:14.324074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:54:14.324245 systemd[1]: kubelet.service: Consumed 236ms CPU time, 111.2M memory peak. Sep 16 04:54:14.326556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:54:14.354407 systemd[1]: Reload requested from client PID 2271 ('systemctl') (unit session-7.scope)... 
Sep 16 04:54:14.354434 systemd[1]: Reloading... Sep 16 04:54:14.471959 zram_generator::config[2322]: No configuration found. Sep 16 04:54:15.180084 systemd[1]: Reloading finished in 825 ms. Sep 16 04:54:15.248815 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:54:15.248952 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:54:15.249344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:54:15.249410 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.3M memory peak. Sep 16 04:54:15.251509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:54:15.448498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:54:15.460208 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:54:15.507991 kubelet[2361]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:54:15.507991 kubelet[2361]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 16 04:54:15.507991 kubelet[2361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 04:54:15.508424 kubelet[2361]: I0916 04:54:15.508063 2361 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:54:16.559746 kubelet[2361]: I0916 04:54:16.559634 2361 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:54:16.559746 kubelet[2361]: I0916 04:54:16.559719 2361 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:54:16.572106 kubelet[2361]: I0916 04:54:16.560847 2361 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:54:16.747992 kubelet[2361]: E0916 04:54:16.747927 2361 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:16.752295 kubelet[2361]: I0916 04:54:16.752135 2361 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:54:16.760118 kubelet[2361]: I0916 04:54:16.760071 2361 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:54:16.767296 kubelet[2361]: I0916 04:54:16.767237 2361 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:54:16.767526 kubelet[2361]: I0916 04:54:16.767494 2361 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:54:16.767765 kubelet[2361]: I0916 04:54:16.767692 2361 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:54:16.768052 kubelet[2361]: I0916 04:54:16.767752 2361 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 16 04:54:16.768226 kubelet[2361]: I0916 04:54:16.768067 2361 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:54:16.768226 kubelet[2361]: I0916 04:54:16.768082 2361 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 04:54:16.768312 kubelet[2361]: I0916 04:54:16.768286 2361 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:54:16.795635 kubelet[2361]: I0916 04:54:16.795564 2361 kubelet.go:408] "Attempting to sync node with API server" Sep 16 04:54:16.795635 kubelet[2361]: I0916 04:54:16.795622 2361 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:54:16.795770 kubelet[2361]: I0916 04:54:16.795675 2361 kubelet.go:314] "Adding apiserver pod source" Sep 16 04:54:16.795770 kubelet[2361]: I0916 04:54:16.795722 2361 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:54:16.799242 kubelet[2361]: W0916 04:54:16.799132 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Sep 16 04:54:16.799242 kubelet[2361]: E0916 04:54:16.799222 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:16.799242 kubelet[2361]: W0916 04:54:16.799241 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Sep 16 04:54:16.799479 kubelet[2361]: E0916 
04:54:16.799290 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:16.802884 kubelet[2361]: I0916 04:54:16.801754 2361 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:54:16.802884 kubelet[2361]: I0916 04:54:16.802508 2361 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:54:16.802884 kubelet[2361]: W0916 04:54:16.802655 2361 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 04:54:16.806721 kubelet[2361]: I0916 04:54:16.806668 2361 server.go:1274] "Started kubelet" Sep 16 04:54:16.808888 kubelet[2361]: I0916 04:54:16.806963 2361 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:54:16.808888 kubelet[2361]: I0916 04:54:16.807036 2361 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:54:16.808888 kubelet[2361]: I0916 04:54:16.807554 2361 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:54:16.808888 kubelet[2361]: I0916 04:54:16.808044 2361 server.go:449] "Adding debug handlers to kubelet server" Sep 16 04:54:16.811414 kubelet[2361]: I0916 04:54:16.811341 2361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:54:16.814889 kubelet[2361]: I0916 04:54:16.813968 2361 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:54:16.815926 kubelet[2361]: E0916 04:54:16.815699 
2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:54:16.815926 kubelet[2361]: I0916 04:54:16.815754 2361 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 04:54:16.816033 kubelet[2361]: I0916 04:54:16.816017 2361 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 04:54:16.816092 kubelet[2361]: I0916 04:54:16.816081 2361 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:54:16.816431 kubelet[2361]: W0916 04:54:16.816373 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Sep 16 04:54:16.816431 kubelet[2361]: E0916 04:54:16.816422 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:16.816854 kubelet[2361]: E0916 04:54:16.816795 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms" Sep 16 04:54:16.817929 kubelet[2361]: I0916 04:54:16.817577 2361 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:54:16.817929 kubelet[2361]: E0916 04:54:16.815319 2361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1865aa483da4d92c default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-16 04:54:16.806627628 +0000 UTC m=+1.341993007,LastTimestamp:2025-09-16 04:54:16.806627628 +0000 UTC m=+1.341993007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 16 04:54:16.817929 kubelet[2361]: I0916 04:54:16.817658 2361 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:54:16.818432 kubelet[2361]: E0916 04:54:16.818401 2361 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:54:16.819013 kubelet[2361]: I0916 04:54:16.818976 2361 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:54:16.831411 kubelet[2361]: I0916 04:54:16.831231 2361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:54:16.832572 kubelet[2361]: I0916 04:54:16.832536 2361 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 16 04:54:16.832616 kubelet[2361]: I0916 04:54:16.832574 2361 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:54:16.832649 kubelet[2361]: I0916 04:54:16.832621 2361 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:54:16.832693 kubelet[2361]: E0916 04:54:16.832672 2361 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:54:16.836992 kubelet[2361]: I0916 04:54:16.836462 2361 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:54:16.836992 kubelet[2361]: I0916 04:54:16.836492 2361 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:54:16.836992 kubelet[2361]: I0916 04:54:16.836523 2361 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:54:16.839101 kubelet[2361]: W0916 04:54:16.839040 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Sep 16 04:54:16.839162 kubelet[2361]: E0916 04:54:16.839104 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:16.916658 kubelet[2361]: E0916 04:54:16.916564 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:54:16.933016 kubelet[2361]: E0916 04:54:16.932939 2361 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 16 04:54:17.017373 kubelet[2361]: E0916 04:54:17.017305 2361 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:54:17.017760 kubelet[2361]: E0916 04:54:17.017696 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms" Sep 16 04:54:17.118259 kubelet[2361]: E0916 04:54:17.118086 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:54:17.133401 kubelet[2361]: E0916 04:54:17.133327 2361 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 16 04:54:17.203058 kubelet[2361]: I0916 04:54:17.202989 2361 policy_none.go:49] "None policy: Start" Sep 16 04:54:17.203847 kubelet[2361]: I0916 04:54:17.203825 2361 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:54:17.203934 kubelet[2361]: I0916 04:54:17.203852 2361 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:54:17.218233 kubelet[2361]: E0916 04:54:17.218200 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:54:17.318502 kubelet[2361]: E0916 04:54:17.318434 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:54:17.399313 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 16 04:54:17.418424 kubelet[2361]: E0916 04:54:17.418353 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms" Sep 16 04:54:17.419400 kubelet[2361]: E0916 04:54:17.419334 2361 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 16 04:54:17.423016 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:54:17.426428 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 04:54:17.446838 kubelet[2361]: I0916 04:54:17.446813 2361 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:54:17.447092 kubelet[2361]: I0916 04:54:17.447065 2361 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:54:17.447140 kubelet[2361]: I0916 04:54:17.447087 2361 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:54:17.447328 kubelet[2361]: I0916 04:54:17.447303 2361 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:54:17.448343 kubelet[2361]: E0916 04:54:17.448313 2361 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 16 04:54:17.542108 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. 
Sep 16 04:54:17.550246 kubelet[2361]: I0916 04:54:17.550176 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:54:17.550575 kubelet[2361]: E0916 04:54:17.550549 2361 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Sep 16 04:54:17.555516 systemd[1]: Created slice kubepods-burstable-podd8c0469a896bfd9c0c49e01a39d8619c.slice - libcontainer container kubepods-burstable-podd8c0469a896bfd9c0c49e01a39d8619c.slice. Sep 16 04:54:17.559338 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 16 04:54:17.620502 kubelet[2361]: I0916 04:54:17.620414 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8c0469a896bfd9c0c49e01a39d8619c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d8c0469a896bfd9c0c49e01a39d8619c\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:54:17.620502 kubelet[2361]: I0916 04:54:17.620478 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:17.620502 kubelet[2361]: I0916 04:54:17.620501 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:17.621191 kubelet[2361]: I0916 04:54:17.620527 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:17.621191 kubelet[2361]: I0916 04:54:17.620574 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8c0469a896bfd9c0c49e01a39d8619c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8c0469a896bfd9c0c49e01a39d8619c\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:54:17.621191 kubelet[2361]: I0916 04:54:17.620651 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8c0469a896bfd9c0c49e01a39d8619c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8c0469a896bfd9c0c49e01a39d8619c\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:54:17.621191 kubelet[2361]: I0916 04:54:17.620685 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:17.621191 kubelet[2361]: I0916 04:54:17.620733 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:17.621326 kubelet[2361]: I0916 04:54:17.620779 2361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 16 04:54:17.680317 kubelet[2361]: W0916 04:54:17.680122 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Sep 16 04:54:17.680317 kubelet[2361]: E0916 04:54:17.680215 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:17.752566 kubelet[2361]: I0916 04:54:17.752522 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:54:17.753094 kubelet[2361]: E0916 04:54:17.753039 2361 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Sep 16 04:54:17.853654 containerd[1572]: time="2025-09-16T04:54:17.853589085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:17.859236 containerd[1572]: time="2025-09-16T04:54:17.859199276Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d8c0469a896bfd9c0c49e01a39d8619c,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:17.862196 containerd[1572]: time="2025-09-16T04:54:17.862135031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:17.896898 containerd[1572]: time="2025-09-16T04:54:17.896015640Z" level=info msg="connecting to shim c484fefaac7021981d6d7be732a6a39f9bd1dc284e423f3047519e92b1b14049" address="unix:///run/containerd/s/b92f9f9fcc3659ea2e48e615308d3a8fef4fe0e1fcd5ba7d405603fe7a741c30" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:17.896898 containerd[1572]: time="2025-09-16T04:54:17.896055494Z" level=info msg="connecting to shim dddd27e9e26a83da29f7d7bf08fe90f8db137df527f969dd3973504378b9f684" address="unix:///run/containerd/s/caceef2f803994bbac848627aeae48c7cce2c2dd0a2232d5a75f436125fc3c69" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:17.909418 kubelet[2361]: W0916 04:54:17.909355 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Sep 16 04:54:17.909594 kubelet[2361]: E0916 04:54:17.909559 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:17.917673 containerd[1572]: time="2025-09-16T04:54:17.917491965Z" level=info msg="connecting to shim 5b48eb4011f52921b5c2d41b2a9b3a37fd5bc992a3af55ed2fdbfe1a3303f0f5" 
address="unix:///run/containerd/s/14576ee93cddf079e347ac49914613033941ddc178ecdf5467ad87409c45ebc6" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:17.930000 systemd[1]: Started cri-containerd-c484fefaac7021981d6d7be732a6a39f9bd1dc284e423f3047519e92b1b14049.scope - libcontainer container c484fefaac7021981d6d7be732a6a39f9bd1dc284e423f3047519e92b1b14049. Sep 16 04:54:17.938752 systemd[1]: Started cri-containerd-dddd27e9e26a83da29f7d7bf08fe90f8db137df527f969dd3973504378b9f684.scope - libcontainer container dddd27e9e26a83da29f7d7bf08fe90f8db137df527f969dd3973504378b9f684. Sep 16 04:54:17.958017 systemd[1]: Started cri-containerd-5b48eb4011f52921b5c2d41b2a9b3a37fd5bc992a3af55ed2fdbfe1a3303f0f5.scope - libcontainer container 5b48eb4011f52921b5c2d41b2a9b3a37fd5bc992a3af55ed2fdbfe1a3303f0f5. Sep 16 04:54:18.009105 containerd[1572]: time="2025-09-16T04:54:18.008848051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c484fefaac7021981d6d7be732a6a39f9bd1dc284e423f3047519e92b1b14049\"" Sep 16 04:54:18.012808 containerd[1572]: time="2025-09-16T04:54:18.012752413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d8c0469a896bfd9c0c49e01a39d8619c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dddd27e9e26a83da29f7d7bf08fe90f8db137df527f969dd3973504378b9f684\"" Sep 16 04:54:18.014692 containerd[1572]: time="2025-09-16T04:54:18.014638470Z" level=info msg="CreateContainer within sandbox \"c484fefaac7021981d6d7be732a6a39f9bd1dc284e423f3047519e92b1b14049\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:54:18.015480 containerd[1572]: time="2025-09-16T04:54:18.015432469Z" level=info msg="CreateContainer within sandbox \"dddd27e9e26a83da29f7d7bf08fe90f8db137df527f969dd3973504378b9f684\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:54:18.027137 
containerd[1572]: time="2025-09-16T04:54:18.027090762Z" level=info msg="Container eb8373d6ec58c209c3ba0639c79d56088e23e3e7939c681fe71af2941eb461f8: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:18.029895 containerd[1572]: time="2025-09-16T04:54:18.029828807Z" level=info msg="Container d39ce42b698d129ffc22d360e060659fb0fe02ef417ed415996d5b8fb5f32dcc: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:18.030615 containerd[1572]: time="2025-09-16T04:54:18.030588251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b48eb4011f52921b5c2d41b2a9b3a37fd5bc992a3af55ed2fdbfe1a3303f0f5\"" Sep 16 04:54:18.032409 kubelet[2361]: W0916 04:54:18.032337 2361 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused Sep 16 04:54:18.032482 kubelet[2361]: E0916 04:54:18.032427 2361 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:54:18.033884 containerd[1572]: time="2025-09-16T04:54:18.033841833Z" level=info msg="CreateContainer within sandbox \"5b48eb4011f52921b5c2d41b2a9b3a37fd5bc992a3af55ed2fdbfe1a3303f0f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:54:18.038135 containerd[1572]: time="2025-09-16T04:54:18.038076434Z" level=info msg="CreateContainer within sandbox \"dddd27e9e26a83da29f7d7bf08fe90f8db137df527f969dd3973504378b9f684\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"eb8373d6ec58c209c3ba0639c79d56088e23e3e7939c681fe71af2941eb461f8\"" Sep 16 04:54:18.038681 containerd[1572]: time="2025-09-16T04:54:18.038641765Z" level=info msg="StartContainer for \"eb8373d6ec58c209c3ba0639c79d56088e23e3e7939c681fe71af2941eb461f8\"" Sep 16 04:54:18.039838 containerd[1572]: time="2025-09-16T04:54:18.039809855Z" level=info msg="connecting to shim eb8373d6ec58c209c3ba0639c79d56088e23e3e7939c681fe71af2941eb461f8" address="unix:///run/containerd/s/caceef2f803994bbac848627aeae48c7cce2c2dd0a2232d5a75f436125fc3c69" protocol=ttrpc version=3 Sep 16 04:54:18.044219 containerd[1572]: time="2025-09-16T04:54:18.044172667Z" level=info msg="CreateContainer within sandbox \"c484fefaac7021981d6d7be732a6a39f9bd1dc284e423f3047519e92b1b14049\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d39ce42b698d129ffc22d360e060659fb0fe02ef417ed415996d5b8fb5f32dcc\"" Sep 16 04:54:18.044726 containerd[1572]: time="2025-09-16T04:54:18.044660271Z" level=info msg="StartContainer for \"d39ce42b698d129ffc22d360e060659fb0fe02ef417ed415996d5b8fb5f32dcc\"" Sep 16 04:54:18.045671 containerd[1572]: time="2025-09-16T04:54:18.045642924Z" level=info msg="connecting to shim d39ce42b698d129ffc22d360e060659fb0fe02ef417ed415996d5b8fb5f32dcc" address="unix:///run/containerd/s/b92f9f9fcc3659ea2e48e615308d3a8fef4fe0e1fcd5ba7d405603fe7a741c30" protocol=ttrpc version=3 Sep 16 04:54:18.048624 containerd[1572]: time="2025-09-16T04:54:18.048577999Z" level=info msg="Container 40e7b3933e313d2a5fdc46dd9adaf7b98369b2f7699fbcb1c2d2281bae13e88b: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:18.057034 containerd[1572]: time="2025-09-16T04:54:18.056954217Z" level=info msg="CreateContainer within sandbox \"5b48eb4011f52921b5c2d41b2a9b3a37fd5bc992a3af55ed2fdbfe1a3303f0f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40e7b3933e313d2a5fdc46dd9adaf7b98369b2f7699fbcb1c2d2281bae13e88b\"" Sep 16 04:54:18.057911 containerd[1572]: 
time="2025-09-16T04:54:18.057878851Z" level=info msg="StartContainer for \"40e7b3933e313d2a5fdc46dd9adaf7b98369b2f7699fbcb1c2d2281bae13e88b\"" Sep 16 04:54:18.060930 containerd[1572]: time="2025-09-16T04:54:18.060884538Z" level=info msg="connecting to shim 40e7b3933e313d2a5fdc46dd9adaf7b98369b2f7699fbcb1c2d2281bae13e88b" address="unix:///run/containerd/s/14576ee93cddf079e347ac49914613033941ddc178ecdf5467ad87409c45ebc6" protocol=ttrpc version=3 Sep 16 04:54:18.067161 systemd[1]: Started cri-containerd-eb8373d6ec58c209c3ba0639c79d56088e23e3e7939c681fe71af2941eb461f8.scope - libcontainer container eb8373d6ec58c209c3ba0639c79d56088e23e3e7939c681fe71af2941eb461f8. Sep 16 04:54:18.078062 systemd[1]: Started cri-containerd-d39ce42b698d129ffc22d360e060659fb0fe02ef417ed415996d5b8fb5f32dcc.scope - libcontainer container d39ce42b698d129ffc22d360e060659fb0fe02ef417ed415996d5b8fb5f32dcc. Sep 16 04:54:18.083358 systemd[1]: Started cri-containerd-40e7b3933e313d2a5fdc46dd9adaf7b98369b2f7699fbcb1c2d2281bae13e88b.scope - libcontainer container 40e7b3933e313d2a5fdc46dd9adaf7b98369b2f7699fbcb1c2d2281bae13e88b. 
Sep 16 04:54:18.155900 containerd[1572]: time="2025-09-16T04:54:18.155764622Z" level=info msg="StartContainer for \"40e7b3933e313d2a5fdc46dd9adaf7b98369b2f7699fbcb1c2d2281bae13e88b\" returns successfully" Sep 16 04:54:18.157066 kubelet[2361]: I0916 04:54:18.156360 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:54:18.157066 kubelet[2361]: E0916 04:54:18.156965 2361 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" Sep 16 04:54:18.160121 containerd[1572]: time="2025-09-16T04:54:18.160091155Z" level=info msg="StartContainer for \"d39ce42b698d129ffc22d360e060659fb0fe02ef417ed415996d5b8fb5f32dcc\" returns successfully" Sep 16 04:54:18.173519 containerd[1572]: time="2025-09-16T04:54:18.173481748Z" level=info msg="StartContainer for \"eb8373d6ec58c209c3ba0639c79d56088e23e3e7939c681fe71af2941eb461f8\" returns successfully" Sep 16 04:54:18.220060 kubelet[2361]: E0916 04:54:18.219911 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="1.6s" Sep 16 04:54:18.962785 kubelet[2361]: I0916 04:54:18.959613 2361 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:54:19.498234 kubelet[2361]: I0916 04:54:19.498170 2361 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 16 04:54:19.498234 kubelet[2361]: E0916 04:54:19.498219 2361 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 16 04:54:19.798311 kubelet[2361]: I0916 04:54:19.798252 2361 apiserver.go:52] "Watching apiserver" Sep 16 04:54:19.816676 kubelet[2361]: I0916 04:54:19.816631 2361 
desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:54:19.906720 kubelet[2361]: E0916 04:54:19.906670 2361 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 16 04:54:21.400887 systemd[1]: Reload requested from client PID 2635 ('systemctl') (unit session-7.scope)... Sep 16 04:54:21.400906 systemd[1]: Reloading... Sep 16 04:54:21.511905 zram_generator::config[2678]: No configuration found. Sep 16 04:54:21.755174 systemd[1]: Reloading finished in 353 ms. Sep 16 04:54:21.786507 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:54:21.810943 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:54:21.811358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:54:21.811434 systemd[1]: kubelet.service: Consumed 1.017s CPU time, 131.5M memory peak. Sep 16 04:54:21.815021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:54:22.040898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:54:22.052189 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:54:22.088379 kubelet[2723]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:54:22.088379 kubelet[2723]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 16 04:54:22.088379 kubelet[2723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:54:22.088832 kubelet[2723]: I0916 04:54:22.088449 2723 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:54:22.094415 kubelet[2723]: I0916 04:54:22.094377 2723 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 16 04:54:22.094415 kubelet[2723]: I0916 04:54:22.094397 2723 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:54:22.094587 kubelet[2723]: I0916 04:54:22.094581 2723 server.go:934] "Client rotation is on, will bootstrap in background" Sep 16 04:54:22.095681 kubelet[2723]: I0916 04:54:22.095655 2723 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 16 04:54:22.142297 kubelet[2723]: I0916 04:54:22.142240 2723 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:54:22.147060 kubelet[2723]: I0916 04:54:22.147027 2723 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:54:22.151678 kubelet[2723]: I0916 04:54:22.151639 2723 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:54:22.151852 kubelet[2723]: I0916 04:54:22.151737 2723 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 16 04:54:22.151955 kubelet[2723]: I0916 04:54:22.151915 2723 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:54:22.152098 kubelet[2723]: I0916 04:54:22.151947 2723 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 16 04:54:22.152183 kubelet[2723]: I0916 04:54:22.152104 2723 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:54:22.152183 kubelet[2723]: I0916 04:54:22.152113 2723 container_manager_linux.go:300] "Creating device plugin manager" Sep 16 04:54:22.152183 kubelet[2723]: I0916 04:54:22.152139 2723 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:54:22.152253 kubelet[2723]: I0916 04:54:22.152244 2723 kubelet.go:408] "Attempting to sync node with API server" Sep 16 04:54:22.152277 kubelet[2723]: I0916 04:54:22.152255 2723 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:54:22.152304 kubelet[2723]: I0916 04:54:22.152286 2723 kubelet.go:314] "Adding apiserver pod source" Sep 16 04:54:22.152304 kubelet[2723]: I0916 04:54:22.152299 2723 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:54:22.154891 kubelet[2723]: I0916 04:54:22.152788 2723 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:54:22.154891 kubelet[2723]: I0916 04:54:22.153240 2723 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:54:22.154891 kubelet[2723]: I0916 04:54:22.153650 2723 server.go:1274] "Started kubelet" Sep 16 04:54:22.154891 kubelet[2723]: I0916 04:54:22.154074 2723 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:54:22.155035 kubelet[2723]: I0916 04:54:22.154931 2723 server.go:449] "Adding debug handlers to kubelet server" Sep 16 04:54:22.156045 kubelet[2723]: I0916 04:54:22.156015 2723 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:54:22.156250 kubelet[2723]: I0916 04:54:22.156228 2723 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:54:22.159112 
kubelet[2723]: I0916 04:54:22.159087 2723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:54:22.160849 kubelet[2723]: I0916 04:54:22.160813 2723 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:54:22.164323 kubelet[2723]: E0916 04:54:22.164250 2723 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:54:22.164709 kubelet[2723]: I0916 04:54:22.164677 2723 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 16 04:54:22.165022 kubelet[2723]: I0916 04:54:22.164982 2723 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 16 04:54:22.165196 kubelet[2723]: I0916 04:54:22.165163 2723 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:54:22.166399 kubelet[2723]: I0916 04:54:22.166358 2723 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:54:22.166494 kubelet[2723]: I0916 04:54:22.166480 2723 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:54:22.169505 kubelet[2723]: I0916 04:54:22.169458 2723 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:54:22.173438 kubelet[2723]: I0916 04:54:22.173408 2723 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:54:22.175156 kubelet[2723]: I0916 04:54:22.175136 2723 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 16 04:54:22.175212 kubelet[2723]: I0916 04:54:22.175161 2723 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 16 04:54:22.175293 kubelet[2723]: I0916 04:54:22.175280 2723 kubelet.go:2321] "Starting kubelet main sync loop" Sep 16 04:54:22.175338 kubelet[2723]: E0916 04:54:22.175320 2723 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:54:22.235314 kubelet[2723]: I0916 04:54:22.235271 2723 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 16 04:54:22.235314 kubelet[2723]: I0916 04:54:22.235291 2723 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 16 04:54:22.235314 kubelet[2723]: I0916 04:54:22.235313 2723 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:54:22.235501 kubelet[2723]: I0916 04:54:22.235460 2723 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:54:22.235501 kubelet[2723]: I0916 04:54:22.235470 2723 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:54:22.235501 kubelet[2723]: I0916 04:54:22.235486 2723 policy_none.go:49] "None policy: Start" Sep 16 04:54:22.236013 kubelet[2723]: I0916 04:54:22.235995 2723 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 16 04:54:22.236013 kubelet[2723]: I0916 04:54:22.236016 2723 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:54:22.236149 kubelet[2723]: I0916 04:54:22.236135 2723 state_mem.go:75] "Updated machine memory state" Sep 16 04:54:22.240471 kubelet[2723]: I0916 04:54:22.240446 2723 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:54:22.240658 kubelet[2723]: I0916 04:54:22.240642 2723 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:54:22.240691 kubelet[2723]: I0916 04:54:22.240655 2723 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:54:22.241727 kubelet[2723]: I0916 04:54:22.240899 2723 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:54:22.345021 kubelet[2723]: I0916 04:54:22.344896 2723 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 16 04:54:22.366444 kubelet[2723]: I0916 04:54:22.366392 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 16 04:54:22.366613 kubelet[2723]: I0916 04:54:22.366439 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8c0469a896bfd9c0c49e01a39d8619c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8c0469a896bfd9c0c49e01a39d8619c\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:54:22.366689 kubelet[2723]: I0916 04:54:22.366670 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:22.366718 kubelet[2723]: I0916 04:54:22.366701 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:22.366746 kubelet[2723]: I0916 04:54:22.366726 2723 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:22.366900 kubelet[2723]: I0916 04:54:22.366846 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:22.366942 kubelet[2723]: I0916 04:54:22.366907 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8c0469a896bfd9c0c49e01a39d8619c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d8c0469a896bfd9c0c49e01a39d8619c\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:54:22.366942 kubelet[2723]: I0916 04:54:22.366931 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8c0469a896bfd9c0c49e01a39d8619c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d8c0469a896bfd9c0c49e01a39d8619c\") " pod="kube-system/kube-apiserver-localhost" Sep 16 04:54:22.367022 kubelet[2723]: I0916 04:54:22.366953 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:22.458428 kubelet[2723]: I0916 04:54:22.458381 2723 
kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 16 04:54:22.458605 kubelet[2723]: I0916 04:54:22.458490 2723 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 16 04:54:23.153393 kubelet[2723]: I0916 04:54:23.153319 2723 apiserver.go:52] "Watching apiserver" Sep 16 04:54:23.165676 kubelet[2723]: I0916 04:54:23.165650 2723 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 16 04:54:23.477276 kubelet[2723]: E0916 04:54:23.477073 2723 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 16 04:54:23.477764 kubelet[2723]: E0916 04:54:23.477701 2723 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 16 04:54:23.478345 kubelet[2723]: I0916 04:54:23.478230 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.478211363 podStartE2EDuration="1.478211363s" podCreationTimestamp="2025-09-16 04:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:23.477533144 +0000 UTC m=+1.421348552" watchObservedRunningTime="2025-09-16 04:54:23.478211363 +0000 UTC m=+1.422026761" Sep 16 04:54:23.478424 kubelet[2723]: E0916 04:54:23.478405 2723 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 16 04:54:23.838270 kubelet[2723]: I0916 04:54:23.838210 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.838190083 podStartE2EDuration="1.838190083s" podCreationTimestamp="2025-09-16 04:54:22 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:23.740814228 +0000 UTC m=+1.684629636" watchObservedRunningTime="2025-09-16 04:54:23.838190083 +0000 UTC m=+1.782005491" Sep 16 04:54:23.838459 kubelet[2723]: I0916 04:54:23.838312 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8383064949999999 podStartE2EDuration="1.838306495s" podCreationTimestamp="2025-09-16 04:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:23.838148824 +0000 UTC m=+1.781964222" watchObservedRunningTime="2025-09-16 04:54:23.838306495 +0000 UTC m=+1.782121903" Sep 16 04:54:27.628823 kubelet[2723]: I0916 04:54:27.628738 2723 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:54:27.629536 containerd[1572]: time="2025-09-16T04:54:27.629266493Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:54:27.629911 kubelet[2723]: I0916 04:54:27.629561 2723 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:54:28.429105 systemd[1]: Created slice kubepods-besteffort-pod581f9d10_315d_45ac_951e_5a4d45e5622e.slice - libcontainer container kubepods-besteffort-pod581f9d10_315d_45ac_951e_5a4d45e5622e.slice. 
Sep 16 04:54:28.505566 kubelet[2723]: I0916 04:54:28.505502 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/581f9d10-315d-45ac-951e-5a4d45e5622e-xtables-lock\") pod \"kube-proxy-zz4xg\" (UID: \"581f9d10-315d-45ac-951e-5a4d45e5622e\") " pod="kube-system/kube-proxy-zz4xg" Sep 16 04:54:28.505566 kubelet[2723]: I0916 04:54:28.505558 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86kwv\" (UniqueName: \"kubernetes.io/projected/581f9d10-315d-45ac-951e-5a4d45e5622e-kube-api-access-86kwv\") pod \"kube-proxy-zz4xg\" (UID: \"581f9d10-315d-45ac-951e-5a4d45e5622e\") " pod="kube-system/kube-proxy-zz4xg" Sep 16 04:54:28.505761 kubelet[2723]: I0916 04:54:28.505584 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/581f9d10-315d-45ac-951e-5a4d45e5622e-kube-proxy\") pod \"kube-proxy-zz4xg\" (UID: \"581f9d10-315d-45ac-951e-5a4d45e5622e\") " pod="kube-system/kube-proxy-zz4xg" Sep 16 04:54:28.505761 kubelet[2723]: I0916 04:54:28.505599 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/581f9d10-315d-45ac-951e-5a4d45e5622e-lib-modules\") pod \"kube-proxy-zz4xg\" (UID: \"581f9d10-315d-45ac-951e-5a4d45e5622e\") " pod="kube-system/kube-proxy-zz4xg" Sep 16 04:54:29.020232 systemd[1]: Created slice kubepods-besteffort-pod6e37260d_d1a1_4c54_8664_dbbd6b232cb8.slice - libcontainer container kubepods-besteffort-pod6e37260d_d1a1_4c54_8664_dbbd6b232cb8.slice. 
Sep 16 04:54:29.038406 containerd[1572]: time="2025-09-16T04:54:29.038355883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zz4xg,Uid:581f9d10-315d-45ac-951e-5a4d45e5622e,Namespace:kube-system,Attempt:0,}"
Sep 16 04:54:29.064339 containerd[1572]: time="2025-09-16T04:54:29.064284267Z" level=info msg="connecting to shim c7907dd94bdf4cc413f2f77c1b628351b63c0155f9ffc4d94825ab06e7e3e96a" address="unix:///run/containerd/s/22b1c65f97b9fafe1eec5fa7b591809dd5268242ceb07cefb827272d82cd6bb1" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:54:29.104185 systemd[1]: Started cri-containerd-c7907dd94bdf4cc413f2f77c1b628351b63c0155f9ffc4d94825ab06e7e3e96a.scope - libcontainer container c7907dd94bdf4cc413f2f77c1b628351b63c0155f9ffc4d94825ab06e7e3e96a.
Sep 16 04:54:29.108768 kubelet[2723]: I0916 04:54:29.108714 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6e37260d-d1a1-4c54-8664-dbbd6b232cb8-var-lib-calico\") pod \"tigera-operator-58fc44c59b-n4c25\" (UID: \"6e37260d-d1a1-4c54-8664-dbbd6b232cb8\") " pod="tigera-operator/tigera-operator-58fc44c59b-n4c25"
Sep 16 04:54:29.109273 kubelet[2723]: I0916 04:54:29.108777 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s7gt\" (UniqueName: \"kubernetes.io/projected/6e37260d-d1a1-4c54-8664-dbbd6b232cb8-kube-api-access-2s7gt\") pod \"tigera-operator-58fc44c59b-n4c25\" (UID: \"6e37260d-d1a1-4c54-8664-dbbd6b232cb8\") " pod="tigera-operator/tigera-operator-58fc44c59b-n4c25"
Sep 16 04:54:29.139563 containerd[1572]: time="2025-09-16T04:54:29.139491011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zz4xg,Uid:581f9d10-315d-45ac-951e-5a4d45e5622e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7907dd94bdf4cc413f2f77c1b628351b63c0155f9ffc4d94825ab06e7e3e96a\""
Sep 16 04:54:29.142794 containerd[1572]: time="2025-09-16T04:54:29.142748811Z" level=info msg="CreateContainer within sandbox \"c7907dd94bdf4cc413f2f77c1b628351b63c0155f9ffc4d94825ab06e7e3e96a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 16 04:54:29.156126 containerd[1572]: time="2025-09-16T04:54:29.156069906Z" level=info msg="Container 83c8c7640503167efd92a1a1b2e93054df17ce4ff4d7455118568a6f6a15e07d: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:54:29.171290 containerd[1572]: time="2025-09-16T04:54:29.171224670Z" level=info msg="CreateContainer within sandbox \"c7907dd94bdf4cc413f2f77c1b628351b63c0155f9ffc4d94825ab06e7e3e96a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"83c8c7640503167efd92a1a1b2e93054df17ce4ff4d7455118568a6f6a15e07d\""
Sep 16 04:54:29.171895 containerd[1572]: time="2025-09-16T04:54:29.171795818Z" level=info msg="StartContainer for \"83c8c7640503167efd92a1a1b2e93054df17ce4ff4d7455118568a6f6a15e07d\""
Sep 16 04:54:29.173322 containerd[1572]: time="2025-09-16T04:54:29.173280342Z" level=info msg="connecting to shim 83c8c7640503167efd92a1a1b2e93054df17ce4ff4d7455118568a6f6a15e07d" address="unix:///run/containerd/s/22b1c65f97b9fafe1eec5fa7b591809dd5268242ceb07cefb827272d82cd6bb1" protocol=ttrpc version=3
Sep 16 04:54:29.199104 systemd[1]: Started cri-containerd-83c8c7640503167efd92a1a1b2e93054df17ce4ff4d7455118568a6f6a15e07d.scope - libcontainer container 83c8c7640503167efd92a1a1b2e93054df17ce4ff4d7455118568a6f6a15e07d.
Sep 16 04:54:29.255127 containerd[1572]: time="2025-09-16T04:54:29.255028714Z" level=info msg="StartContainer for \"83c8c7640503167efd92a1a1b2e93054df17ce4ff4d7455118568a6f6a15e07d\" returns successfully"
Sep 16 04:54:29.324077 containerd[1572]: time="2025-09-16T04:54:29.323928609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-n4c25,Uid:6e37260d-d1a1-4c54-8664-dbbd6b232cb8,Namespace:tigera-operator,Attempt:0,}"
Sep 16 04:54:29.347334 containerd[1572]: time="2025-09-16T04:54:29.347267335Z" level=info msg="connecting to shim d94c2e2ccee5185cdc7cfd043214c4eb8e8832229c57bee55004682585bdd62b" address="unix:///run/containerd/s/1186114b830e39c3bab425260f42e0f72ddd47986855c1d99f0a77c59fa62239" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:54:29.382194 systemd[1]: Started cri-containerd-d94c2e2ccee5185cdc7cfd043214c4eb8e8832229c57bee55004682585bdd62b.scope - libcontainer container d94c2e2ccee5185cdc7cfd043214c4eb8e8832229c57bee55004682585bdd62b.
Sep 16 04:54:29.437463 containerd[1572]: time="2025-09-16T04:54:29.437400410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-n4c25,Uid:6e37260d-d1a1-4c54-8664-dbbd6b232cb8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d94c2e2ccee5185cdc7cfd043214c4eb8e8832229c57bee55004682585bdd62b\""
Sep 16 04:54:29.439388 containerd[1572]: time="2025-09-16T04:54:29.439347515Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 16 04:54:30.217180 kubelet[2723]: I0916 04:54:30.217072 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zz4xg" podStartSLOduration=2.217036244 podStartE2EDuration="2.217036244s" podCreationTimestamp="2025-09-16 04:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:54:30.216721757 +0000 UTC m=+8.160537165" watchObservedRunningTime="2025-09-16 04:54:30.217036244 +0000 UTC m=+8.160851653"
Sep 16 04:54:31.110257 update_engine[1512]: I20250916 04:54:31.110144 1512 update_attempter.cc:509] Updating boot flags...
Sep 16 04:54:31.426719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390875314.mount: Deactivated successfully.
Sep 16 04:54:32.028253 containerd[1572]: time="2025-09-16T04:54:32.028173117Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:32.028983 containerd[1572]: time="2025-09-16T04:54:32.028905676Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 16 04:54:32.029949 containerd[1572]: time="2025-09-16T04:54:32.029904322Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:32.033296 containerd[1572]: time="2025-09-16T04:54:32.033232358Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:54:32.033848 containerd[1572]: time="2025-09-16T04:54:32.033801178Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.594422835s"
Sep 16 04:54:32.033848 containerd[1572]: time="2025-09-16T04:54:32.033836635Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 16 04:54:32.036016 containerd[1572]: time="2025-09-16T04:54:32.035983159Z" level=info msg="CreateContainer within sandbox \"d94c2e2ccee5185cdc7cfd043214c4eb8e8832229c57bee55004682585bdd62b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 16 04:54:32.045631 containerd[1572]: time="2025-09-16T04:54:32.045575684Z" level=info msg="Container 724665accbd47d9d0ba09e149acf67bcab74d6b792b5d79598aac3367665741d: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:54:32.054024 containerd[1572]: time="2025-09-16T04:54:32.053980205Z" level=info msg="CreateContainer within sandbox \"d94c2e2ccee5185cdc7cfd043214c4eb8e8832229c57bee55004682585bdd62b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"724665accbd47d9d0ba09e149acf67bcab74d6b792b5d79598aac3367665741d\""
Sep 16 04:54:32.054579 containerd[1572]: time="2025-09-16T04:54:32.054552601Z" level=info msg="StartContainer for \"724665accbd47d9d0ba09e149acf67bcab74d6b792b5d79598aac3367665741d\""
Sep 16 04:54:32.055520 containerd[1572]: time="2025-09-16T04:54:32.055495110Z" level=info msg="connecting to shim 724665accbd47d9d0ba09e149acf67bcab74d6b792b5d79598aac3367665741d" address="unix:///run/containerd/s/1186114b830e39c3bab425260f42e0f72ddd47986855c1d99f0a77c59fa62239" protocol=ttrpc version=3
Sep 16 04:54:32.112155 systemd[1]: Started cri-containerd-724665accbd47d9d0ba09e149acf67bcab74d6b792b5d79598aac3367665741d.scope - libcontainer container 724665accbd47d9d0ba09e149acf67bcab74d6b792b5d79598aac3367665741d.
Sep 16 04:54:32.442995 containerd[1572]: time="2025-09-16T04:54:32.442813102Z" level=info msg="StartContainer for \"724665accbd47d9d0ba09e149acf67bcab74d6b792b5d79598aac3367665741d\" returns successfully"
Sep 16 04:54:38.574498 sudo[1785]: pam_unix(sudo:session): session closed for user root
Sep 16 04:54:38.576910 sshd[1784]: Connection closed by 10.0.0.1 port 47120
Sep 16 04:54:38.577636 sshd-session[1781]: pam_unix(sshd:session): session closed for user core
Sep 16 04:54:38.583677 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:47120.service: Deactivated successfully.
Sep 16 04:54:38.587408 systemd[1]: session-7.scope: Deactivated successfully.
Sep 16 04:54:38.588142 systemd[1]: session-7.scope: Consumed 5.110s CPU time, 223.8M memory peak.
Sep 16 04:54:38.590906 systemd-logind[1508]: Session 7 logged out. Waiting for processes to exit.
Sep 16 04:54:38.594253 systemd-logind[1508]: Removed session 7.
Sep 16 04:54:42.207153 kubelet[2723]: I0916 04:54:42.207072 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-n4c25" podStartSLOduration=11.611205424 podStartE2EDuration="14.207048246s" podCreationTimestamp="2025-09-16 04:54:28 +0000 UTC" firstStartedPulling="2025-09-16 04:54:29.438756551 +0000 UTC m=+7.382571959" lastFinishedPulling="2025-09-16 04:54:32.034599373 +0000 UTC m=+9.978414781" observedRunningTime="2025-09-16 04:54:33.496891366 +0000 UTC m=+11.440706774" watchObservedRunningTime="2025-09-16 04:54:42.207048246 +0000 UTC m=+20.150863654"
Sep 16 04:54:42.209773 kubelet[2723]: W0916 04:54:42.209719 2723 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Sep 16 04:54:42.210875 kubelet[2723]: E0916 04:54:42.210259 2723 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 16 04:54:42.219103 systemd[1]: Created slice kubepods-besteffort-podb4d13110_3883_483e_8a1c_0994abd1c9bd.slice - libcontainer container kubepods-besteffort-podb4d13110_3883_483e_8a1c_0994abd1c9bd.slice.
Sep 16 04:54:42.292233 kubelet[2723]: I0916 04:54:42.292068 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hllv2\" (UniqueName: \"kubernetes.io/projected/b4d13110-3883-483e-8a1c-0994abd1c9bd-kube-api-access-hllv2\") pod \"calico-typha-cc48b54bc-6s459\" (UID: \"b4d13110-3883-483e-8a1c-0994abd1c9bd\") " pod="calico-system/calico-typha-cc48b54bc-6s459"
Sep 16 04:54:42.292233 kubelet[2723]: I0916 04:54:42.292134 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d13110-3883-483e-8a1c-0994abd1c9bd-tigera-ca-bundle\") pod \"calico-typha-cc48b54bc-6s459\" (UID: \"b4d13110-3883-483e-8a1c-0994abd1c9bd\") " pod="calico-system/calico-typha-cc48b54bc-6s459"
Sep 16 04:54:42.292233 kubelet[2723]: I0916 04:54:42.292152 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b4d13110-3883-483e-8a1c-0994abd1c9bd-typha-certs\") pod \"calico-typha-cc48b54bc-6s459\" (UID: \"b4d13110-3883-483e-8a1c-0994abd1c9bd\") " pod="calico-system/calico-typha-cc48b54bc-6s459"
Sep 16 04:54:42.603716 systemd[1]: Created slice kubepods-besteffort-podff2878a7_e34f_4fe6_b0b7_fb1c0a91d852.slice - libcontainer container kubepods-besteffort-podff2878a7_e34f_4fe6_b0b7_fb1c0a91d852.slice.
Sep 16 04:54:42.695476 kubelet[2723]: I0916 04:54:42.695426 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-policysync\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695476 kubelet[2723]: I0916 04:54:42.695473 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-xtables-lock\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695679 kubelet[2723]: I0916 04:54:42.695496 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5949l\" (UniqueName: \"kubernetes.io/projected/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-kube-api-access-5949l\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695679 kubelet[2723]: I0916 04:54:42.695528 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-cni-net-dir\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695729 kubelet[2723]: I0916 04:54:42.695646 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-flexvol-driver-host\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695769 kubelet[2723]: I0916 04:54:42.695724 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-tigera-ca-bundle\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695798 kubelet[2723]: I0916 04:54:42.695786 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-cni-bin-dir\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695826 kubelet[2723]: I0916 04:54:42.695811 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-cni-log-dir\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695854 kubelet[2723]: I0916 04:54:42.695834 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-node-certs\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695897 kubelet[2723]: I0916 04:54:42.695879 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-var-lib-calico\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695924 kubelet[2723]: I0916 04:54:42.695905 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-var-run-calico\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.695950 kubelet[2723]: I0916 04:54:42.695927 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852-lib-modules\") pod \"calico-node-bm9nh\" (UID: \"ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852\") " pod="calico-system/calico-node-bm9nh"
Sep 16 04:54:42.799009 kubelet[2723]: E0916 04:54:42.798924 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.799009 kubelet[2723]: W0916 04:54:42.798990 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.799242 kubelet[2723]: E0916 04:54:42.799058 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.802547 kubelet[2723]: E0916 04:54:42.802507 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.802547 kubelet[2723]: W0916 04:54:42.802532 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.802547 kubelet[2723]: E0916 04:54:42.802549 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.806045 kubelet[2723]: E0916 04:54:42.806018 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.806045 kubelet[2723]: W0916 04:54:42.806035 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.806045 kubelet[2723]: E0916 04:54:42.806048 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.839356 kubelet[2723]: E0916 04:54:42.839272 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ld4qv" podUID="6287dfa0-7875-4f8b-8630-23e2f6643cbc"
Sep 16 04:54:42.883319 kubelet[2723]: E0916 04:54:42.883142 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.883319 kubelet[2723]: W0916 04:54:42.883172 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.883319 kubelet[2723]: E0916 04:54:42.883199 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.883665 kubelet[2723]: E0916 04:54:42.883610 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.883713 kubelet[2723]: W0916 04:54:42.883666 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.883713 kubelet[2723]: E0916 04:54:42.883706 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.884237 kubelet[2723]: E0916 04:54:42.884189 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.884237 kubelet[2723]: W0916 04:54:42.884210 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.884237 kubelet[2723]: E0916 04:54:42.884225 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.884881 kubelet[2723]: E0916 04:54:42.884546 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.884881 kubelet[2723]: W0916 04:54:42.884568 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.884881 kubelet[2723]: E0916 04:54:42.884586 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.885107 kubelet[2723]: E0916 04:54:42.885027 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.885107 kubelet[2723]: W0916 04:54:42.885075 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.885107 kubelet[2723]: E0916 04:54:42.885113 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.885479 kubelet[2723]: E0916 04:54:42.885453 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.885479 kubelet[2723]: W0916 04:54:42.885472 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.885566 kubelet[2723]: E0916 04:54:42.885485 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.885817 kubelet[2723]: E0916 04:54:42.885778 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.885817 kubelet[2723]: W0916 04:54:42.885809 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.885900 kubelet[2723]: E0916 04:54:42.885823 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.886220 kubelet[2723]: E0916 04:54:42.886198 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.886220 kubelet[2723]: W0916 04:54:42.886215 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.886220 kubelet[2723]: E0916 04:54:42.886228 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.886519 kubelet[2723]: E0916 04:54:42.886496 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.886519 kubelet[2723]: W0916 04:54:42.886513 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.886586 kubelet[2723]: E0916 04:54:42.886525 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.886771 kubelet[2723]: E0916 04:54:42.886751 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.886771 kubelet[2723]: W0916 04:54:42.886766 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.886854 kubelet[2723]: E0916 04:54:42.886778 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.887145 kubelet[2723]: E0916 04:54:42.887102 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.887193 kubelet[2723]: W0916 04:54:42.887137 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.887230 kubelet[2723]: E0916 04:54:42.887175 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.887616 kubelet[2723]: E0916 04:54:42.887572 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.887616 kubelet[2723]: W0916 04:54:42.887606 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.887616 kubelet[2723]: E0916 04:54:42.887622 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.887947 kubelet[2723]: E0916 04:54:42.887921 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.887947 kubelet[2723]: W0916 04:54:42.887938 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.888051 kubelet[2723]: E0916 04:54:42.887954 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.888195 kubelet[2723]: E0916 04:54:42.888171 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.888195 kubelet[2723]: W0916 04:54:42.888187 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.888285 kubelet[2723]: E0916 04:54:42.888200 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.888523 kubelet[2723]: E0916 04:54:42.888500 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.888523 kubelet[2723]: W0916 04:54:42.888518 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.888628 kubelet[2723]: E0916 04:54:42.888536 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.888802 kubelet[2723]: E0916 04:54:42.888769 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.888802 kubelet[2723]: W0916 04:54:42.888794 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.889394 kubelet[2723]: E0916 04:54:42.888807 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.889394 kubelet[2723]: E0916 04:54:42.889085 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.889394 kubelet[2723]: W0916 04:54:42.889097 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.889394 kubelet[2723]: E0916 04:54:42.889110 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.889394 kubelet[2723]: E0916 04:54:42.889392 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.889571 kubelet[2723]: W0916 04:54:42.889408 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.889571 kubelet[2723]: E0916 04:54:42.889422 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.889682 kubelet[2723]: E0916 04:54:42.889645 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.889682 kubelet[2723]: W0916 04:54:42.889675 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.889783 kubelet[2723]: E0916 04:54:42.889689 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.890015 kubelet[2723]: E0916 04:54:42.889992 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.890055 kubelet[2723]: W0916 04:54:42.890012 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.890055 kubelet[2723]: E0916 04:54:42.890028 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.898726 kubelet[2723]: E0916 04:54:42.898674 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.898726 kubelet[2723]: W0916 04:54:42.898716 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.898807 kubelet[2723]: E0916 04:54:42.898759 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.898839 kubelet[2723]: I0916 04:54:42.898804 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v8bc\" (UniqueName: \"kubernetes.io/projected/6287dfa0-7875-4f8b-8630-23e2f6643cbc-kube-api-access-8v8bc\") pod \"csi-node-driver-ld4qv\" (UID: \"6287dfa0-7875-4f8b-8630-23e2f6643cbc\") " pod="calico-system/csi-node-driver-ld4qv"
Sep 16 04:54:42.899215 kubelet[2723]: E0916 04:54:42.899157 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.899215 kubelet[2723]: W0916 04:54:42.899182 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.899215 kubelet[2723]: E0916 04:54:42.899208 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 16 04:54:42.899475 kubelet[2723]: I0916 04:54:42.899245 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6287dfa0-7875-4f8b-8630-23e2f6643cbc-varrun\") pod \"csi-node-driver-ld4qv\" (UID: \"6287dfa0-7875-4f8b-8630-23e2f6643cbc\") " pod="calico-system/csi-node-driver-ld4qv"
Sep 16 04:54:42.899513 kubelet[2723]: E0916 04:54:42.899491 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:54:42.899513 kubelet[2723]: W0916 04:54:42.899502 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:54:42.899579 kubelet[2723]: E0916 04:54:42.899520 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 16 04:54:42.899579 kubelet[2723]: I0916 04:54:42.899539 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6287dfa0-7875-4f8b-8630-23e2f6643cbc-kubelet-dir\") pod \"csi-node-driver-ld4qv\" (UID: \"6287dfa0-7875-4f8b-8630-23e2f6643cbc\") " pod="calico-system/csi-node-driver-ld4qv" Sep 16 04:54:42.899801 kubelet[2723]: E0916 04:54:42.899763 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.899801 kubelet[2723]: W0916 04:54:42.899783 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.899801 kubelet[2723]: E0916 04:54:42.899803 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:42.900060 kubelet[2723]: E0916 04:54:42.900027 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.900060 kubelet[2723]: W0916 04:54:42.900044 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.900060 kubelet[2723]: E0916 04:54:42.900062 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:42.900513 kubelet[2723]: E0916 04:54:42.900452 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.900513 kubelet[2723]: W0916 04:54:42.900497 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.900602 kubelet[2723]: E0916 04:54:42.900540 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:42.900791 kubelet[2723]: E0916 04:54:42.900767 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.900791 kubelet[2723]: W0916 04:54:42.900784 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.900930 kubelet[2723]: E0916 04:54:42.900822 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:42.901137 kubelet[2723]: E0916 04:54:42.901075 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.901137 kubelet[2723]: W0916 04:54:42.901097 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.901137 kubelet[2723]: E0916 04:54:42.901137 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:42.901356 kubelet[2723]: E0916 04:54:42.901335 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.901356 kubelet[2723]: W0916 04:54:42.901351 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.901445 kubelet[2723]: E0916 04:54:42.901372 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:42.901445 kubelet[2723]: I0916 04:54:42.901414 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6287dfa0-7875-4f8b-8630-23e2f6643cbc-socket-dir\") pod \"csi-node-driver-ld4qv\" (UID: \"6287dfa0-7875-4f8b-8630-23e2f6643cbc\") " pod="calico-system/csi-node-driver-ld4qv" Sep 16 04:54:42.901665 kubelet[2723]: E0916 04:54:42.901641 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.901665 kubelet[2723]: W0916 04:54:42.901659 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.901775 kubelet[2723]: E0916 04:54:42.901675 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:42.901958 kubelet[2723]: E0916 04:54:42.901934 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.901958 kubelet[2723]: W0916 04:54:42.901953 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.902051 kubelet[2723]: E0916 04:54:42.901977 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:42.902268 kubelet[2723]: E0916 04:54:42.902223 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.902268 kubelet[2723]: W0916 04:54:42.902243 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.902268 kubelet[2723]: E0916 04:54:42.902265 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:42.902549 kubelet[2723]: E0916 04:54:42.902455 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.902549 kubelet[2723]: W0916 04:54:42.902464 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.902549 kubelet[2723]: E0916 04:54:42.902475 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:42.902719 kubelet[2723]: E0916 04:54:42.902680 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.902719 kubelet[2723]: W0916 04:54:42.902695 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.902719 kubelet[2723]: E0916 04:54:42.902704 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:42.902888 kubelet[2723]: I0916 04:54:42.902728 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6287dfa0-7875-4f8b-8630-23e2f6643cbc-registration-dir\") pod \"csi-node-driver-ld4qv\" (UID: \"6287dfa0-7875-4f8b-8630-23e2f6643cbc\") " pod="calico-system/csi-node-driver-ld4qv" Sep 16 04:54:42.902991 kubelet[2723]: E0916 04:54:42.902971 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.902991 kubelet[2723]: W0916 04:54:42.902984 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.902991 kubelet[2723]: E0916 04:54:42.902993 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:42.903186 kubelet[2723]: E0916 04:54:42.903167 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:42.903186 kubelet[2723]: W0916 04:54:42.903178 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:42.903186 kubelet[2723]: E0916 04:54:42.903187 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:42.907130 containerd[1572]: time="2025-09-16T04:54:42.907042763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bm9nh,Uid:ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:42.962570 containerd[1572]: time="2025-09-16T04:54:42.962178745Z" level=info msg="connecting to shim 144cfcef0a99b21f4156d17757dc03e62dfe5eb0b1d82fb1fb49d88a939ad9fa" address="unix:///run/containerd/s/9b2e8fbebef13517ce967d710b5fdd8ea0e92dbe3aaabedf9ad25547a3f2b4f5" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:42.996208 systemd[1]: Started cri-containerd-144cfcef0a99b21f4156d17757dc03e62dfe5eb0b1d82fb1fb49d88a939ad9fa.scope - libcontainer container 144cfcef0a99b21f4156d17757dc03e62dfe5eb0b1d82fb1fb49d88a939ad9fa. 
Sep 16 04:54:43.003882 kubelet[2723]: E0916 04:54:43.003687 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.003882 kubelet[2723]: W0916 04:54:43.003711 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.003882 kubelet[2723]: E0916 04:54:43.003744 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.004135 kubelet[2723]: E0916 04:54:43.004105 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.004135 kubelet[2723]: W0916 04:54:43.004118 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.004281 kubelet[2723]: E0916 04:54:43.004229 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.004575 kubelet[2723]: E0916 04:54:43.004562 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.004711 kubelet[2723]: W0916 04:54:43.004637 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.004711 kubelet[2723]: E0916 04:54:43.004659 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.005023 kubelet[2723]: E0916 04:54:43.004997 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.005023 kubelet[2723]: W0916 04:54:43.005008 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.005160 kubelet[2723]: E0916 04:54:43.005108 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.005455 kubelet[2723]: E0916 04:54:43.005442 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.005531 kubelet[2723]: W0916 04:54:43.005513 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.005654 kubelet[2723]: E0916 04:54:43.005639 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.005999 kubelet[2723]: E0916 04:54:43.005986 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.006115 kubelet[2723]: W0916 04:54:43.006063 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.006239 kubelet[2723]: E0916 04:54:43.006145 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.006365 kubelet[2723]: E0916 04:54:43.006353 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.006462 kubelet[2723]: W0916 04:54:43.006412 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.006503 kubelet[2723]: E0916 04:54:43.006475 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.006755 kubelet[2723]: E0916 04:54:43.006743 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.006873 kubelet[2723]: W0916 04:54:43.006811 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.006957 kubelet[2723]: E0916 04:54:43.006944 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.007209 kubelet[2723]: E0916 04:54:43.007184 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.007209 kubelet[2723]: W0916 04:54:43.007195 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.007399 kubelet[2723]: E0916 04:54:43.007343 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.007609 kubelet[2723]: E0916 04:54:43.007596 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.007711 kubelet[2723]: W0916 04:54:43.007656 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.007805 kubelet[2723]: E0916 04:54:43.007756 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.008226 kubelet[2723]: E0916 04:54:43.008210 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.008396 kubelet[2723]: W0916 04:54:43.008308 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.008428 kubelet[2723]: E0916 04:54:43.008382 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.008688 kubelet[2723]: E0916 04:54:43.008661 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.008688 kubelet[2723]: W0916 04:54:43.008674 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.008911 kubelet[2723]: E0916 04:54:43.008889 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.009160 kubelet[2723]: E0916 04:54:43.009148 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.009245 kubelet[2723]: W0916 04:54:43.009216 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.009349 kubelet[2723]: E0916 04:54:43.009326 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.009595 kubelet[2723]: E0916 04:54:43.009559 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.009595 kubelet[2723]: W0916 04:54:43.009570 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.009900 kubelet[2723]: E0916 04:54:43.009724 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.010062 kubelet[2723]: E0916 04:54:43.010049 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.010171 kubelet[2723]: W0916 04:54:43.010134 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.010284 kubelet[2723]: E0916 04:54:43.010269 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.010581 kubelet[2723]: E0916 04:54:43.010569 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.010689 kubelet[2723]: W0916 04:54:43.010636 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.010800 kubelet[2723]: E0916 04:54:43.010727 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.011568 kubelet[2723]: E0916 04:54:43.011555 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.011630 kubelet[2723]: W0916 04:54:43.011618 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.011725 kubelet[2723]: E0916 04:54:43.011713 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.012135 kubelet[2723]: E0916 04:54:43.012110 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.012135 kubelet[2723]: W0916 04:54:43.012121 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.012281 kubelet[2723]: E0916 04:54:43.012260 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.012719 kubelet[2723]: E0916 04:54:43.012659 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.012719 kubelet[2723]: W0916 04:54:43.012671 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.012904 kubelet[2723]: E0916 04:54:43.012752 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.013294 kubelet[2723]: E0916 04:54:43.013188 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.013294 kubelet[2723]: W0916 04:54:43.013201 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.013573 kubelet[2723]: E0916 04:54:43.013466 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.013828 kubelet[2723]: E0916 04:54:43.013814 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.013993 kubelet[2723]: W0916 04:54:43.013925 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.014187 kubelet[2723]: E0916 04:54:43.014174 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.014342 kubelet[2723]: E0916 04:54:43.014316 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.014342 kubelet[2723]: W0916 04:54:43.014328 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.014522 kubelet[2723]: E0916 04:54:43.014475 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.014758 kubelet[2723]: E0916 04:54:43.014724 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.014887 kubelet[2723]: W0916 04:54:43.014805 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.014963 kubelet[2723]: E0916 04:54:43.014899 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.015219 kubelet[2723]: E0916 04:54:43.015192 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.015219 kubelet[2723]: W0916 04:54:43.015204 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.015421 kubelet[2723]: E0916 04:54:43.015403 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.015701 kubelet[2723]: E0916 04:54:43.015688 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.015846 kubelet[2723]: W0916 04:54:43.015763 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.015947 kubelet[2723]: E0916 04:54:43.015933 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.016395 kubelet[2723]: E0916 04:54:43.016223 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.016395 kubelet[2723]: W0916 04:54:43.016238 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.016395 kubelet[2723]: E0916 04:54:43.016250 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.027012 kubelet[2723]: E0916 04:54:43.026983 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.027150 kubelet[2723]: W0916 04:54:43.027136 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.027212 kubelet[2723]: E0916 04:54:43.027200 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.034585 containerd[1572]: time="2025-09-16T04:54:43.034520751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bm9nh,Uid:ff2878a7-e34f-4fe6-b0b7-fb1c0a91d852,Namespace:calico-system,Attempt:0,} returns sandbox id \"144cfcef0a99b21f4156d17757dc03e62dfe5eb0b1d82fb1fb49d88a939ad9fa\"" Sep 16 04:54:43.036801 containerd[1572]: time="2025-09-16T04:54:43.036433930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 16 04:54:43.108265 kubelet[2723]: E0916 04:54:43.108215 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.108265 kubelet[2723]: W0916 04:54:43.108249 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.108265 kubelet[2723]: E0916 04:54:43.108278 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.209459 kubelet[2723]: E0916 04:54:43.209301 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.209459 kubelet[2723]: W0916 04:54:43.209337 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.209459 kubelet[2723]: E0916 04:54:43.209364 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.310177 kubelet[2723]: E0916 04:54:43.310116 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.310177 kubelet[2723]: W0916 04:54:43.310151 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.310177 kubelet[2723]: E0916 04:54:43.310179 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.393959 kubelet[2723]: E0916 04:54:43.393890 2723 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Sep 16 04:54:43.394146 kubelet[2723]: E0916 04:54:43.394036 2723 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4d13110-3883-483e-8a1c-0994abd1c9bd-typha-certs podName:b4d13110-3883-483e-8a1c-0994abd1c9bd nodeName:}" failed. No retries permitted until 2025-09-16 04:54:43.893993993 +0000 UTC m=+21.837809401 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/b4d13110-3883-483e-8a1c-0994abd1c9bd-typha-certs") pod "calico-typha-cc48b54bc-6s459" (UID: "b4d13110-3883-483e-8a1c-0994abd1c9bd") : failed to sync secret cache: timed out waiting for the condition Sep 16 04:54:43.412247 kubelet[2723]: E0916 04:54:43.411304 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.412247 kubelet[2723]: W0916 04:54:43.411335 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.412247 kubelet[2723]: E0916 04:54:43.411363 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.512490 kubelet[2723]: E0916 04:54:43.512382 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.512490 kubelet[2723]: W0916 04:54:43.512412 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.512490 kubelet[2723]: E0916 04:54:43.512436 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.613458 kubelet[2723]: E0916 04:54:43.613347 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.613458 kubelet[2723]: W0916 04:54:43.613378 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.613458 kubelet[2723]: E0916 04:54:43.613405 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.714050 kubelet[2723]: E0916 04:54:43.714004 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.714050 kubelet[2723]: W0916 04:54:43.714032 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.714050 kubelet[2723]: E0916 04:54:43.714056 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.815013 kubelet[2723]: E0916 04:54:43.814885 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.815013 kubelet[2723]: W0916 04:54:43.814913 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.815013 kubelet[2723]: E0916 04:54:43.814937 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.915839 kubelet[2723]: E0916 04:54:43.915783 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.915839 kubelet[2723]: W0916 04:54:43.915816 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.915839 kubelet[2723]: E0916 04:54:43.915848 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.916294 kubelet[2723]: E0916 04:54:43.916272 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.916294 kubelet[2723]: W0916 04:54:43.916289 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.916372 kubelet[2723]: E0916 04:54:43.916302 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.916550 kubelet[2723]: E0916 04:54:43.916518 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.916550 kubelet[2723]: W0916 04:54:43.916534 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.916550 kubelet[2723]: E0916 04:54:43.916546 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.917129 kubelet[2723]: E0916 04:54:43.917102 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.917129 kubelet[2723]: W0916 04:54:43.917120 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.917242 kubelet[2723]: E0916 04:54:43.917146 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:43.917485 kubelet[2723]: E0916 04:54:43.917469 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.917524 kubelet[2723]: W0916 04:54:43.917484 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.917524 kubelet[2723]: E0916 04:54:43.917497 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:54:43.921473 kubelet[2723]: E0916 04:54:43.921451 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:54:43.921473 kubelet[2723]: W0916 04:54:43.921470 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:54:43.921574 kubelet[2723]: E0916 04:54:43.921494 2723 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:54:44.033677 kubelet[2723]: E0916 04:54:44.033606 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:54:44.034383 containerd[1572]: time="2025-09-16T04:54:44.034331813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cc48b54bc-6s459,Uid:b4d13110-3883-483e-8a1c-0994abd1c9bd,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:44.067539 containerd[1572]: time="2025-09-16T04:54:44.067222536Z" level=info msg="connecting to shim 2830aec116b1c956fe3e1c8677608f72e0c5ea4c44fe76f257e9d3f623a1b735" address="unix:///run/containerd/s/0ccd56f9b023402cc5334e725215880fc2c063e80cffb27479bea7216c7e3efb" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:54:44.105061 systemd[1]: Started cri-containerd-2830aec116b1c956fe3e1c8677608f72e0c5ea4c44fe76f257e9d3f623a1b735.scope - libcontainer container 2830aec116b1c956fe3e1c8677608f72e0c5ea4c44fe76f257e9d3f623a1b735. 
Sep 16 04:54:44.158924 containerd[1572]: time="2025-09-16T04:54:44.158845323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cc48b54bc-6s459,Uid:b4d13110-3883-483e-8a1c-0994abd1c9bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"2830aec116b1c956fe3e1c8677608f72e0c5ea4c44fe76f257e9d3f623a1b735\"" Sep 16 04:54:44.159648 kubelet[2723]: E0916 04:54:44.159622 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:54:44.176654 kubelet[2723]: E0916 04:54:44.176576 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ld4qv" podUID="6287dfa0-7875-4f8b-8630-23e2f6643cbc" Sep 16 04:54:44.748873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718630326.mount: Deactivated successfully. 
Sep 16 04:54:44.822451 containerd[1572]: time="2025-09-16T04:54:44.822378604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:44.823208 containerd[1572]: time="2025-09-16T04:54:44.823151541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 16 04:54:44.824309 containerd[1572]: time="2025-09-16T04:54:44.824271723Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:44.826295 containerd[1572]: time="2025-09-16T04:54:44.826264752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:44.827141 containerd[1572]: time="2025-09-16T04:54:44.827093725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.790611815s" Sep 16 04:54:44.827141 containerd[1572]: time="2025-09-16T04:54:44.827132278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 16 04:54:44.828790 containerd[1572]: time="2025-09-16T04:54:44.828716245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 16 04:54:44.829401 containerd[1572]: time="2025-09-16T04:54:44.829312169Z" level=info msg="CreateContainer within 
sandbox \"144cfcef0a99b21f4156d17757dc03e62dfe5eb0b1d82fb1fb49d88a939ad9fa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 16 04:54:44.841030 containerd[1572]: time="2025-09-16T04:54:44.840969090Z" level=info msg="Container f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:44.852628 containerd[1572]: time="2025-09-16T04:54:44.852557913Z" level=info msg="CreateContainer within sandbox \"144cfcef0a99b21f4156d17757dc03e62dfe5eb0b1d82fb1fb49d88a939ad9fa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478\"" Sep 16 04:54:44.853295 containerd[1572]: time="2025-09-16T04:54:44.853246362Z" level=info msg="StartContainer for \"f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478\"" Sep 16 04:54:44.854553 containerd[1572]: time="2025-09-16T04:54:44.854518792Z" level=info msg="connecting to shim f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478" address="unix:///run/containerd/s/9b2e8fbebef13517ce967d710b5fdd8ea0e92dbe3aaabedf9ad25547a3f2b4f5" protocol=ttrpc version=3 Sep 16 04:54:44.890109 systemd[1]: Started cri-containerd-f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478.scope - libcontainer container f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478. Sep 16 04:54:44.939277 containerd[1572]: time="2025-09-16T04:54:44.939228332Z" level=info msg="StartContainer for \"f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478\" returns successfully" Sep 16 04:54:44.950728 systemd[1]: cri-containerd-f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478.scope: Deactivated successfully. 
Sep 16 04:54:44.952872 containerd[1572]: time="2025-09-16T04:54:44.952809692Z" level=info msg="received exit event container_id:\"f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478\" id:\"f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478\" pid:3357 exited_at:{seconds:1757998484 nanos:952149427}" Sep 16 04:54:44.953119 containerd[1572]: time="2025-09-16T04:54:44.953079491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478\" id:\"f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478\" pid:3357 exited_at:{seconds:1757998484 nanos:952149427}" Sep 16 04:54:45.725666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f69ba706e51c43563863a0d482fa4732bdb89ab2bf69561205bc4158c23ea478-rootfs.mount: Deactivated successfully. Sep 16 04:54:46.176160 kubelet[2723]: E0916 04:54:46.176079 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ld4qv" podUID="6287dfa0-7875-4f8b-8630-23e2f6643cbc" Sep 16 04:54:48.176512 kubelet[2723]: E0916 04:54:48.176455 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ld4qv" podUID="6287dfa0-7875-4f8b-8630-23e2f6643cbc" Sep 16 04:54:48.409898 containerd[1572]: time="2025-09-16T04:54:48.409800272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:48.410591 containerd[1572]: time="2025-09-16T04:54:48.410559472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: 
active requests=0, bytes read=33744548" Sep 16 04:54:48.411850 containerd[1572]: time="2025-09-16T04:54:48.411786413Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:48.414441 containerd[1572]: time="2025-09-16T04:54:48.414170573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:48.414947 containerd[1572]: time="2025-09-16T04:54:48.414916357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.586153144s" Sep 16 04:54:48.415002 containerd[1572]: time="2025-09-16T04:54:48.414950662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 16 04:54:48.415947 containerd[1572]: time="2025-09-16T04:54:48.415915880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 16 04:54:48.423836 containerd[1572]: time="2025-09-16T04:54:48.423786212Z" level=info msg="CreateContainer within sandbox \"2830aec116b1c956fe3e1c8677608f72e0c5ea4c44fe76f257e9d3f623a1b735\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 16 04:54:48.432607 containerd[1572]: time="2025-09-16T04:54:48.432502949Z" level=info msg="Container fc5fe0f80b80fd8968f30673c45389d66b6bdc8941dc827ca334f8248e1e44ae: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:48.441261 containerd[1572]: time="2025-09-16T04:54:48.441211200Z" level=info msg="CreateContainer 
within sandbox \"2830aec116b1c956fe3e1c8677608f72e0c5ea4c44fe76f257e9d3f623a1b735\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fc5fe0f80b80fd8968f30673c45389d66b6bdc8941dc827ca334f8248e1e44ae\"" Sep 16 04:54:48.441746 containerd[1572]: time="2025-09-16T04:54:48.441708537Z" level=info msg="StartContainer for \"fc5fe0f80b80fd8968f30673c45389d66b6bdc8941dc827ca334f8248e1e44ae\"" Sep 16 04:54:48.442732 containerd[1572]: time="2025-09-16T04:54:48.442694474Z" level=info msg="connecting to shim fc5fe0f80b80fd8968f30673c45389d66b6bdc8941dc827ca334f8248e1e44ae" address="unix:///run/containerd/s/0ccd56f9b023402cc5334e725215880fc2c063e80cffb27479bea7216c7e3efb" protocol=ttrpc version=3 Sep 16 04:54:48.469150 systemd[1]: Started cri-containerd-fc5fe0f80b80fd8968f30673c45389d66b6bdc8941dc827ca334f8248e1e44ae.scope - libcontainer container fc5fe0f80b80fd8968f30673c45389d66b6bdc8941dc827ca334f8248e1e44ae. Sep 16 04:54:48.528540 containerd[1572]: time="2025-09-16T04:54:48.528485641Z" level=info msg="StartContainer for \"fc5fe0f80b80fd8968f30673c45389d66b6bdc8941dc827ca334f8248e1e44ae\" returns successfully" Sep 16 04:54:49.492122 kubelet[2723]: E0916 04:54:49.492054 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:54:50.175949 kubelet[2723]: E0916 04:54:50.175845 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ld4qv" podUID="6287dfa0-7875-4f8b-8630-23e2f6643cbc" Sep 16 04:54:50.494207 kubelet[2723]: I0916 04:54:50.493766 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:54:50.495614 kubelet[2723]: E0916 04:54:50.494777 2723 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:54:52.177068 kubelet[2723]: E0916 04:54:52.176997 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ld4qv" podUID="6287dfa0-7875-4f8b-8630-23e2f6643cbc" Sep 16 04:54:52.865071 containerd[1572]: time="2025-09-16T04:54:52.864992194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:52.866208 containerd[1572]: time="2025-09-16T04:54:52.866182775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 16 04:54:52.867487 containerd[1572]: time="2025-09-16T04:54:52.867437074Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:52.870598 containerd[1572]: time="2025-09-16T04:54:52.870260407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:54:52.871136 containerd[1572]: time="2025-09-16T04:54:52.871105477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.455150324s" Sep 16 04:54:52.871206 containerd[1572]: time="2025-09-16T04:54:52.871145732Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 16 04:54:52.873553 containerd[1572]: time="2025-09-16T04:54:52.873514520Z" level=info msg="CreateContainer within sandbox \"144cfcef0a99b21f4156d17757dc03e62dfe5eb0b1d82fb1fb49d88a939ad9fa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 16 04:54:52.887927 containerd[1572]: time="2025-09-16T04:54:52.887848490Z" level=info msg="Container 0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:54:52.900437 containerd[1572]: time="2025-09-16T04:54:52.900362576Z" level=info msg="CreateContainer within sandbox \"144cfcef0a99b21f4156d17757dc03e62dfe5eb0b1d82fb1fb49d88a939ad9fa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75\"" Sep 16 04:54:52.901129 containerd[1572]: time="2025-09-16T04:54:52.900950942Z" level=info msg="StartContainer for \"0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75\"" Sep 16 04:54:52.903451 containerd[1572]: time="2025-09-16T04:54:52.903412394Z" level=info msg="connecting to shim 0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75" address="unix:///run/containerd/s/9b2e8fbebef13517ce967d710b5fdd8ea0e92dbe3aaabedf9ad25547a3f2b4f5" protocol=ttrpc version=3 Sep 16 04:54:52.940063 systemd[1]: Started cri-containerd-0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75.scope - libcontainer container 0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75. 
Sep 16 04:54:53.098915 containerd[1572]: time="2025-09-16T04:54:53.097070023Z" level=info msg="StartContainer for \"0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75\" returns successfully" Sep 16 04:54:53.587801 kubelet[2723]: I0916 04:54:53.587540 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cc48b54bc-6s459" podStartSLOduration=7.332144982 podStartE2EDuration="11.587506365s" podCreationTimestamp="2025-09-16 04:54:42 +0000 UTC" firstStartedPulling="2025-09-16 04:54:44.160318883 +0000 UTC m=+22.104134291" lastFinishedPulling="2025-09-16 04:54:48.415680266 +0000 UTC m=+26.359495674" observedRunningTime="2025-09-16 04:54:49.507793202 +0000 UTC m=+27.451608620" watchObservedRunningTime="2025-09-16 04:54:53.587506365 +0000 UTC m=+31.531321773" Sep 16 04:54:54.175877 kubelet[2723]: E0916 04:54:54.175791 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ld4qv" podUID="6287dfa0-7875-4f8b-8630-23e2f6643cbc" Sep 16 04:54:54.452186 containerd[1572]: time="2025-09-16T04:54:54.452002792Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:54:54.455139 systemd[1]: cri-containerd-0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75.scope: Deactivated successfully. Sep 16 04:54:54.456011 systemd[1]: cri-containerd-0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75.scope: Consumed 700ms CPU time, 180M memory peak, 3.5M read from disk, 171.3M written to disk. 
Sep 16 04:54:54.456736 containerd[1572]: time="2025-09-16T04:54:54.456623853Z" level=info msg="received exit event container_id:\"0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75\" id:\"0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75\" pid:3462 exited_at:{seconds:1757998494 nanos:456047058}" Sep 16 04:54:54.456829 containerd[1572]: time="2025-09-16T04:54:54.456658167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75\" id:\"0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75\" pid:3462 exited_at:{seconds:1757998494 nanos:456047058}" Sep 16 04:54:54.481729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d6f96ca0a17a64041341db794118a68bbdd90e85a225d6fbaba9ebb1b336e75-rootfs.mount: Deactivated successfully. Sep 16 04:54:54.514854 kubelet[2723]: I0916 04:54:54.514812 2723 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 16 04:54:54.638767 systemd[1]: Created slice kubepods-besteffort-podf6423791_ce3c_46c6_8889_ef61fdc144e3.slice - libcontainer container kubepods-besteffort-podf6423791_ce3c_46c6_8889_ef61fdc144e3.slice. Sep 16 04:54:54.649922 systemd[1]: Created slice kubepods-besteffort-pod915dada8_92c9_4b88_a98c_f7a4c350a33e.slice - libcontainer container kubepods-besteffort-pod915dada8_92c9_4b88_a98c_f7a4c350a33e.slice. Sep 16 04:54:54.661762 systemd[1]: Created slice kubepods-burstable-podf70d36eb_9570_4d30_8841_f82e86edba4d.slice - libcontainer container kubepods-burstable-podf70d36eb_9570_4d30_8841_f82e86edba4d.slice. Sep 16 04:54:54.674379 systemd[1]: Created slice kubepods-besteffort-podfa078a04_b047_40fe_8fe4_43f64c977de0.slice - libcontainer container kubepods-besteffort-podfa078a04_b047_40fe_8fe4_43f64c977de0.slice. 
Sep 16 04:54:54.683467 systemd[1]: Created slice kubepods-burstable-pod391eb9f6_f5e6_432e_af20_4ec3ed2ffcf3.slice - libcontainer container kubepods-burstable-pod391eb9f6_f5e6_432e_af20_4ec3ed2ffcf3.slice. Sep 16 04:54:54.684077 kubelet[2723]: I0916 04:54:54.684016 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt8b5\" (UniqueName: \"kubernetes.io/projected/151ed0ba-9ed3-49b1-b342-5dcd09107105-kube-api-access-bt8b5\") pod \"goldmane-7988f88666-clthl\" (UID: \"151ed0ba-9ed3-49b1-b342-5dcd09107105\") " pod="calico-system/goldmane-7988f88666-clthl" Sep 16 04:54:54.684077 kubelet[2723]: I0916 04:54:54.684055 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/151ed0ba-9ed3-49b1-b342-5dcd09107105-goldmane-ca-bundle\") pod \"goldmane-7988f88666-clthl\" (UID: \"151ed0ba-9ed3-49b1-b342-5dcd09107105\") " pod="calico-system/goldmane-7988f88666-clthl" Sep 16 04:54:54.684077 kubelet[2723]: I0916 04:54:54.684075 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3-config-volume\") pod \"coredns-7c65d6cfc9-zvvbz\" (UID: \"391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3\") " pod="kube-system/coredns-7c65d6cfc9-zvvbz" Sep 16 04:54:54.684740 kubelet[2723]: I0916 04:54:54.684113 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/915dada8-92c9-4b88-a98c-f7a4c350a33e-whisker-backend-key-pair\") pod \"whisker-857f5b5944-6nzg8\" (UID: \"915dada8-92c9-4b88-a98c-f7a4c350a33e\") " pod="calico-system/whisker-857f5b5944-6nzg8" Sep 16 04:54:54.684740 kubelet[2723]: I0916 04:54:54.684180 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-27q79\" (UniqueName: \"kubernetes.io/projected/391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3-kube-api-access-27q79\") pod \"coredns-7c65d6cfc9-zvvbz\" (UID: \"391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3\") " pod="kube-system/coredns-7c65d6cfc9-zvvbz" Sep 16 04:54:54.684740 kubelet[2723]: I0916 04:54:54.684204 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/915dada8-92c9-4b88-a98c-f7a4c350a33e-whisker-ca-bundle\") pod \"whisker-857f5b5944-6nzg8\" (UID: \"915dada8-92c9-4b88-a98c-f7a4c350a33e\") " pod="calico-system/whisker-857f5b5944-6nzg8" Sep 16 04:54:54.684740 kubelet[2723]: I0916 04:54:54.684222 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb264\" (UniqueName: \"kubernetes.io/projected/915dada8-92c9-4b88-a98c-f7a4c350a33e-kube-api-access-nb264\") pod \"whisker-857f5b5944-6nzg8\" (UID: \"915dada8-92c9-4b88-a98c-f7a4c350a33e\") " pod="calico-system/whisker-857f5b5944-6nzg8" Sep 16 04:54:54.684740 kubelet[2723]: I0916 04:54:54.684257 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa078a04-b047-40fe-8fe4-43f64c977de0-tigera-ca-bundle\") pod \"calico-kube-controllers-76b9fb9cbc-gqtzj\" (UID: \"fa078a04-b047-40fe-8fe4-43f64c977de0\") " pod="calico-system/calico-kube-controllers-76b9fb9cbc-gqtzj" Sep 16 04:54:54.684884 kubelet[2723]: I0916 04:54:54.684278 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/151ed0ba-9ed3-49b1-b342-5dcd09107105-goldmane-key-pair\") pod \"goldmane-7988f88666-clthl\" (UID: \"151ed0ba-9ed3-49b1-b342-5dcd09107105\") " pod="calico-system/goldmane-7988f88666-clthl" Sep 16 04:54:54.684884 kubelet[2723]: I0916 04:54:54.684311 2723 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5g7\" (UniqueName: \"kubernetes.io/projected/fa078a04-b047-40fe-8fe4-43f64c977de0-kube-api-access-th5g7\") pod \"calico-kube-controllers-76b9fb9cbc-gqtzj\" (UID: \"fa078a04-b047-40fe-8fe4-43f64c977de0\") " pod="calico-system/calico-kube-controllers-76b9fb9cbc-gqtzj" Sep 16 04:54:54.684884 kubelet[2723]: I0916 04:54:54.684337 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d979d\" (UniqueName: \"kubernetes.io/projected/f70d36eb-9570-4d30-8841-f82e86edba4d-kube-api-access-d979d\") pod \"coredns-7c65d6cfc9-w9xwf\" (UID: \"f70d36eb-9570-4d30-8841-f82e86edba4d\") " pod="kube-system/coredns-7c65d6cfc9-w9xwf" Sep 16 04:54:54.684884 kubelet[2723]: I0916 04:54:54.684354 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fcffb6e8-9720-44b6-b6de-3dc13e3c8432-calico-apiserver-certs\") pod \"calico-apiserver-7f449fd945-sk5l6\" (UID: \"fcffb6e8-9720-44b6-b6de-3dc13e3c8432\") " pod="calico-apiserver/calico-apiserver-7f449fd945-sk5l6" Sep 16 04:54:54.684884 kubelet[2723]: I0916 04:54:54.684387 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6423791-ce3c-46c6-8889-ef61fdc144e3-calico-apiserver-certs\") pod \"calico-apiserver-7f449fd945-8b749\" (UID: \"f6423791-ce3c-46c6-8889-ef61fdc144e3\") " pod="calico-apiserver/calico-apiserver-7f449fd945-8b749" Sep 16 04:54:54.685011 kubelet[2723]: I0916 04:54:54.684408 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/151ed0ba-9ed3-49b1-b342-5dcd09107105-config\") pod \"goldmane-7988f88666-clthl\" (UID: 
\"151ed0ba-9ed3-49b1-b342-5dcd09107105\") " pod="calico-system/goldmane-7988f88666-clthl" Sep 16 04:54:54.685011 kubelet[2723]: I0916 04:54:54.684431 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl8np\" (UniqueName: \"kubernetes.io/projected/f6423791-ce3c-46c6-8889-ef61fdc144e3-kube-api-access-rl8np\") pod \"calico-apiserver-7f449fd945-8b749\" (UID: \"f6423791-ce3c-46c6-8889-ef61fdc144e3\") " pod="calico-apiserver/calico-apiserver-7f449fd945-8b749" Sep 16 04:54:54.685011 kubelet[2723]: I0916 04:54:54.684449 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2jr2\" (UniqueName: \"kubernetes.io/projected/fcffb6e8-9720-44b6-b6de-3dc13e3c8432-kube-api-access-s2jr2\") pod \"calico-apiserver-7f449fd945-sk5l6\" (UID: \"fcffb6e8-9720-44b6-b6de-3dc13e3c8432\") " pod="calico-apiserver/calico-apiserver-7f449fd945-sk5l6" Sep 16 04:54:54.685011 kubelet[2723]: I0916 04:54:54.684472 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f70d36eb-9570-4d30-8841-f82e86edba4d-config-volume\") pod \"coredns-7c65d6cfc9-w9xwf\" (UID: \"f70d36eb-9570-4d30-8841-f82e86edba4d\") " pod="kube-system/coredns-7c65d6cfc9-w9xwf" Sep 16 04:54:54.693334 systemd[1]: Created slice kubepods-besteffort-pod151ed0ba_9ed3_49b1_b342_5dcd09107105.slice - libcontainer container kubepods-besteffort-pod151ed0ba_9ed3_49b1_b342_5dcd09107105.slice. Sep 16 04:54:54.702249 systemd[1]: Created slice kubepods-besteffort-podfcffb6e8_9720_44b6_b6de_3dc13e3c8432.slice - libcontainer container kubepods-besteffort-podfcffb6e8_9720_44b6_b6de_3dc13e3c8432.slice. 
Sep 16 04:54:54.946255 containerd[1572]: time="2025-09-16T04:54:54.946196361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f449fd945-8b749,Uid:f6423791-ce3c-46c6-8889-ef61fdc144e3,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:54:54.955754 containerd[1572]: time="2025-09-16T04:54:54.955290465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-857f5b5944-6nzg8,Uid:915dada8-92c9-4b88-a98c-f7a4c350a33e,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:54.968806 kubelet[2723]: E0916 04:54:54.968743 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:54:54.975051 containerd[1572]: time="2025-09-16T04:54:54.971838579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w9xwf,Uid:f70d36eb-9570-4d30-8841-f82e86edba4d,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:54.980722 containerd[1572]: time="2025-09-16T04:54:54.980663877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b9fb9cbc-gqtzj,Uid:fa078a04-b047-40fe-8fe4-43f64c977de0,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:54.989008 kubelet[2723]: E0916 04:54:54.988947 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:54:54.990658 containerd[1572]: time="2025-09-16T04:54:54.990613111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zvvbz,Uid:391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3,Namespace:kube-system,Attempt:0,}" Sep 16 04:54:55.003335 containerd[1572]: time="2025-09-16T04:54:55.003288309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-clthl,Uid:151ed0ba-9ed3-49b1-b342-5dcd09107105,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:55.009136 containerd[1572]: 
time="2025-09-16T04:54:55.007758846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f449fd945-sk5l6,Uid:fcffb6e8-9720-44b6-b6de-3dc13e3c8432,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:54:55.142579 containerd[1572]: time="2025-09-16T04:54:55.142412647Z" level=error msg="Failed to destroy network for sandbox \"9c88402159eb8e7e1b3b61aa7677fdf200d6a7cef5eb37a2ac82cb97afeebebc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.152169 containerd[1572]: time="2025-09-16T04:54:55.152050351Z" level=error msg="Failed to destroy network for sandbox \"047db871c1a6eaf56f2c065f505bf0521130f35714a7f632499d154e34d1ca43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.165233 containerd[1572]: time="2025-09-16T04:54:55.165155905Z" level=error msg="Failed to destroy network for sandbox \"0be698eaf1447d23352b92a55c014d9ea63d47cb1dd4caf7fbc125d3ae32f6e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.165395 containerd[1572]: time="2025-09-16T04:54:55.165319232Z" level=error msg="Failed to destroy network for sandbox \"5feada3c6d8fe7d60efa2f2033f94a5f5c7fa4d4045c20c2357b97ba472af7be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.166187 containerd[1572]: time="2025-09-16T04:54:55.166115940Z" level=error msg="Failed to destroy network for sandbox \"c6daddbf57557ea07e9b8c6158da8127924d7b0c94ac9976eff72dd7e256a830\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.188323 containerd[1572]: time="2025-09-16T04:54:55.178100417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-857f5b5944-6nzg8,Uid:915dada8-92c9-4b88-a98c-f7a4c350a33e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0be698eaf1447d23352b92a55c014d9ea63d47cb1dd4caf7fbc125d3ae32f6e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.188559 containerd[1572]: time="2025-09-16T04:54:55.178163254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w9xwf,Uid:f70d36eb-9570-4d30-8841-f82e86edba4d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5feada3c6d8fe7d60efa2f2033f94a5f5c7fa4d4045c20c2357b97ba472af7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.188559 containerd[1572]: time="2025-09-16T04:54:55.178161561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f449fd945-8b749,Uid:f6423791-ce3c-46c6-8889-ef61fdc144e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c88402159eb8e7e1b3b61aa7677fdf200d6a7cef5eb37a2ac82cb97afeebebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.188559 containerd[1572]: time="2025-09-16T04:54:55.178172943Z" 
level=error msg="Failed to destroy network for sandbox \"f0bd6a6e26a34819875ccd7f89b6951550851afc87311defcd13566977ea0225\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.188737 containerd[1572]: time="2025-09-16T04:54:55.178183683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-clthl,Uid:151ed0ba-9ed3-49b1-b342-5dcd09107105,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"047db871c1a6eaf56f2c065f505bf0521130f35714a7f632499d154e34d1ca43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.188737 containerd[1572]: time="2025-09-16T04:54:55.178242474Z" level=error msg="Failed to destroy network for sandbox \"510ea7915b909f20d9d243c846a6587d2cec2512a109f8c7607158344d9cd6f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.188737 containerd[1572]: time="2025-09-16T04:54:55.179475242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f449fd945-sk5l6,Uid:fcffb6e8-9720-44b6-b6de-3dc13e3c8432,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6daddbf57557ea07e9b8c6158da8127924d7b0c94ac9976eff72dd7e256a830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.190419 containerd[1572]: time="2025-09-16T04:54:55.190268197Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zvvbz,Uid:391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0bd6a6e26a34819875ccd7f89b6951550851afc87311defcd13566977ea0225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.191533 containerd[1572]: time="2025-09-16T04:54:55.191445271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b9fb9cbc-gqtzj,Uid:fa078a04-b047-40fe-8fe4-43f64c977de0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"510ea7915b909f20d9d243c846a6587d2cec2512a109f8c7607158344d9cd6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.195333 kubelet[2723]: E0916 04:54:55.195116 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c88402159eb8e7e1b3b61aa7677fdf200d6a7cef5eb37a2ac82cb97afeebebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.195333 kubelet[2723]: E0916 04:54:55.195099 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5feada3c6d8fe7d60efa2f2033f94a5f5c7fa4d4045c20c2357b97ba472af7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.195333 kubelet[2723]: E0916 04:54:55.195224 
2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6daddbf57557ea07e9b8c6158da8127924d7b0c94ac9976eff72dd7e256a830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.195333 kubelet[2723]: E0916 04:54:55.195233 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0bd6a6e26a34819875ccd7f89b6951550851afc87311defcd13566977ea0225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.195333 kubelet[2723]: E0916 04:54:55.195217 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047db871c1a6eaf56f2c065f505bf0521130f35714a7f632499d154e34d1ca43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.195634 kubelet[2723]: E0916 04:54:55.195293 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6daddbf57557ea07e9b8c6158da8127924d7b0c94ac9976eff72dd7e256a830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f449fd945-sk5l6" Sep 16 04:54:55.195634 kubelet[2723]: E0916 04:54:55.195104 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"510ea7915b909f20d9d243c846a6587d2cec2512a109f8c7607158344d9cd6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.195727 kubelet[2723]: E0916 04:54:55.195697 2723 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6daddbf57557ea07e9b8c6158da8127924d7b0c94ac9976eff72dd7e256a830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f449fd945-sk5l6" Sep 16 04:54:55.195779 kubelet[2723]: E0916 04:54:55.195748 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"510ea7915b909f20d9d243c846a6587d2cec2512a109f8c7607158344d9cd6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76b9fb9cbc-gqtzj" Sep 16 04:54:55.195810 kubelet[2723]: E0916 04:54:55.195783 2723 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"510ea7915b909f20d9d243c846a6587d2cec2512a109f8c7607158344d9cd6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76b9fb9cbc-gqtzj" Sep 16 04:54:55.195810 kubelet[2723]: E0916 04:54:55.195781 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7f449fd945-sk5l6_calico-apiserver(fcffb6e8-9720-44b6-b6de-3dc13e3c8432)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f449fd945-sk5l6_calico-apiserver(fcffb6e8-9720-44b6-b6de-3dc13e3c8432)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6daddbf57557ea07e9b8c6158da8127924d7b0c94ac9976eff72dd7e256a830\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f449fd945-sk5l6" podUID="fcffb6e8-9720-44b6-b6de-3dc13e3c8432" Sep 16 04:54:55.195909 kubelet[2723]: E0916 04:54:55.195844 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76b9fb9cbc-gqtzj_calico-system(fa078a04-b047-40fe-8fe4-43f64c977de0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76b9fb9cbc-gqtzj_calico-system(fa078a04-b047-40fe-8fe4-43f64c977de0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"510ea7915b909f20d9d243c846a6587d2cec2512a109f8c7607158344d9cd6f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76b9fb9cbc-gqtzj" podUID="fa078a04-b047-40fe-8fe4-43f64c977de0" Sep 16 04:54:55.195909 kubelet[2723]: E0916 04:54:55.195100 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0be698eaf1447d23352b92a55c014d9ea63d47cb1dd4caf7fbc125d3ae32f6e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:55.195909 
kubelet[2723]: E0916 04:54:55.195902 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0be698eaf1447d23352b92a55c014d9ea63d47cb1dd4caf7fbc125d3ae32f6e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-857f5b5944-6nzg8" Sep 16 04:54:55.196010 kubelet[2723]: E0916 04:54:55.195919 2723 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0be698eaf1447d23352b92a55c014d9ea63d47cb1dd4caf7fbc125d3ae32f6e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-857f5b5944-6nzg8" Sep 16 04:54:55.196010 kubelet[2723]: E0916 04:54:55.195950 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-857f5b5944-6nzg8_calico-system(915dada8-92c9-4b88-a98c-f7a4c350a33e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-857f5b5944-6nzg8_calico-system(915dada8-92c9-4b88-a98c-f7a4c350a33e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0be698eaf1447d23352b92a55c014d9ea63d47cb1dd4caf7fbc125d3ae32f6e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-857f5b5944-6nzg8" podUID="915dada8-92c9-4b88-a98c-f7a4c350a33e" Sep 16 04:54:55.196010 kubelet[2723]: E0916 04:54:55.195299 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"047db871c1a6eaf56f2c065f505bf0521130f35714a7f632499d154e34d1ca43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-clthl" Sep 16 04:54:55.196356 kubelet[2723]: E0916 04:54:55.195977 2723 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"047db871c1a6eaf56f2c065f505bf0521130f35714a7f632499d154e34d1ca43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-clthl" Sep 16 04:54:55.196356 kubelet[2723]: E0916 04:54:55.196003 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-clthl_calico-system(151ed0ba-9ed3-49b1-b342-5dcd09107105)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-clthl_calico-system(151ed0ba-9ed3-49b1-b342-5dcd09107105)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"047db871c1a6eaf56f2c065f505bf0521130f35714a7f632499d154e34d1ca43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-clthl" podUID="151ed0ba-9ed3-49b1-b342-5dcd09107105" Sep 16 04:54:55.196356 kubelet[2723]: E0916 04:54:55.196027 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c88402159eb8e7e1b3b61aa7677fdf200d6a7cef5eb37a2ac82cb97afeebebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f449fd945-8b749" Sep 16 04:54:55.196525 kubelet[2723]: E0916 04:54:55.195435 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0bd6a6e26a34819875ccd7f89b6951550851afc87311defcd13566977ea0225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zvvbz" Sep 16 04:54:55.196525 kubelet[2723]: E0916 04:54:55.196046 2723 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c88402159eb8e7e1b3b61aa7677fdf200d6a7cef5eb37a2ac82cb97afeebebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f449fd945-8b749" Sep 16 04:54:55.196525 kubelet[2723]: E0916 04:54:55.196076 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f449fd945-8b749_calico-apiserver(f6423791-ce3c-46c6-8889-ef61fdc144e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f449fd945-8b749_calico-apiserver(f6423791-ce3c-46c6-8889-ef61fdc144e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c88402159eb8e7e1b3b61aa7677fdf200d6a7cef5eb37a2ac82cb97afeebebc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f449fd945-8b749" podUID="f6423791-ce3c-46c6-8889-ef61fdc144e3" Sep 16 04:54:55.196647 kubelet[2723]: E0916 04:54:55.196064 2723 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0bd6a6e26a34819875ccd7f89b6951550851afc87311defcd13566977ea0225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zvvbz" Sep 16 04:54:55.196647 kubelet[2723]: E0916 04:54:55.196122 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zvvbz_kube-system(391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-zvvbz_kube-system(391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0bd6a6e26a34819875ccd7f89b6951550851afc87311defcd13566977ea0225\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zvvbz" podUID="391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3" Sep 16 04:54:55.197799 kubelet[2723]: E0916 04:54:55.197737 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5feada3c6d8fe7d60efa2f2033f94a5f5c7fa4d4045c20c2357b97ba472af7be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-w9xwf" Sep 16 04:54:55.198892 kubelet[2723]: E0916 04:54:55.198006 2723 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5feada3c6d8fe7d60efa2f2033f94a5f5c7fa4d4045c20c2357b97ba472af7be\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-w9xwf" Sep 16 04:54:55.198892 kubelet[2723]: E0916 04:54:55.198070 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-w9xwf_kube-system(f70d36eb-9570-4d30-8841-f82e86edba4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-w9xwf_kube-system(f70d36eb-9570-4d30-8841-f82e86edba4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5feada3c6d8fe7d60efa2f2033f94a5f5c7fa4d4045c20c2357b97ba472af7be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-w9xwf" podUID="f70d36eb-9570-4d30-8841-f82e86edba4d" Sep 16 04:54:55.482532 systemd[1]: run-netns-cni\x2d83ada8e0\x2d504e\x2db600\x2d6510\x2d1a7969c4fc77.mount: Deactivated successfully. Sep 16 04:54:55.521377 containerd[1572]: time="2025-09-16T04:54:55.521308255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 16 04:54:56.183784 systemd[1]: Created slice kubepods-besteffort-pod6287dfa0_7875_4f8b_8630_23e2f6643cbc.slice - libcontainer container kubepods-besteffort-pod6287dfa0_7875_4f8b_8630_23e2f6643cbc.slice. 
Sep 16 04:54:56.186270 containerd[1572]: time="2025-09-16T04:54:56.186231710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ld4qv,Uid:6287dfa0-7875-4f8b-8630-23e2f6643cbc,Namespace:calico-system,Attempt:0,}" Sep 16 04:54:56.242276 containerd[1572]: time="2025-09-16T04:54:56.242215146Z" level=error msg="Failed to destroy network for sandbox \"18346dbca9d62e4d35934f77455f564780a8c1174de8a7520558d892a0daab18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:56.243935 containerd[1572]: time="2025-09-16T04:54:56.243898211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ld4qv,Uid:6287dfa0-7875-4f8b-8630-23e2f6643cbc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18346dbca9d62e4d35934f77455f564780a8c1174de8a7520558d892a0daab18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:56.244190 kubelet[2723]: E0916 04:54:56.244153 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18346dbca9d62e4d35934f77455f564780a8c1174de8a7520558d892a0daab18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:54:56.244548 kubelet[2723]: E0916 04:54:56.244222 2723 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18346dbca9d62e4d35934f77455f564780a8c1174de8a7520558d892a0daab18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ld4qv" Sep 16 04:54:56.244548 kubelet[2723]: E0916 04:54:56.244243 2723 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18346dbca9d62e4d35934f77455f564780a8c1174de8a7520558d892a0daab18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ld4qv" Sep 16 04:54:56.244548 kubelet[2723]: E0916 04:54:56.244292 2723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ld4qv_calico-system(6287dfa0-7875-4f8b-8630-23e2f6643cbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ld4qv_calico-system(6287dfa0-7875-4f8b-8630-23e2f6643cbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18346dbca9d62e4d35934f77455f564780a8c1174de8a7520558d892a0daab18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ld4qv" podUID="6287dfa0-7875-4f8b-8630-23e2f6643cbc" Sep 16 04:54:56.244996 systemd[1]: run-netns-cni\x2d92bd1067\x2d0a9c\x2d03c4\x2d6ef8\x2d52f1430908e6.mount: Deactivated successfully. Sep 16 04:54:59.796066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556170884.mount: Deactivated successfully. 
Sep 16 04:55:01.553594 containerd[1572]: time="2025-09-16T04:55:01.553494129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:01.555017 containerd[1572]: time="2025-09-16T04:55:01.554979670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 16 04:55:01.556503 containerd[1572]: time="2025-09-16T04:55:01.556442048Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:01.559036 containerd[1572]: time="2025-09-16T04:55:01.558981649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:01.559806 containerd[1572]: time="2025-09-16T04:55:01.559740666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.03837887s" Sep 16 04:55:01.559806 containerd[1572]: time="2025-09-16T04:55:01.559802151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 16 04:55:01.573062 containerd[1572]: time="2025-09-16T04:55:01.573006470Z" level=info msg="CreateContainer within sandbox \"144cfcef0a99b21f4156d17757dc03e62dfe5eb0b1d82fb1fb49d88a939ad9fa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 16 04:55:01.626932 containerd[1572]: time="2025-09-16T04:55:01.626796801Z" level=info msg="Container 
e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:01.647033 containerd[1572]: time="2025-09-16T04:55:01.646960255Z" level=info msg="CreateContainer within sandbox \"144cfcef0a99b21f4156d17757dc03e62dfe5eb0b1d82fb1fb49d88a939ad9fa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350\"" Sep 16 04:55:01.648006 containerd[1572]: time="2025-09-16T04:55:01.647975663Z" level=info msg="StartContainer for \"e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350\"" Sep 16 04:55:01.650504 containerd[1572]: time="2025-09-16T04:55:01.650445825Z" level=info msg="connecting to shim e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350" address="unix:///run/containerd/s/9b2e8fbebef13517ce967d710b5fdd8ea0e92dbe3aaabedf9ad25547a3f2b4f5" protocol=ttrpc version=3 Sep 16 04:55:01.688033 systemd[1]: Started cri-containerd-e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350.scope - libcontainer container e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350. Sep 16 04:55:02.052955 containerd[1572]: time="2025-09-16T04:55:02.052841933Z" level=info msg="StartContainer for \"e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350\" returns successfully" Sep 16 04:55:02.080808 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 16 04:55:02.081703 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Sep 16 04:55:02.539884 kubelet[2723]: I0916 04:55:02.539770 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb264\" (UniqueName: \"kubernetes.io/projected/915dada8-92c9-4b88-a98c-f7a4c350a33e-kube-api-access-nb264\") pod \"915dada8-92c9-4b88-a98c-f7a4c350a33e\" (UID: \"915dada8-92c9-4b88-a98c-f7a4c350a33e\") " Sep 16 04:55:02.540913 kubelet[2723]: I0916 04:55:02.539910 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/915dada8-92c9-4b88-a98c-f7a4c350a33e-whisker-backend-key-pair\") pod \"915dada8-92c9-4b88-a98c-f7a4c350a33e\" (UID: \"915dada8-92c9-4b88-a98c-f7a4c350a33e\") " Sep 16 04:55:02.540913 kubelet[2723]: I0916 04:55:02.539963 2723 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/915dada8-92c9-4b88-a98c-f7a4c350a33e-whisker-ca-bundle\") pod \"915dada8-92c9-4b88-a98c-f7a4c350a33e\" (UID: \"915dada8-92c9-4b88-a98c-f7a4c350a33e\") " Sep 16 04:55:02.544023 kubelet[2723]: I0916 04:55:02.542974 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/915dada8-92c9-4b88-a98c-f7a4c350a33e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "915dada8-92c9-4b88-a98c-f7a4c350a33e" (UID: "915dada8-92c9-4b88-a98c-f7a4c350a33e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 16 04:55:02.555045 kubelet[2723]: I0916 04:55:02.554731 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915dada8-92c9-4b88-a98c-f7a4c350a33e-kube-api-access-nb264" (OuterVolumeSpecName: "kube-api-access-nb264") pod "915dada8-92c9-4b88-a98c-f7a4c350a33e" (UID: "915dada8-92c9-4b88-a98c-f7a4c350a33e"). InnerVolumeSpecName "kube-api-access-nb264". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 16 04:55:02.563089 kubelet[2723]: I0916 04:55:02.563015 2723 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915dada8-92c9-4b88-a98c-f7a4c350a33e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "915dada8-92c9-4b88-a98c-f7a4c350a33e" (UID: "915dada8-92c9-4b88-a98c-f7a4c350a33e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 16 04:55:02.570601 systemd[1]: var-lib-kubelet-pods-915dada8\x2d92c9\x2d4b88\x2da98c\x2df7a4c350a33e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnb264.mount: Deactivated successfully. Sep 16 04:55:02.571036 systemd[1]: var-lib-kubelet-pods-915dada8\x2d92c9\x2d4b88\x2da98c\x2df7a4c350a33e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 16 04:55:02.599248 kubelet[2723]: I0916 04:55:02.597112 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bm9nh" podStartSLOduration=2.072283035 podStartE2EDuration="20.596924401s" podCreationTimestamp="2025-09-16 04:54:42 +0000 UTC" firstStartedPulling="2025-09-16 04:54:43.036066587 +0000 UTC m=+20.979881995" lastFinishedPulling="2025-09-16 04:55:01.560707953 +0000 UTC m=+39.504523361" observedRunningTime="2025-09-16 04:55:02.592435618 +0000 UTC m=+40.536251046" watchObservedRunningTime="2025-09-16 04:55:02.596924401 +0000 UTC m=+40.540739809" Sep 16 04:55:02.606148 kubelet[2723]: I0916 04:55:02.606079 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:55:02.609715 kubelet[2723]: E0916 04:55:02.609687 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:02.642889 kubelet[2723]: I0916 04:55:02.641259 2723 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb264\" (UniqueName: \"kubernetes.io/projected/915dada8-92c9-4b88-a98c-f7a4c350a33e-kube-api-access-nb264\") on node \"localhost\" DevicePath \"\"" Sep 16 04:55:02.642889 kubelet[2723]: I0916 04:55:02.641316 2723 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/915dada8-92c9-4b88-a98c-f7a4c350a33e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 16 04:55:02.642889 kubelet[2723]: I0916 04:55:02.641333 2723 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/915dada8-92c9-4b88-a98c-f7a4c350a33e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 16 04:55:02.801936 containerd[1572]: time="2025-09-16T04:55:02.801690445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350\" id:\"771da5c0209d16a9417db8b187ec7cda626d8d8fe8ef4206c12506e2d80f4634\" pid:3842 exit_status:1 exited_at:{seconds:1757998502 nanos:801122949}" Sep 16 04:55:02.867755 systemd[1]: Removed slice kubepods-besteffort-pod915dada8_92c9_4b88_a98c_f7a4c350a33e.slice - libcontainer container kubepods-besteffort-pod915dada8_92c9_4b88_a98c_f7a4c350a33e.slice. 
Sep 16 04:55:02.925582 kubelet[2723]: W0916 04:55:02.925525 2723 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Sep 16 04:55:02.926322 kubelet[2723]: E0916 04:55:02.925602 2723 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 16 04:55:02.933708 systemd[1]: Created slice kubepods-besteffort-pod6945706e_6afb_4726_9693_b0e242fb8eca.slice - libcontainer container kubepods-besteffort-pod6945706e_6afb_4726_9693_b0e242fb8eca.slice. 
Sep 16 04:55:02.943961 kubelet[2723]: I0916 04:55:02.943884 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtgnc\" (UniqueName: \"kubernetes.io/projected/6945706e-6afb-4726-9693-b0e242fb8eca-kube-api-access-dtgnc\") pod \"whisker-544ccf8767-bksvm\" (UID: \"6945706e-6afb-4726-9693-b0e242fb8eca\") " pod="calico-system/whisker-544ccf8767-bksvm" Sep 16 04:55:02.943961 kubelet[2723]: I0916 04:55:02.943936 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6945706e-6afb-4726-9693-b0e242fb8eca-whisker-ca-bundle\") pod \"whisker-544ccf8767-bksvm\" (UID: \"6945706e-6afb-4726-9693-b0e242fb8eca\") " pod="calico-system/whisker-544ccf8767-bksvm" Sep 16 04:55:02.943961 kubelet[2723]: I0916 04:55:02.943979 2723 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6945706e-6afb-4726-9693-b0e242fb8eca-whisker-backend-key-pair\") pod \"whisker-544ccf8767-bksvm\" (UID: \"6945706e-6afb-4726-9693-b0e242fb8eca\") " pod="calico-system/whisker-544ccf8767-bksvm" Sep 16 04:55:03.552790 kubelet[2723]: E0916 04:55:03.552717 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:03.643572 containerd[1572]: time="2025-09-16T04:55:03.643295831Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350\" id:\"f8007aee1929211110994310a5cdf16b645c077716d67f9e0c93e894052815f1\" pid:3880 exit_status:1 exited_at:{seconds:1757998503 nanos:642835816}" Sep 16 04:55:04.045409 kubelet[2723]: E0916 04:55:04.045355 2723 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap 
cache: timed out waiting for the condition Sep 16 04:55:04.045593 kubelet[2723]: E0916 04:55:04.045483 2723 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6945706e-6afb-4726-9693-b0e242fb8eca-whisker-ca-bundle podName:6945706e-6afb-4726-9693-b0e242fb8eca nodeName:}" failed. No retries permitted until 2025-09-16 04:55:04.545452343 +0000 UTC m=+42.489267751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/6945706e-6afb-4726-9693-b0e242fb8eca-whisker-ca-bundle") pod "whisker-544ccf8767-bksvm" (UID: "6945706e-6afb-4726-9693-b0e242fb8eca") : failed to sync configmap cache: timed out waiting for the condition Sep 16 04:55:04.180967 kubelet[2723]: I0916 04:55:04.180903 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="915dada8-92c9-4b88-a98c-f7a4c350a33e" path="/var/lib/kubelet/pods/915dada8-92c9-4b88-a98c-f7a4c350a33e/volumes" Sep 16 04:55:04.326073 systemd-networkd[1494]: vxlan.calico: Link UP Sep 16 04:55:04.326088 systemd-networkd[1494]: vxlan.calico: Gained carrier Sep 16 04:55:04.740113 containerd[1572]: time="2025-09-16T04:55:04.739974383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544ccf8767-bksvm,Uid:6945706e-6afb-4726-9693-b0e242fb8eca,Namespace:calico-system,Attempt:0,}" Sep 16 04:55:04.904552 systemd-networkd[1494]: calife9afb5b738: Link UP Sep 16 04:55:04.904783 systemd-networkd[1494]: calife9afb5b738: Gained carrier Sep 16 04:55:04.922093 containerd[1572]: 2025-09-16 04:55:04.789 [INFO][4087] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--544ccf8767--bksvm-eth0 whisker-544ccf8767- calico-system 6945706e-6afb-4726-9693-b0e242fb8eca 915 0 2025-09-16 04:55:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:544ccf8767 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-544ccf8767-bksvm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calife9afb5b738 [] [] }} ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Namespace="calico-system" Pod="whisker-544ccf8767-bksvm" WorkloadEndpoint="localhost-k8s-whisker--544ccf8767--bksvm-" Sep 16 04:55:04.922093 containerd[1572]: 2025-09-16 04:55:04.789 [INFO][4087] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Namespace="calico-system" Pod="whisker-544ccf8767-bksvm" WorkloadEndpoint="localhost-k8s-whisker--544ccf8767--bksvm-eth0" Sep 16 04:55:04.922093 containerd[1572]: 2025-09-16 04:55:04.855 [INFO][4101] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" HandleID="k8s-pod-network.12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Workload="localhost-k8s-whisker--544ccf8767--bksvm-eth0" Sep 16 04:55:04.922363 containerd[1572]: 2025-09-16 04:55:04.856 [INFO][4101] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" HandleID="k8s-pod-network.12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Workload="localhost-k8s-whisker--544ccf8767--bksvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b7dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-544ccf8767-bksvm", "timestamp":"2025-09-16 04:55:04.85544364 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:55:04.922363 containerd[1572]: 2025-09-16 
04:55:04.856 [INFO][4101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:55:04.922363 containerd[1572]: 2025-09-16 04:55:04.856 [INFO][4101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:55:04.922363 containerd[1572]: 2025-09-16 04:55:04.856 [INFO][4101] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:55:04.922363 containerd[1572]: 2025-09-16 04:55:04.868 [INFO][4101] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" host="localhost" Sep 16 04:55:04.922363 containerd[1572]: 2025-09-16 04:55:04.874 [INFO][4101] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:55:04.922363 containerd[1572]: 2025-09-16 04:55:04.879 [INFO][4101] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:55:04.922363 containerd[1572]: 2025-09-16 04:55:04.881 [INFO][4101] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:04.922363 containerd[1572]: 2025-09-16 04:55:04.884 [INFO][4101] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:04.922363 containerd[1572]: 2025-09-16 04:55:04.884 [INFO][4101] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" host="localhost" Sep 16 04:55:04.924041 containerd[1572]: 2025-09-16 04:55:04.886 [INFO][4101] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b Sep 16 04:55:04.924041 containerd[1572]: 2025-09-16 04:55:04.891 [INFO][4101] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" host="localhost" 
Sep 16 04:55:04.924041 containerd[1572]: 2025-09-16 04:55:04.897 [INFO][4101] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" host="localhost" Sep 16 04:55:04.924041 containerd[1572]: 2025-09-16 04:55:04.897 [INFO][4101] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" host="localhost" Sep 16 04:55:04.924041 containerd[1572]: 2025-09-16 04:55:04.897 [INFO][4101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:55:04.924041 containerd[1572]: 2025-09-16 04:55:04.897 [INFO][4101] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" HandleID="k8s-pod-network.12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Workload="localhost-k8s-whisker--544ccf8767--bksvm-eth0" Sep 16 04:55:04.924612 containerd[1572]: 2025-09-16 04:55:04.901 [INFO][4087] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Namespace="calico-system" Pod="whisker-544ccf8767-bksvm" WorkloadEndpoint="localhost-k8s-whisker--544ccf8767--bksvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--544ccf8767--bksvm-eth0", GenerateName:"whisker-544ccf8767-", Namespace:"calico-system", SelfLink:"", UID:"6945706e-6afb-4726-9693-b0e242fb8eca", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 55, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"544ccf8767", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-544ccf8767-bksvm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife9afb5b738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:04.924612 containerd[1572]: 2025-09-16 04:55:04.901 [INFO][4087] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Namespace="calico-system" Pod="whisker-544ccf8767-bksvm" WorkloadEndpoint="localhost-k8s-whisker--544ccf8767--bksvm-eth0" Sep 16 04:55:04.924757 containerd[1572]: 2025-09-16 04:55:04.901 [INFO][4087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife9afb5b738 ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Namespace="calico-system" Pod="whisker-544ccf8767-bksvm" WorkloadEndpoint="localhost-k8s-whisker--544ccf8767--bksvm-eth0" Sep 16 04:55:04.924757 containerd[1572]: 2025-09-16 04:55:04.905 [INFO][4087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Namespace="calico-system" Pod="whisker-544ccf8767-bksvm" WorkloadEndpoint="localhost-k8s-whisker--544ccf8767--bksvm-eth0" Sep 16 04:55:04.924820 containerd[1572]: 2025-09-16 04:55:04.905 [INFO][4087] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Namespace="calico-system" Pod="whisker-544ccf8767-bksvm" WorkloadEndpoint="localhost-k8s-whisker--544ccf8767--bksvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--544ccf8767--bksvm-eth0", GenerateName:"whisker-544ccf8767-", Namespace:"calico-system", SelfLink:"", UID:"6945706e-6afb-4726-9693-b0e242fb8eca", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 55, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"544ccf8767", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b", Pod:"whisker-544ccf8767-bksvm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife9afb5b738", MAC:"4a:c7:2e:41:de:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:04.924913 containerd[1572]: 2025-09-16 04:55:04.918 [INFO][4087] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" Namespace="calico-system" Pod="whisker-544ccf8767-bksvm" 
WorkloadEndpoint="localhost-k8s-whisker--544ccf8767--bksvm-eth0" Sep 16 04:55:04.986604 containerd[1572]: time="2025-09-16T04:55:04.986541680Z" level=info msg="connecting to shim 12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b" address="unix:///run/containerd/s/2246c6fd7baf3c108b17b51e6af6c1d3888ef73fae4ddb46ae5705417443d7b8" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:55:05.015988 systemd[1]: Started cri-containerd-12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b.scope - libcontainer container 12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b. Sep 16 04:55:05.029462 systemd-resolved[1427]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:55:05.063543 containerd[1572]: time="2025-09-16T04:55:05.063497077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544ccf8767-bksvm,Uid:6945706e-6afb-4726-9693-b0e242fb8eca,Namespace:calico-system,Attempt:0,} returns sandbox id \"12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b\"" Sep 16 04:55:05.065196 containerd[1572]: time="2025-09-16T04:55:05.065168275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 16 04:55:06.050068 systemd-networkd[1494]: calife9afb5b738: Gained IPv6LL Sep 16 04:55:06.178082 systemd-networkd[1494]: vxlan.calico: Gained IPv6LL Sep 16 04:55:07.176611 kubelet[2723]: E0916 04:55:07.176544 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:07.177100 kubelet[2723]: E0916 04:55:07.176709 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:07.177134 containerd[1572]: time="2025-09-16T04:55:07.177085253Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-w9xwf,Uid:f70d36eb-9570-4d30-8841-f82e86edba4d,Namespace:kube-system,Attempt:0,}" Sep 16 04:55:07.177384 containerd[1572]: time="2025-09-16T04:55:07.177236777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zvvbz,Uid:391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3,Namespace:kube-system,Attempt:0,}" Sep 16 04:55:07.225732 containerd[1572]: time="2025-09-16T04:55:07.225703834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:07.250398 containerd[1572]: time="2025-09-16T04:55:07.250344983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 16 04:55:07.256059 containerd[1572]: time="2025-09-16T04:55:07.255770210Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:07.259265 containerd[1572]: time="2025-09-16T04:55:07.259224356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:07.261042 containerd[1572]: time="2025-09-16T04:55:07.260738870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.195531561s" Sep 16 04:55:07.261042 containerd[1572]: time="2025-09-16T04:55:07.260767775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference 
\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 16 04:55:07.265332 containerd[1572]: time="2025-09-16T04:55:07.265286079Z" level=info msg="CreateContainer within sandbox \"12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 16 04:55:07.317090 containerd[1572]: time="2025-09-16T04:55:07.317033055Z" level=info msg="Container 7dcb4f47768405dc32639982656b6dc29858dd9858410b8a3608cff77c6b76d4: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:07.331289 containerd[1572]: time="2025-09-16T04:55:07.331242556Z" level=info msg="CreateContainer within sandbox \"12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7dcb4f47768405dc32639982656b6dc29858dd9858410b8a3608cff77c6b76d4\"" Sep 16 04:55:07.338081 containerd[1572]: time="2025-09-16T04:55:07.338040771Z" level=info msg="StartContainer for \"7dcb4f47768405dc32639982656b6dc29858dd9858410b8a3608cff77c6b76d4\"" Sep 16 04:55:07.343286 containerd[1572]: time="2025-09-16T04:55:07.343229063Z" level=info msg="connecting to shim 7dcb4f47768405dc32639982656b6dc29858dd9858410b8a3608cff77c6b76d4" address="unix:///run/containerd/s/2246c6fd7baf3c108b17b51e6af6c1d3888ef73fae4ddb46ae5705417443d7b8" protocol=ttrpc version=3 Sep 16 04:55:07.370218 systemd[1]: Started cri-containerd-7dcb4f47768405dc32639982656b6dc29858dd9858410b8a3608cff77c6b76d4.scope - libcontainer container 7dcb4f47768405dc32639982656b6dc29858dd9858410b8a3608cff77c6b76d4. 
Sep 16 04:55:07.395069 systemd-networkd[1494]: calif86836789a9: Link UP Sep 16 04:55:07.396122 systemd-networkd[1494]: calif86836789a9: Gained carrier Sep 16 04:55:07.417316 containerd[1572]: 2025-09-16 04:55:07.309 [INFO][4178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0 coredns-7c65d6cfc9- kube-system f70d36eb-9570-4d30-8841-f82e86edba4d 834 0 2025-09-16 04:54:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-w9xwf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif86836789a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9xwf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w9xwf-" Sep 16 04:55:07.417316 containerd[1572]: 2025-09-16 04:55:07.310 [INFO][4178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9xwf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0" Sep 16 04:55:07.417316 containerd[1572]: 2025-09-16 04:55:07.348 [INFO][4207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" HandleID="k8s-pod-network.a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Workload="localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0" Sep 16 04:55:07.417642 containerd[1572]: 2025-09-16 04:55:07.348 [INFO][4207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" 
HandleID="k8s-pod-network.a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Workload="localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-w9xwf", "timestamp":"2025-09-16 04:55:07.348549814 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:55:07.417642 containerd[1572]: 2025-09-16 04:55:07.348 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:55:07.417642 containerd[1572]: 2025-09-16 04:55:07.348 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:55:07.417642 containerd[1572]: 2025-09-16 04:55:07.348 [INFO][4207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:55:07.417642 containerd[1572]: 2025-09-16 04:55:07.356 [INFO][4207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" host="localhost" Sep 16 04:55:07.417642 containerd[1572]: 2025-09-16 04:55:07.363 [INFO][4207] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:55:07.417642 containerd[1572]: 2025-09-16 04:55:07.368 [INFO][4207] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:55:07.417642 containerd[1572]: 2025-09-16 04:55:07.370 [INFO][4207] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:07.417642 containerd[1572]: 2025-09-16 04:55:07.372 [INFO][4207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:07.417642 containerd[1572]: 2025-09-16 04:55:07.372 
[INFO][4207] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" host="localhost" Sep 16 04:55:07.418297 containerd[1572]: 2025-09-16 04:55:07.374 [INFO][4207] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62 Sep 16 04:55:07.418297 containerd[1572]: 2025-09-16 04:55:07.378 [INFO][4207] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" host="localhost" Sep 16 04:55:07.418297 containerd[1572]: 2025-09-16 04:55:07.385 [INFO][4207] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" host="localhost" Sep 16 04:55:07.418297 containerd[1572]: 2025-09-16 04:55:07.385 [INFO][4207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" host="localhost" Sep 16 04:55:07.418297 containerd[1572]: 2025-09-16 04:55:07.385 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:55:07.418297 containerd[1572]: 2025-09-16 04:55:07.386 [INFO][4207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" HandleID="k8s-pod-network.a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Workload="localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0" Sep 16 04:55:07.418439 containerd[1572]: 2025-09-16 04:55:07.390 [INFO][4178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9xwf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f70d36eb-9570-4d30-8841-f82e86edba4d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-w9xwf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif86836789a9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:07.418513 containerd[1572]: 2025-09-16 04:55:07.390 [INFO][4178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9xwf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0" Sep 16 04:55:07.418513 containerd[1572]: 2025-09-16 04:55:07.390 [INFO][4178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif86836789a9 ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9xwf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0" Sep 16 04:55:07.418513 containerd[1572]: 2025-09-16 04:55:07.396 [INFO][4178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9xwf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0" Sep 16 04:55:07.418588 containerd[1572]: 2025-09-16 04:55:07.398 [INFO][4178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9xwf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f70d36eb-9570-4d30-8841-f82e86edba4d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62", Pod:"coredns-7c65d6cfc9-w9xwf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif86836789a9", MAC:"06:16:d3:2b:5b:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:07.418588 containerd[1572]: 2025-09-16 04:55:07.411 [INFO][4178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" Namespace="kube-system" Pod="coredns-7c65d6cfc9-w9xwf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--w9xwf-eth0" Sep 16 04:55:07.442410 containerd[1572]: time="2025-09-16T04:55:07.442282587Z" level=info msg="StartContainer for \"7dcb4f47768405dc32639982656b6dc29858dd9858410b8a3608cff77c6b76d4\" returns successfully" Sep 16 04:55:07.446458 containerd[1572]: time="2025-09-16T04:55:07.446412943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 16 04:55:07.456443 containerd[1572]: time="2025-09-16T04:55:07.456394925Z" level=info msg="connecting to shim a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62" address="unix:///run/containerd/s/8778adeb8fc9241626135c5d76d4f560b803ea99606e34909fc0cc0b48d6d859" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:55:07.494049 systemd[1]: Started cri-containerd-a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62.scope - libcontainer container a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62. 
Sep 16 04:55:07.497085 systemd-networkd[1494]: cali9689236aa6b: Link UP Sep 16 04:55:07.504338 systemd-networkd[1494]: cali9689236aa6b: Gained carrier Sep 16 04:55:07.511644 systemd-resolved[1427]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.308 [INFO][4187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0 coredns-7c65d6cfc9- kube-system 391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3 838 0 2025-09-16 04:54:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-zvvbz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9689236aa6b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvvbz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvvbz-" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.308 [INFO][4187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvvbz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.352 [INFO][4205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" HandleID="k8s-pod-network.b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Workload="localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.352 [INFO][4205] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" HandleID="k8s-pod-network.b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Workload="localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000129290), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-zvvbz", "timestamp":"2025-09-16 04:55:07.35259478 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.352 [INFO][4205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.385 [INFO][4205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.386 [INFO][4205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.458 [INFO][4205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" host="localhost" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.466 [INFO][4205] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.471 [INFO][4205] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.473 [INFO][4205] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.475 [INFO][4205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.475 [INFO][4205] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" host="localhost" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.476 [INFO][4205] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960 Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.480 [INFO][4205] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" host="localhost" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.486 [INFO][4205] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" host="localhost" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.486 [INFO][4205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" host="localhost" Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.486 [INFO][4205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:55:07.522206 containerd[1572]: 2025-09-16 04:55:07.486 [INFO][4205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" HandleID="k8s-pod-network.b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Workload="localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0" Sep 16 04:55:07.523131 containerd[1572]: 2025-09-16 04:55:07.492 [INFO][4187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvvbz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-zvvbz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9689236aa6b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:07.523131 containerd[1572]: 2025-09-16 04:55:07.492 [INFO][4187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvvbz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0" Sep 16 04:55:07.523131 containerd[1572]: 2025-09-16 04:55:07.492 [INFO][4187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9689236aa6b ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvvbz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0" Sep 16 04:55:07.523131 containerd[1572]: 2025-09-16 04:55:07.505 [INFO][4187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvvbz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0" Sep 16 04:55:07.523131 containerd[1572]: 2025-09-16 04:55:07.506 [INFO][4187] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvvbz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960", Pod:"coredns-7c65d6cfc9-zvvbz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9689236aa6b", MAC:"ce:1a:30:f2:46:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:07.523131 containerd[1572]: 2025-09-16 04:55:07.518 [INFO][4187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvvbz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvvbz-eth0" Sep 16 04:55:07.549726 containerd[1572]: time="2025-09-16T04:55:07.549670590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w9xwf,Uid:f70d36eb-9570-4d30-8841-f82e86edba4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62\"" Sep 16 04:55:07.550651 kubelet[2723]: E0916 04:55:07.550628 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:07.553291 containerd[1572]: time="2025-09-16T04:55:07.553180420Z" level=info msg="connecting to shim b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960" address="unix:///run/containerd/s/07acb25416a7203a1829e8dcc09534e1b6ccac9335674de905f410de781b6784" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:55:07.553673 containerd[1572]: time="2025-09-16T04:55:07.553644893Z" level=info msg="CreateContainer within sandbox \"a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:55:07.574745 containerd[1572]: time="2025-09-16T04:55:07.574693696Z" level=info msg="Container 867c4b1cf4c65d1e0bf1ee1983f5002968207b20edba4a53c57ac7c026ca3a9a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:07.581586 containerd[1572]: time="2025-09-16T04:55:07.581540973Z" level=info msg="CreateContainer within sandbox \"a05910e5384ab988291e6c9dc6716b6382e60220bcd081c44660c28d6a99fc62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"867c4b1cf4c65d1e0bf1ee1983f5002968207b20edba4a53c57ac7c026ca3a9a\"" Sep 16 04:55:07.582332 containerd[1572]: time="2025-09-16T04:55:07.582293957Z" level=info msg="StartContainer 
for \"867c4b1cf4c65d1e0bf1ee1983f5002968207b20edba4a53c57ac7c026ca3a9a\"" Sep 16 04:55:07.583099 containerd[1572]: time="2025-09-16T04:55:07.583072359Z" level=info msg="connecting to shim 867c4b1cf4c65d1e0bf1ee1983f5002968207b20edba4a53c57ac7c026ca3a9a" address="unix:///run/containerd/s/8778adeb8fc9241626135c5d76d4f560b803ea99606e34909fc0cc0b48d6d859" protocol=ttrpc version=3 Sep 16 04:55:07.585140 systemd[1]: Started cri-containerd-b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960.scope - libcontainer container b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960. Sep 16 04:55:07.605021 systemd[1]: Started cri-containerd-867c4b1cf4c65d1e0bf1ee1983f5002968207b20edba4a53c57ac7c026ca3a9a.scope - libcontainer container 867c4b1cf4c65d1e0bf1ee1983f5002968207b20edba4a53c57ac7c026ca3a9a. Sep 16 04:55:07.610079 systemd-resolved[1427]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:55:07.643747 containerd[1572]: time="2025-09-16T04:55:07.643691484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zvvbz,Uid:391eb9f6-f5e6-432e-af20-4ec3ed2ffcf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960\"" Sep 16 04:55:07.644968 kubelet[2723]: E0916 04:55:07.644714 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:07.647006 containerd[1572]: time="2025-09-16T04:55:07.646491301Z" level=info msg="CreateContainer within sandbox \"b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:55:07.657540 containerd[1572]: time="2025-09-16T04:55:07.657495183Z" level=info msg="StartContainer for \"867c4b1cf4c65d1e0bf1ee1983f5002968207b20edba4a53c57ac7c026ca3a9a\" returns successfully" Sep 16 
04:55:07.660615 containerd[1572]: time="2025-09-16T04:55:07.660578684Z" level=info msg="Container bda987bc644ac6c6425627c294dd2cdc214c95eb2a2d844d4f7d56d28526ced2: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:07.668302 containerd[1572]: time="2025-09-16T04:55:07.668249247Z" level=info msg="CreateContainer within sandbox \"b3a8fcdfcb50c14556d3923106755fe6e68409e8736bbe072790a0a486096960\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bda987bc644ac6c6425627c294dd2cdc214c95eb2a2d844d4f7d56d28526ced2\"" Sep 16 04:55:07.668854 containerd[1572]: time="2025-09-16T04:55:07.668796695Z" level=info msg="StartContainer for \"bda987bc644ac6c6425627c294dd2cdc214c95eb2a2d844d4f7d56d28526ced2\"" Sep 16 04:55:07.669763 containerd[1572]: time="2025-09-16T04:55:07.669734836Z" level=info msg="connecting to shim bda987bc644ac6c6425627c294dd2cdc214c95eb2a2d844d4f7d56d28526ced2" address="unix:///run/containerd/s/07acb25416a7203a1829e8dcc09534e1b6ccac9335674de905f410de781b6784" protocol=ttrpc version=3 Sep 16 04:55:07.699045 systemd[1]: Started cri-containerd-bda987bc644ac6c6425627c294dd2cdc214c95eb2a2d844d4f7d56d28526ced2.scope - libcontainer container bda987bc644ac6c6425627c294dd2cdc214c95eb2a2d844d4f7d56d28526ced2. 
Sep 16 04:55:07.734981 containerd[1572]: time="2025-09-16T04:55:07.734899537Z" level=info msg="StartContainer for \"bda987bc644ac6c6425627c294dd2cdc214c95eb2a2d844d4f7d56d28526ced2\" returns successfully" Sep 16 04:55:08.176895 containerd[1572]: time="2025-09-16T04:55:08.176817655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f449fd945-8b749,Uid:f6423791-ce3c-46c6-8889-ef61fdc144e3,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:55:08.177322 containerd[1572]: time="2025-09-16T04:55:08.177258232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-clthl,Uid:151ed0ba-9ed3-49b1-b342-5dcd09107105,Namespace:calico-system,Attempt:0,}" Sep 16 04:55:08.518926 systemd-networkd[1494]: calib034002e0e4: Link UP Sep 16 04:55:08.519593 systemd-networkd[1494]: calib034002e0e4: Gained carrier Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.430 [INFO][4435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0 calico-apiserver-7f449fd945- calico-apiserver f6423791-ce3c-46c6-8889-ef61fdc144e3 831 0 2025-09-16 04:54:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f449fd945 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f449fd945-8b749 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib034002e0e4 [] [] }} ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-8b749" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--8b749-" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.430 [INFO][4435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-8b749" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.457 [INFO][4465] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" HandleID="k8s-pod-network.4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Workload="localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.458 [INFO][4465] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" HandleID="k8s-pod-network.4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Workload="localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fed0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f449fd945-8b749", "timestamp":"2025-09-16 04:55:08.457766785 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.458 [INFO][4465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.458 [INFO][4465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.458 [INFO][4465] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.464 [INFO][4465] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" host="localhost" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.472 [INFO][4465] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.477 [INFO][4465] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.479 [INFO][4465] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.481 [INFO][4465] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.481 [INFO][4465] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" host="localhost" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.483 [INFO][4465] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.492 [INFO][4465] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" host="localhost" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.503 [INFO][4465] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" host="localhost" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.503 [INFO][4465] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" host="localhost" Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.503 [INFO][4465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:55:08.555738 containerd[1572]: 2025-09-16 04:55:08.503 [INFO][4465] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" HandleID="k8s-pod-network.4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Workload="localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0" Sep 16 04:55:08.556375 containerd[1572]: 2025-09-16 04:55:08.512 [INFO][4435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-8b749" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0", GenerateName:"calico-apiserver-7f449fd945-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6423791-ce3c-46c6-8889-ef61fdc144e3", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f449fd945", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f449fd945-8b749", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib034002e0e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:08.556375 containerd[1572]: 2025-09-16 04:55:08.512 [INFO][4435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-8b749" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0" Sep 16 04:55:08.556375 containerd[1572]: 2025-09-16 04:55:08.512 [INFO][4435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib034002e0e4 ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-8b749" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0" Sep 16 04:55:08.556375 containerd[1572]: 2025-09-16 04:55:08.519 [INFO][4435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-8b749" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0" Sep 16 04:55:08.556375 containerd[1572]: 2025-09-16 04:55:08.523 [INFO][4435] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-8b749" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0", GenerateName:"calico-apiserver-7f449fd945-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6423791-ce3c-46c6-8889-ef61fdc144e3", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f449fd945", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d", Pod:"calico-apiserver-7f449fd945-8b749", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib034002e0e4", MAC:"b6:8b:19:f5:65:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:08.556375 containerd[1572]: 2025-09-16 04:55:08.546 [INFO][4435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-8b749" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--8b749-eth0" Sep 16 04:55:08.574155 kubelet[2723]: E0916 04:55:08.574094 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:08.577881 kubelet[2723]: E0916 04:55:08.577220 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:08.592552 kubelet[2723]: I0916 04:55:08.592483 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w9xwf" podStartSLOduration=40.592460882 podStartE2EDuration="40.592460882s" podCreationTimestamp="2025-09-16 04:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:55:08.592237161 +0000 UTC m=+46.536052569" watchObservedRunningTime="2025-09-16 04:55:08.592460882 +0000 UTC m=+46.536276300" Sep 16 04:55:08.611345 containerd[1572]: time="2025-09-16T04:55:08.611279464Z" level=info msg="connecting to shim 4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d" address="unix:///run/containerd/s/a2c7b52671c3b65eaa4f1b5d36c3d32afef59e99fecceb7fae5fe6a459a0073a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:55:08.630519 kubelet[2723]: I0916 04:55:08.630437 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zvvbz" podStartSLOduration=40.630389905 podStartE2EDuration="40.630389905s" podCreationTimestamp="2025-09-16 04:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-16 04:55:08.629164886 +0000 UTC m=+46.572980294" watchObservedRunningTime="2025-09-16 04:55:08.630389905 +0000 UTC m=+46.574205313" Sep 16 04:55:08.635661 systemd-networkd[1494]: cali40ca994fd88: Link UP Sep 16 04:55:08.636398 systemd-networkd[1494]: cali40ca994fd88: Gained carrier Sep 16 04:55:08.651177 systemd[1]: Started cri-containerd-4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d.scope - libcontainer container 4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d. Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.432 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--clthl-eth0 goldmane-7988f88666- calico-system 151ed0ba-9ed3-49b1-b342-5dcd09107105 841 0 2025-09-16 04:54:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-clthl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali40ca994fd88 [] [] }} ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Namespace="calico-system" Pod="goldmane-7988f88666-clthl" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--clthl-" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.432 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Namespace="calico-system" Pod="goldmane-7988f88666-clthl" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--clthl-eth0" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.466 [INFO][4467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" 
HandleID="k8s-pod-network.1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Workload="localhost-k8s-goldmane--7988f88666--clthl-eth0" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.466 [INFO][4467] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" HandleID="k8s-pod-network.1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Workload="localhost-k8s-goldmane--7988f88666--clthl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d76f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-clthl", "timestamp":"2025-09-16 04:55:08.466753318 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.466 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.505 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.506 [INFO][4467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.567 [INFO][4467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" host="localhost" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.576 [INFO][4467] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.588 [INFO][4467] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.591 [INFO][4467] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.597 [INFO][4467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.598 [INFO][4467] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" host="localhost" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.600 [INFO][4467] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7 Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.605 [INFO][4467] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" host="localhost" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.620 [INFO][4467] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" host="localhost" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.621 [INFO][4467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" host="localhost" Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.622 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:55:08.658819 containerd[1572]: 2025-09-16 04:55:08.622 [INFO][4467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" HandleID="k8s-pod-network.1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Workload="localhost-k8s-goldmane--7988f88666--clthl-eth0" Sep 16 04:55:08.659349 containerd[1572]: 2025-09-16 04:55:08.628 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Namespace="calico-system" Pod="goldmane-7988f88666-clthl" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--clthl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--clthl-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"151ed0ba-9ed3-49b1-b342-5dcd09107105", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-clthl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40ca994fd88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:08.659349 containerd[1572]: 2025-09-16 04:55:08.628 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Namespace="calico-system" Pod="goldmane-7988f88666-clthl" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--clthl-eth0" Sep 16 04:55:08.659349 containerd[1572]: 2025-09-16 04:55:08.628 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40ca994fd88 ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Namespace="calico-system" Pod="goldmane-7988f88666-clthl" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--clthl-eth0" Sep 16 04:55:08.659349 containerd[1572]: 2025-09-16 04:55:08.636 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Namespace="calico-system" Pod="goldmane-7988f88666-clthl" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--clthl-eth0" Sep 16 04:55:08.659349 containerd[1572]: 2025-09-16 04:55:08.638 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Namespace="calico-system" Pod="goldmane-7988f88666-clthl" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--clthl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--clthl-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"151ed0ba-9ed3-49b1-b342-5dcd09107105", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7", Pod:"goldmane-7988f88666-clthl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40ca994fd88", MAC:"02:d3:6e:dc:11:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:08.659349 containerd[1572]: 2025-09-16 04:55:08.651 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" Namespace="calico-system" Pod="goldmane-7988f88666-clthl" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--clthl-eth0" Sep 16 04:55:08.686021 systemd-resolved[1427]: Failed to determine the local hostname and LLMNR/mDNS names, 
ignoring: No such device or address Sep 16 04:55:08.692925 containerd[1572]: time="2025-09-16T04:55:08.692378262Z" level=info msg="connecting to shim 1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7" address="unix:///run/containerd/s/9cc80b55af876f96921b6783eb9bd18bb3953fd909931db4407c55bc4d5583ea" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:55:08.723019 systemd[1]: Started cri-containerd-1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7.scope - libcontainer container 1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7. Sep 16 04:55:08.727060 containerd[1572]: time="2025-09-16T04:55:08.727013623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f449fd945-8b749,Uid:f6423791-ce3c-46c6-8889-ef61fdc144e3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d\"" Sep 16 04:55:08.737357 systemd-resolved[1427]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:55:08.773975 containerd[1572]: time="2025-09-16T04:55:08.773751596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-clthl,Uid:151ed0ba-9ed3-49b1-b342-5dcd09107105,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7\"" Sep 16 04:55:08.802101 systemd-networkd[1494]: cali9689236aa6b: Gained IPv6LL Sep 16 04:55:09.058124 systemd-networkd[1494]: calif86836789a9: Gained IPv6LL Sep 16 04:55:09.176605 containerd[1572]: time="2025-09-16T04:55:09.176544130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b9fb9cbc-gqtzj,Uid:fa078a04-b047-40fe-8fe4-43f64c977de0,Namespace:calico-system,Attempt:0,}" Sep 16 04:55:09.458583 systemd-networkd[1494]: cali6e9d6c39782: Link UP Sep 16 04:55:09.458805 systemd-networkd[1494]: cali6e9d6c39782: Gained carrier Sep 16 04:55:09.466430 containerd[1572]: 
2025-09-16 04:55:09.372 [INFO][4608] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0 calico-kube-controllers-76b9fb9cbc- calico-system fa078a04-b047-40fe-8fe4-43f64c977de0 840 0 2025-09-16 04:54:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76b9fb9cbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-76b9fb9cbc-gqtzj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6e9d6c39782 [] [] }} ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Namespace="calico-system" Pod="calico-kube-controllers-76b9fb9cbc-gqtzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.372 [INFO][4608] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Namespace="calico-system" Pod="calico-kube-controllers-76b9fb9cbc-gqtzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.406 [INFO][4618] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" HandleID="k8s-pod-network.dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Workload="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.406 [INFO][4618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" 
HandleID="k8s-pod-network.dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Workload="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000112560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-76b9fb9cbc-gqtzj", "timestamp":"2025-09-16 04:55:09.405980209 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.406 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.406 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.406 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.413 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" host="localhost" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.418 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.422 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.424 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.426 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:09.466430 
containerd[1572]: 2025-09-16 04:55:09.426 [INFO][4618] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" host="localhost" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.428 [INFO][4618] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2 Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.433 [INFO][4618] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" host="localhost" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.440 [INFO][4618] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" host="localhost" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.440 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" host="localhost" Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.440 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:55:09.466430 containerd[1572]: 2025-09-16 04:55:09.440 [INFO][4618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" HandleID="k8s-pod-network.dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Workload="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0" Sep 16 04:55:09.468580 containerd[1572]: 2025-09-16 04:55:09.444 [INFO][4608] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Namespace="calico-system" Pod="calico-kube-controllers-76b9fb9cbc-gqtzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0", GenerateName:"calico-kube-controllers-76b9fb9cbc-", Namespace:"calico-system", SelfLink:"", UID:"fa078a04-b047-40fe-8fe4-43f64c977de0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76b9fb9cbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-76b9fb9cbc-gqtzj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6e9d6c39782", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:09.468580 containerd[1572]: 2025-09-16 04:55:09.445 [INFO][4608] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Namespace="calico-system" Pod="calico-kube-controllers-76b9fb9cbc-gqtzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0" Sep 16 04:55:09.468580 containerd[1572]: 2025-09-16 04:55:09.445 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e9d6c39782 ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Namespace="calico-system" Pod="calico-kube-controllers-76b9fb9cbc-gqtzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0" Sep 16 04:55:09.468580 containerd[1572]: 2025-09-16 04:55:09.448 [INFO][4608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Namespace="calico-system" Pod="calico-kube-controllers-76b9fb9cbc-gqtzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0" Sep 16 04:55:09.468580 containerd[1572]: 2025-09-16 04:55:09.449 [INFO][4608] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Namespace="calico-system" Pod="calico-kube-controllers-76b9fb9cbc-gqtzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0", GenerateName:"calico-kube-controllers-76b9fb9cbc-", Namespace:"calico-system", SelfLink:"", UID:"fa078a04-b047-40fe-8fe4-43f64c977de0", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76b9fb9cbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2", Pod:"calico-kube-controllers-76b9fb9cbc-gqtzj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6e9d6c39782", MAC:"1a:70:2b:0b:e6:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:09.468580 containerd[1572]: 2025-09-16 04:55:09.462 [INFO][4608] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" Namespace="calico-system" Pod="calico-kube-controllers-76b9fb9cbc-gqtzj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76b9fb9cbc--gqtzj-eth0" Sep 16 04:55:09.495465 containerd[1572]: time="2025-09-16T04:55:09.495411880Z" level=info msg="connecting to shim 
dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2" address="unix:///run/containerd/s/402f7f25813a78a2ab32ec4f0f361fa5b996324d0296f7d54e3e07c468c88912" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:55:09.532088 systemd[1]: Started cri-containerd-dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2.scope - libcontainer container dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2. Sep 16 04:55:09.549544 systemd-resolved[1427]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:55:09.570182 systemd-networkd[1494]: calib034002e0e4: Gained IPv6LL Sep 16 04:55:09.585243 kubelet[2723]: E0916 04:55:09.585195 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:09.586224 kubelet[2723]: E0916 04:55:09.586194 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:09.877893 containerd[1572]: time="2025-09-16T04:55:09.877655714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b9fb9cbc-gqtzj,Uid:fa078a04-b047-40fe-8fe4-43f64c977de0,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2\"" Sep 16 04:55:09.903742 containerd[1572]: time="2025-09-16T04:55:09.903672338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:09.904613 containerd[1572]: time="2025-09-16T04:55:09.904565435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 16 04:55:09.905960 containerd[1572]: time="2025-09-16T04:55:09.905894761Z" level=info 
msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:09.909435 containerd[1572]: time="2025-09-16T04:55:09.909387157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:09.910042 containerd[1572]: time="2025-09-16T04:55:09.909993906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.463548423s" Sep 16 04:55:09.910042 containerd[1572]: time="2025-09-16T04:55:09.910030184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 16 04:55:09.911146 containerd[1572]: time="2025-09-16T04:55:09.911103960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:55:09.914658 containerd[1572]: time="2025-09-16T04:55:09.914614652Z" level=info msg="CreateContainer within sandbox \"12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 16 04:55:09.926672 containerd[1572]: time="2025-09-16T04:55:09.926585806Z" level=info msg="Container 6b7b9758743689099aed7a6443b39bf351685475a0c3c953c1f6a60520417d32: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:09.937037 containerd[1572]: time="2025-09-16T04:55:09.936979169Z" level=info msg="CreateContainer within sandbox 
\"12a3626f267121e510735c16949e84c30f17aa23ffb4c1856fc13916fdd2b28b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6b7b9758743689099aed7a6443b39bf351685475a0c3c953c1f6a60520417d32\"" Sep 16 04:55:09.937712 containerd[1572]: time="2025-09-16T04:55:09.937653425Z" level=info msg="StartContainer for \"6b7b9758743689099aed7a6443b39bf351685475a0c3c953c1f6a60520417d32\"" Sep 16 04:55:09.939179 containerd[1572]: time="2025-09-16T04:55:09.939145255Z" level=info msg="connecting to shim 6b7b9758743689099aed7a6443b39bf351685475a0c3c953c1f6a60520417d32" address="unix:///run/containerd/s/2246c6fd7baf3c108b17b51e6af6c1d3888ef73fae4ddb46ae5705417443d7b8" protocol=ttrpc version=3 Sep 16 04:55:09.968198 systemd[1]: Started cri-containerd-6b7b9758743689099aed7a6443b39bf351685475a0c3c953c1f6a60520417d32.scope - libcontainer container 6b7b9758743689099aed7a6443b39bf351685475a0c3c953c1f6a60520417d32. Sep 16 04:55:10.035134 containerd[1572]: time="2025-09-16T04:55:10.035075063Z" level=info msg="StartContainer for \"6b7b9758743689099aed7a6443b39bf351685475a0c3c953c1f6a60520417d32\" returns successfully" Sep 16 04:55:10.177769 containerd[1572]: time="2025-09-16T04:55:10.177509564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f449fd945-sk5l6,Uid:fcffb6e8-9720-44b6-b6de-3dc13e3c8432,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:55:10.250182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1455008013.mount: Deactivated successfully. 
Sep 16 04:55:10.350835 systemd-networkd[1494]: cali4c829a5df5e: Link UP Sep 16 04:55:10.351509 systemd-networkd[1494]: cali4c829a5df5e: Gained carrier Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.288 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0 calico-apiserver-7f449fd945- calico-apiserver fcffb6e8-9720-44b6-b6de-3dc13e3c8432 842 0 2025-09-16 04:54:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f449fd945 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f449fd945-sk5l6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c829a5df5e [] [] }} ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-sk5l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.289 [INFO][4726] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-sk5l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.314 [INFO][4740] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" HandleID="k8s-pod-network.a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Workload="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.314 [INFO][4740] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" HandleID="k8s-pod-network.a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Workload="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c71c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f449fd945-sk5l6", "timestamp":"2025-09-16 04:55:10.314690604 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.314 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.314 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.315 [INFO][4740] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.321 [INFO][4740] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" host="localhost" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.326 [INFO][4740] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.330 [INFO][4740] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.332 [INFO][4740] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.334 [INFO][4740] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.334 [INFO][4740] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" host="localhost" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.335 [INFO][4740] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.338 [INFO][4740] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" host="localhost" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.345 [INFO][4740] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" host="localhost" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.345 [INFO][4740] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" host="localhost" Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.345 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:55:10.367757 containerd[1572]: 2025-09-16 04:55:10.345 [INFO][4740] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" HandleID="k8s-pod-network.a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Workload="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0" Sep 16 04:55:10.368430 containerd[1572]: 2025-09-16 04:55:10.348 [INFO][4726] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-sk5l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0", GenerateName:"calico-apiserver-7f449fd945-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcffb6e8-9720-44b6-b6de-3dc13e3c8432", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f449fd945", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f449fd945-sk5l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c829a5df5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:10.368430 containerd[1572]: 2025-09-16 04:55:10.348 [INFO][4726] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-sk5l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0" Sep 16 04:55:10.368430 containerd[1572]: 2025-09-16 04:55:10.348 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c829a5df5e ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-sk5l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0" Sep 16 04:55:10.368430 containerd[1572]: 2025-09-16 04:55:10.354 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-sk5l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0" Sep 16 04:55:10.368430 containerd[1572]: 2025-09-16 04:55:10.354 [INFO][4726] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-sk5l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0", GenerateName:"calico-apiserver-7f449fd945-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcffb6e8-9720-44b6-b6de-3dc13e3c8432", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f449fd945", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea", Pod:"calico-apiserver-7f449fd945-sk5l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c829a5df5e", MAC:"7a:d0:21:5e:0f:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:10.368430 containerd[1572]: 2025-09-16 04:55:10.364 [INFO][4726] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" Namespace="calico-apiserver" Pod="calico-apiserver-7f449fd945-sk5l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f449fd945--sk5l6-eth0" Sep 16 04:55:10.394677 containerd[1572]: time="2025-09-16T04:55:10.394614604Z" level=info msg="connecting to shim a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea" address="unix:///run/containerd/s/b9935df4ff1b6f89e6b5bbdd67d475f420ced6d231d2f908f1cca25362e9b4aa" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:55:10.434822 systemd[1]: Started cri-containerd-a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea.scope - libcontainer container a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea. Sep 16 04:55:10.459988 systemd-resolved[1427]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:55:10.502991 containerd[1572]: time="2025-09-16T04:55:10.502948219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f449fd945-sk5l6,Uid:fcffb6e8-9720-44b6-b6de-3dc13e3c8432,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea\"" Sep 16 04:55:10.530136 systemd-networkd[1494]: cali40ca994fd88: Gained IPv6LL Sep 16 04:55:10.592258 kubelet[2723]: E0916 04:55:10.592200 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:10.592834 kubelet[2723]: E0916 04:55:10.592352 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:10.604282 kubelet[2723]: I0916 04:55:10.604128 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-544ccf8767-bksvm" 
podStartSLOduration=3.758039791 podStartE2EDuration="8.604104469s" podCreationTimestamp="2025-09-16 04:55:02 +0000 UTC" firstStartedPulling="2025-09-16 04:55:05.064822145 +0000 UTC m=+43.008637553" lastFinishedPulling="2025-09-16 04:55:09.910886813 +0000 UTC m=+47.854702231" observedRunningTime="2025-09-16 04:55:10.603347458 +0000 UTC m=+48.547162876" watchObservedRunningTime="2025-09-16 04:55:10.604104469 +0000 UTC m=+48.547919867" Sep 16 04:55:10.722154 systemd-networkd[1494]: cali6e9d6c39782: Gained IPv6LL Sep 16 04:55:11.177439 containerd[1572]: time="2025-09-16T04:55:11.177382791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ld4qv,Uid:6287dfa0-7875-4f8b-8630-23e2f6643cbc,Namespace:calico-system,Attempt:0,}" Sep 16 04:55:11.280964 systemd-networkd[1494]: cali4f4eebe0519: Link UP Sep 16 04:55:11.281217 systemd-networkd[1494]: cali4f4eebe0519: Gained carrier Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.213 [INFO][4805] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ld4qv-eth0 csi-node-driver- calico-system 6287dfa0-7875-4f8b-8630-23e2f6643cbc 721 0 2025-09-16 04:54:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ld4qv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4f4eebe0519 [] [] }} ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" Namespace="calico-system" Pod="csi-node-driver-ld4qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--ld4qv-" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.213 [INFO][4805] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" Namespace="calico-system" Pod="csi-node-driver-ld4qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--ld4qv-eth0" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.239 [INFO][4820] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" HandleID="k8s-pod-network.84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" Workload="localhost-k8s-csi--node--driver--ld4qv-eth0" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.239 [INFO][4820] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" HandleID="k8s-pod-network.84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" Workload="localhost-k8s-csi--node--driver--ld4qv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b6630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ld4qv", "timestamp":"2025-09-16 04:55:11.239709429 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.240 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.241 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.241 [INFO][4820] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.248 [INFO][4820] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" host="localhost" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.253 [INFO][4820] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.256 [INFO][4820] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.258 [INFO][4820] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.261 [INFO][4820] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.261 [INFO][4820] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" host="localhost" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.262 [INFO][4820] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9 Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.267 [INFO][4820] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" host="localhost" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.274 [INFO][4820] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" host="localhost" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.274 [INFO][4820] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" host="localhost" Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.274 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:55:11.298359 containerd[1572]: 2025-09-16 04:55:11.274 [INFO][4820] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" HandleID="k8s-pod-network.84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" Workload="localhost-k8s-csi--node--driver--ld4qv-eth0" Sep 16 04:55:11.299044 containerd[1572]: 2025-09-16 04:55:11.277 [INFO][4805] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" Namespace="calico-system" Pod="csi-node-driver-ld4qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--ld4qv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ld4qv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6287dfa0-7875-4f8b-8630-23e2f6643cbc", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ld4qv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f4eebe0519", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:11.299044 containerd[1572]: 2025-09-16 04:55:11.277 [INFO][4805] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" Namespace="calico-system" Pod="csi-node-driver-ld4qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--ld4qv-eth0" Sep 16 04:55:11.299044 containerd[1572]: 2025-09-16 04:55:11.277 [INFO][4805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f4eebe0519 ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" Namespace="calico-system" Pod="csi-node-driver-ld4qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--ld4qv-eth0" Sep 16 04:55:11.299044 containerd[1572]: 2025-09-16 04:55:11.280 [INFO][4805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" Namespace="calico-system" Pod="csi-node-driver-ld4qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--ld4qv-eth0" Sep 16 04:55:11.299044 containerd[1572]: 2025-09-16 04:55:11.280 [INFO][4805] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" 
Namespace="calico-system" Pod="csi-node-driver-ld4qv" WorkloadEndpoint="localhost-k8s-csi--node--driver--ld4qv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ld4qv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6287dfa0-7875-4f8b-8630-23e2f6643cbc", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 54, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9", Pod:"csi-node-driver-ld4qv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f4eebe0519", MAC:"52:f4:58:fb:43:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:55:11.299044 containerd[1572]: 2025-09-16 04:55:11.294 [INFO][4805] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" Namespace="calico-system" Pod="csi-node-driver-ld4qv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ld4qv-eth0" Sep 16 04:55:11.323246 containerd[1572]: time="2025-09-16T04:55:11.323199604Z" level=info msg="connecting to shim 84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9" address="unix:///run/containerd/s/77d2aac5720331edd94284810c27193c9769e48b7c9c21100445e82e4149ce9f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:55:11.369065 systemd[1]: Started cri-containerd-84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9.scope - libcontainer container 84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9. Sep 16 04:55:11.392413 systemd-resolved[1427]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 16 04:55:11.440265 containerd[1572]: time="2025-09-16T04:55:11.439847072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ld4qv,Uid:6287dfa0-7875-4f8b-8630-23e2f6643cbc,Namespace:calico-system,Attempt:0,} returns sandbox id \"84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9\"" Sep 16 04:55:12.258072 systemd-networkd[1494]: cali4c829a5df5e: Gained IPv6LL Sep 16 04:55:12.424608 containerd[1572]: time="2025-09-16T04:55:12.424530005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:12.425293 containerd[1572]: time="2025-09-16T04:55:12.425242283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 16 04:55:12.426468 containerd[1572]: time="2025-09-16T04:55:12.426420765Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:12.428641 containerd[1572]: time="2025-09-16T04:55:12.428607800Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:12.429195 containerd[1572]: time="2025-09-16T04:55:12.429145729Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.518001704s" Sep 16 04:55:12.429195 containerd[1572]: time="2025-09-16T04:55:12.429190824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 04:55:12.431603 containerd[1572]: time="2025-09-16T04:55:12.431253546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 16 04:55:12.432216 containerd[1572]: time="2025-09-16T04:55:12.432195253Z" level=info msg="CreateContainer within sandbox \"4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:55:12.578065 systemd-networkd[1494]: cali4f4eebe0519: Gained IPv6LL Sep 16 04:55:12.936751 containerd[1572]: time="2025-09-16T04:55:12.936627955Z" level=info msg="Container 4554cb533b791164c8f0d180d273869c25dff43571ff4d6589a5b5c3074a9227: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:12.944879 containerd[1572]: time="2025-09-16T04:55:12.944811827Z" level=info msg="CreateContainer within sandbox \"4e148e3815b7967c85d93707da96e55f64a7238684fa5c7cb88cab611114201d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4554cb533b791164c8f0d180d273869c25dff43571ff4d6589a5b5c3074a9227\"" Sep 16 04:55:12.945456 containerd[1572]: 
time="2025-09-16T04:55:12.945417293Z" level=info msg="StartContainer for \"4554cb533b791164c8f0d180d273869c25dff43571ff4d6589a5b5c3074a9227\"" Sep 16 04:55:12.946734 containerd[1572]: time="2025-09-16T04:55:12.946696474Z" level=info msg="connecting to shim 4554cb533b791164c8f0d180d273869c25dff43571ff4d6589a5b5c3074a9227" address="unix:///run/containerd/s/a2c7b52671c3b65eaa4f1b5d36c3d32afef59e99fecceb7fae5fe6a459a0073a" protocol=ttrpc version=3 Sep 16 04:55:13.004163 systemd[1]: Started cri-containerd-4554cb533b791164c8f0d180d273869c25dff43571ff4d6589a5b5c3074a9227.scope - libcontainer container 4554cb533b791164c8f0d180d273869c25dff43571ff4d6589a5b5c3074a9227. Sep 16 04:55:13.073889 containerd[1572]: time="2025-09-16T04:55:13.073369439Z" level=info msg="StartContainer for \"4554cb533b791164c8f0d180d273869c25dff43571ff4d6589a5b5c3074a9227\" returns successfully" Sep 16 04:55:13.627048 kubelet[2723]: I0916 04:55:13.626838 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f449fd945-8b749" podStartSLOduration=30.925142999 podStartE2EDuration="34.626816123s" podCreationTimestamp="2025-09-16 04:54:39 +0000 UTC" firstStartedPulling="2025-09-16 04:55:08.728680412 +0000 UTC m=+46.672495820" lastFinishedPulling="2025-09-16 04:55:12.430353536 +0000 UTC m=+50.374168944" observedRunningTime="2025-09-16 04:55:13.62525871 +0000 UTC m=+51.569074139" watchObservedRunningTime="2025-09-16 04:55:13.626816123 +0000 UTC m=+51.570631521" Sep 16 04:55:14.605770 kubelet[2723]: I0916 04:55:14.605725 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:55:16.413122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714247764.mount: Deactivated successfully. 
Sep 16 04:55:17.561652 containerd[1572]: time="2025-09-16T04:55:17.561565169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:17.567120 containerd[1572]: time="2025-09-16T04:55:17.562504712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 16 04:55:17.567120 containerd[1572]: time="2025-09-16T04:55:17.563664950Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:17.567362 containerd[1572]: time="2025-09-16T04:55:17.566474712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.135192822s" Sep 16 04:55:17.567362 containerd[1572]: time="2025-09-16T04:55:17.567249456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 16 04:55:17.568192 containerd[1572]: time="2025-09-16T04:55:17.568146048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:17.568640 containerd[1572]: time="2025-09-16T04:55:17.568577498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 16 04:55:17.581352 containerd[1572]: time="2025-09-16T04:55:17.581294983Z" level=info msg="CreateContainer within sandbox 
\"1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 16 04:55:17.590449 containerd[1572]: time="2025-09-16T04:55:17.590407755Z" level=info msg="Container be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:17.598667 containerd[1572]: time="2025-09-16T04:55:17.598629654Z" level=info msg="CreateContainer within sandbox \"1e9f1659d6533b595e10e1692cde557e854a274bcab3f3f30a39d58b058610f7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190\"" Sep 16 04:55:17.601648 containerd[1572]: time="2025-09-16T04:55:17.601364035Z" level=info msg="StartContainer for \"be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190\"" Sep 16 04:55:17.602707 containerd[1572]: time="2025-09-16T04:55:17.602681638Z" level=info msg="connecting to shim be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190" address="unix:///run/containerd/s/9cc80b55af876f96921b6783eb9bd18bb3953fd909931db4407c55bc4d5583ea" protocol=ttrpc version=3 Sep 16 04:55:17.644041 systemd[1]: Started cri-containerd-be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190.scope - libcontainer container be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190. 
Sep 16 04:55:17.727169 containerd[1572]: time="2025-09-16T04:55:17.727097065Z" level=info msg="StartContainer for \"be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190\" returns successfully" Sep 16 04:55:17.889128 kubelet[2723]: I0916 04:55:17.888566 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:55:18.636443 containerd[1572]: time="2025-09-16T04:55:18.636399464Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350\" id:\"4217f5bbe3a7d5901da644d41a8665bc7f1620e91f9091c443b82b71b9b6cc87\" pid:5000 exited_at:{seconds:1757998518 nanos:635845575}" Sep 16 04:55:18.897676 kubelet[2723]: I0916 04:55:18.897486 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-clthl" podStartSLOduration=29.104258546 podStartE2EDuration="37.897464707s" podCreationTimestamp="2025-09-16 04:54:41 +0000 UTC" firstStartedPulling="2025-09-16 04:55:08.775242244 +0000 UTC m=+46.719057652" lastFinishedPulling="2025-09-16 04:55:17.568448405 +0000 UTC m=+55.512263813" observedRunningTime="2025-09-16 04:55:18.897161999 +0000 UTC m=+56.840977407" watchObservedRunningTime="2025-09-16 04:55:18.897464707 +0000 UTC m=+56.841280115" Sep 16 04:55:19.703387 containerd[1572]: time="2025-09-16T04:55:19.703332999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190\" id:\"9e8f1f91c6d11ae2039b889179ce2f8e0acc22c3a8c127a1c50e108d57d3cac0\" pid:5030 exit_status:1 exited_at:{seconds:1757998519 nanos:702731340}" Sep 16 04:55:19.882322 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:42200.service - OpenSSH per-connection server daemon (10.0.0.1:42200). 
Sep 16 04:55:19.975993 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 42200 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 04:55:19.978512 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:19.986350 systemd-logind[1508]: New session 8 of user core. Sep 16 04:55:19.992980 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 16 04:55:20.176328 sshd[5051]: Connection closed by 10.0.0.1 port 42200 Sep 16 04:55:20.176599 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:20.182010 systemd-logind[1508]: Session 8 logged out. Waiting for processes to exit. Sep 16 04:55:20.182145 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:42200.service: Deactivated successfully. Sep 16 04:55:20.184412 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 04:55:20.187555 systemd-logind[1508]: Removed session 8. Sep 16 04:55:20.687239 containerd[1572]: time="2025-09-16T04:55:20.687176457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:20.688134 containerd[1572]: time="2025-09-16T04:55:20.688089550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 16 04:55:20.690770 containerd[1572]: time="2025-09-16T04:55:20.690036574Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:20.695909 containerd[1572]: time="2025-09-16T04:55:20.695797694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:20.697637 containerd[1572]: 
time="2025-09-16T04:55:20.697593372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.128980057s" Sep 16 04:55:20.697685 containerd[1572]: time="2025-09-16T04:55:20.697635201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 16 04:55:20.700124 containerd[1572]: time="2025-09-16T04:55:20.700074068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:55:20.713105 containerd[1572]: time="2025-09-16T04:55:20.713045837Z" level=info msg="CreateContainer within sandbox \"dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 16 04:55:20.732979 containerd[1572]: time="2025-09-16T04:55:20.732937091Z" level=info msg="Container 1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:20.754628 containerd[1572]: time="2025-09-16T04:55:20.754574009Z" level=info msg="CreateContainer within sandbox \"dfdea5c4ba8bfe977eb2313af7142a8ecc7996945b3dc90370dbc44cc766d8b2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a\"" Sep 16 04:55:20.755880 containerd[1572]: time="2025-09-16T04:55:20.755779452Z" level=info msg="StartContainer for \"1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a\"" Sep 16 04:55:20.757517 containerd[1572]: time="2025-09-16T04:55:20.757485372Z" level=info msg="connecting to shim 
1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a" address="unix:///run/containerd/s/402f7f25813a78a2ab32ec4f0f361fa5b996324d0296f7d54e3e07c468c88912" protocol=ttrpc version=3 Sep 16 04:55:20.781110 systemd[1]: Started cri-containerd-1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a.scope - libcontainer container 1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a. Sep 16 04:55:20.834910 containerd[1572]: time="2025-09-16T04:55:20.834848104Z" level=info msg="StartContainer for \"1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a\" returns successfully" Sep 16 04:55:20.847824 containerd[1572]: time="2025-09-16T04:55:20.847753910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190\" id:\"d94f9f59adc2920d59e7077636febaa84506147f412a07851785595531620cbf\" pid:5077 exit_status:1 exited_at:{seconds:1757998520 nanos:847381511}" Sep 16 04:55:21.088188 containerd[1572]: time="2025-09-16T04:55:21.088125943Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:21.089071 containerd[1572]: time="2025-09-16T04:55:21.089035720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 16 04:55:21.091390 containerd[1572]: time="2025-09-16T04:55:21.091325656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 391.210261ms" Sep 16 04:55:21.091390 containerd[1572]: time="2025-09-16T04:55:21.091386220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns 
image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 16 04:55:21.092335 containerd[1572]: time="2025-09-16T04:55:21.092296618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 16 04:55:21.093383 containerd[1572]: time="2025-09-16T04:55:21.093342410Z" level=info msg="CreateContainer within sandbox \"a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 16 04:55:21.102400 containerd[1572]: time="2025-09-16T04:55:21.102347216Z" level=info msg="Container c6a37210a9488c226f8df3e090f7a0e172e6ea624f52a99aa01e0cb66c861f92: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:21.142590 containerd[1572]: time="2025-09-16T04:55:21.142545180Z" level=info msg="CreateContainer within sandbox \"a6e093c58137223b4da37bddf58ee1841c2a577d33e034ed4f9ebae4f45fedea\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c6a37210a9488c226f8df3e090f7a0e172e6ea624f52a99aa01e0cb66c861f92\"" Sep 16 04:55:21.143001 containerd[1572]: time="2025-09-16T04:55:21.142974125Z" level=info msg="StartContainer for \"c6a37210a9488c226f8df3e090f7a0e172e6ea624f52a99aa01e0cb66c861f92\"" Sep 16 04:55:21.144064 containerd[1572]: time="2025-09-16T04:55:21.144032310Z" level=info msg="connecting to shim c6a37210a9488c226f8df3e090f7a0e172e6ea624f52a99aa01e0cb66c861f92" address="unix:///run/containerd/s/b9935df4ff1b6f89e6b5bbdd67d475f420ced6d231d2f908f1cca25362e9b4aa" protocol=ttrpc version=3 Sep 16 04:55:21.172993 systemd[1]: Started cri-containerd-c6a37210a9488c226f8df3e090f7a0e172e6ea624f52a99aa01e0cb66c861f92.scope - libcontainer container c6a37210a9488c226f8df3e090f7a0e172e6ea624f52a99aa01e0cb66c861f92. 
Sep 16 04:55:21.219789 containerd[1572]: time="2025-09-16T04:55:21.219723660Z" level=info msg="StartContainer for \"c6a37210a9488c226f8df3e090f7a0e172e6ea624f52a99aa01e0cb66c861f92\" returns successfully" Sep 16 04:55:21.660884 kubelet[2723]: I0916 04:55:21.660090 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f449fd945-sk5l6" podStartSLOduration=32.072407554 podStartE2EDuration="42.660065529s" podCreationTimestamp="2025-09-16 04:54:39 +0000 UTC" firstStartedPulling="2025-09-16 04:55:10.504460117 +0000 UTC m=+48.448275525" lastFinishedPulling="2025-09-16 04:55:21.092118092 +0000 UTC m=+59.035933500" observedRunningTime="2025-09-16 04:55:21.644949215 +0000 UTC m=+59.588764643" watchObservedRunningTime="2025-09-16 04:55:21.660065529 +0000 UTC m=+59.603880937" Sep 16 04:55:21.660884 kubelet[2723]: I0916 04:55:21.660570 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76b9fb9cbc-gqtzj" podStartSLOduration=28.841536965 podStartE2EDuration="39.660560437s" podCreationTimestamp="2025-09-16 04:54:42 +0000 UTC" firstStartedPulling="2025-09-16 04:55:09.87976856 +0000 UTC m=+47.823583968" lastFinishedPulling="2025-09-16 04:55:20.698792032 +0000 UTC m=+58.642607440" observedRunningTime="2025-09-16 04:55:21.659387817 +0000 UTC m=+59.603203225" watchObservedRunningTime="2025-09-16 04:55:21.660560437 +0000 UTC m=+59.604375845" Sep 16 04:55:21.690299 containerd[1572]: time="2025-09-16T04:55:21.690240216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a\" id:\"78a7f8e6b210fa3be4e82cdcf2c9e80bc14c6f4d14eeb1cd5c4a334c860105d5\" pid:5185 exited_at:{seconds:1757998521 nanos:689892212}" Sep 16 04:55:22.634830 kubelet[2723]: I0916 04:55:22.634777 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:55:22.825626 containerd[1572]: 
time="2025-09-16T04:55:22.825559050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:22.826720 containerd[1572]: time="2025-09-16T04:55:22.826688667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 16 04:55:22.828097 containerd[1572]: time="2025-09-16T04:55:22.828045222Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:22.830228 containerd[1572]: time="2025-09-16T04:55:22.830197897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:22.830867 containerd[1572]: time="2025-09-16T04:55:22.830823465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.738493705s" Sep 16 04:55:22.830927 containerd[1572]: time="2025-09-16T04:55:22.830881344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 16 04:55:22.833108 containerd[1572]: time="2025-09-16T04:55:22.833075727Z" level=info msg="CreateContainer within sandbox \"84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 16 04:55:22.844684 containerd[1572]: time="2025-09-16T04:55:22.844639417Z" level=info msg="Container 
a7e61a802717c436702de9cf8ce08281718db95bf4b48bf92aaa74391c444cce: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:22.856236 containerd[1572]: time="2025-09-16T04:55:22.856197055Z" level=info msg="CreateContainer within sandbox \"84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a7e61a802717c436702de9cf8ce08281718db95bf4b48bf92aaa74391c444cce\"" Sep 16 04:55:22.856622 containerd[1572]: time="2025-09-16T04:55:22.856592490Z" level=info msg="StartContainer for \"a7e61a802717c436702de9cf8ce08281718db95bf4b48bf92aaa74391c444cce\"" Sep 16 04:55:22.858033 containerd[1572]: time="2025-09-16T04:55:22.858004861Z" level=info msg="connecting to shim a7e61a802717c436702de9cf8ce08281718db95bf4b48bf92aaa74391c444cce" address="unix:///run/containerd/s/77d2aac5720331edd94284810c27193c9769e48b7c9c21100445e82e4149ce9f" protocol=ttrpc version=3 Sep 16 04:55:22.889998 systemd[1]: Started cri-containerd-a7e61a802717c436702de9cf8ce08281718db95bf4b48bf92aaa74391c444cce.scope - libcontainer container a7e61a802717c436702de9cf8ce08281718db95bf4b48bf92aaa74391c444cce. 
Sep 16 04:55:22.960126 containerd[1572]: time="2025-09-16T04:55:22.960049318Z" level=info msg="StartContainer for \"a7e61a802717c436702de9cf8ce08281718db95bf4b48bf92aaa74391c444cce\" returns successfully" Sep 16 04:55:22.962182 containerd[1572]: time="2025-09-16T04:55:22.962137180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 16 04:55:24.416739 containerd[1572]: time="2025-09-16T04:55:24.416666914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:24.417676 containerd[1572]: time="2025-09-16T04:55:24.417654940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 16 04:55:24.419199 containerd[1572]: time="2025-09-16T04:55:24.419179014Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:24.421298 containerd[1572]: time="2025-09-16T04:55:24.421269417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:55:24.421786 containerd[1572]: time="2025-09-16T04:55:24.421763945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.459504954s" Sep 16 04:55:24.421847 containerd[1572]: time="2025-09-16T04:55:24.421792020Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 16 04:55:24.424022 containerd[1572]: time="2025-09-16T04:55:24.423975562Z" level=info msg="CreateContainer within sandbox \"84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 16 04:55:24.431520 containerd[1572]: time="2025-09-16T04:55:24.431452286Z" level=info msg="Container 6a0ef14c897f6614e8f15f6fa608ed63526e1e971c8026adc2bf5d8938cac156: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:55:24.441518 containerd[1572]: time="2025-09-16T04:55:24.441469654Z" level=info msg="CreateContainer within sandbox \"84cb44bc7234cac27b1508bbbe6f503cbca436f94e761dac54ce47c1cdeaf8c9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6a0ef14c897f6614e8f15f6fa608ed63526e1e971c8026adc2bf5d8938cac156\"" Sep 16 04:55:24.442070 containerd[1572]: time="2025-09-16T04:55:24.442029480Z" level=info msg="StartContainer for \"6a0ef14c897f6614e8f15f6fa608ed63526e1e971c8026adc2bf5d8938cac156\"" Sep 16 04:55:24.443609 containerd[1572]: time="2025-09-16T04:55:24.443582510Z" level=info msg="connecting to shim 6a0ef14c897f6614e8f15f6fa608ed63526e1e971c8026adc2bf5d8938cac156" address="unix:///run/containerd/s/77d2aac5720331edd94284810c27193c9769e48b7c9c21100445e82e4149ce9f" protocol=ttrpc version=3 Sep 16 04:55:24.467017 systemd[1]: Started cri-containerd-6a0ef14c897f6614e8f15f6fa608ed63526e1e971c8026adc2bf5d8938cac156.scope - libcontainer container 6a0ef14c897f6614e8f15f6fa608ed63526e1e971c8026adc2bf5d8938cac156. 
Sep 16 04:55:24.512520 containerd[1572]: time="2025-09-16T04:55:24.512451863Z" level=info msg="StartContainer for \"6a0ef14c897f6614e8f15f6fa608ed63526e1e971c8026adc2bf5d8938cac156\" returns successfully" Sep 16 04:55:24.736347 kubelet[2723]: I0916 04:55:24.735841 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ld4qv" podStartSLOduration=29.755001016 podStartE2EDuration="42.735667568s" podCreationTimestamp="2025-09-16 04:54:42 +0000 UTC" firstStartedPulling="2025-09-16 04:55:11.44201429 +0000 UTC m=+49.385829698" lastFinishedPulling="2025-09-16 04:55:24.422680832 +0000 UTC m=+62.366496250" observedRunningTime="2025-09-16 04:55:24.735087413 +0000 UTC m=+62.678902821" watchObservedRunningTime="2025-09-16 04:55:24.735667568 +0000 UTC m=+62.679482976" Sep 16 04:55:25.050409 containerd[1572]: time="2025-09-16T04:55:25.050356878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a\" id:\"31135c5c36c0d94f827edc25bffaf55a8902a7ab7bed6b365e9961fcaf1ef9a2\" pid:5290 exited_at:{seconds:1757998525 nanos:50025717}" Sep 16 04:55:25.094539 containerd[1572]: time="2025-09-16T04:55:25.094469403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190\" id:\"7acf36c0be2f6aa5190d3da22ff172e14e7e8b70e9a630584b212b063f50861c\" pid:5309 exit_status:1 exited_at:{seconds:1757998525 nanos:94055531}" Sep 16 04:55:25.192433 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:41474.service - OpenSSH per-connection server daemon (10.0.0.1:41474). 
Sep 16 04:55:25.272852 sshd[5325]: Accepted publickey for core from 10.0.0.1 port 41474 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 04:55:25.275003 sshd-session[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:25.282383 systemd-logind[1508]: New session 9 of user core. Sep 16 04:55:25.287296 kubelet[2723]: I0916 04:55:25.287248 2723 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 16 04:55:25.287296 kubelet[2723]: I0916 04:55:25.287309 2723 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 16 04:55:25.289036 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 04:55:25.422309 sshd[5328]: Connection closed by 10.0.0.1 port 41474 Sep 16 04:55:25.422621 sshd-session[5325]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:25.429633 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:41474.service: Deactivated successfully. Sep 16 04:55:25.431911 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:55:25.434094 systemd-logind[1508]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:55:25.436053 systemd-logind[1508]: Removed session 9. Sep 16 04:55:30.434876 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:41594.service - OpenSSH per-connection server daemon (10.0.0.1:41594). Sep 16 04:55:30.489740 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 41594 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 04:55:30.491256 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:30.495516 systemd-logind[1508]: New session 10 of user core. Sep 16 04:55:30.505000 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 16 04:55:30.617005 sshd[5348]: Connection closed by 10.0.0.1 port 41594 Sep 16 04:55:30.617388 sshd-session[5345]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:30.629967 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:41594.service: Deactivated successfully. Sep 16 04:55:30.631943 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:55:30.632740 systemd-logind[1508]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:55:30.635745 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:41610.service - OpenSSH per-connection server daemon (10.0.0.1:41610). Sep 16 04:55:30.636511 systemd-logind[1508]: Removed session 10. Sep 16 04:55:30.696329 sshd[5362]: Accepted publickey for core from 10.0.0.1 port 41610 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 04:55:30.698410 sshd-session[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:30.703058 systemd-logind[1508]: New session 11 of user core. Sep 16 04:55:30.713999 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:55:30.868031 sshd[5365]: Connection closed by 10.0.0.1 port 41610 Sep 16 04:55:30.868616 sshd-session[5362]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:30.883295 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:41610.service: Deactivated successfully. Sep 16 04:55:30.885841 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:55:30.888223 systemd-logind[1508]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:55:30.892630 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:41614.service - OpenSSH per-connection server daemon (10.0.0.1:41614). Sep 16 04:55:30.894123 systemd-logind[1508]: Removed session 11. 
Sep 16 04:55:30.959987 sshd[5376]: Accepted publickey for core from 10.0.0.1 port 41614 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 04:55:30.961693 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:30.966345 systemd-logind[1508]: New session 12 of user core. Sep 16 04:55:30.976044 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:55:31.093591 sshd[5379]: Connection closed by 10.0.0.1 port 41614 Sep 16 04:55:31.093925 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:31.099211 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:41614.service: Deactivated successfully. Sep 16 04:55:31.101510 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:55:31.102436 systemd-logind[1508]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:55:31.103690 systemd-logind[1508]: Removed session 12. Sep 16 04:55:32.177141 kubelet[2723]: E0916 04:55:32.176648 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 16 04:55:36.111160 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:41624.service - OpenSSH per-connection server daemon (10.0.0.1:41624). Sep 16 04:55:36.176233 sshd[5396]: Accepted publickey for core from 10.0.0.1 port 41624 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 04:55:36.178297 sshd-session[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:36.183338 systemd-logind[1508]: New session 13 of user core. Sep 16 04:55:36.194036 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 16 04:55:36.306394 sshd[5399]: Connection closed by 10.0.0.1 port 41624 Sep 16 04:55:36.306758 sshd-session[5396]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:36.311097 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:41624.service: Deactivated successfully. Sep 16 04:55:36.313366 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:55:36.314282 systemd-logind[1508]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:55:36.315625 systemd-logind[1508]: Removed session 13. Sep 16 04:55:41.324934 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:54930.service - OpenSSH per-connection server daemon (10.0.0.1:54930). Sep 16 04:55:41.485671 sshd[5417]: Accepted publickey for core from 10.0.0.1 port 54930 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 04:55:41.488088 sshd-session[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:41.493587 systemd-logind[1508]: New session 14 of user core. Sep 16 04:55:41.507108 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 04:55:41.658003 sshd[5420]: Connection closed by 10.0.0.1 port 54930 Sep 16 04:55:41.658520 sshd-session[5417]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:41.665539 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:54930.service: Deactivated successfully. Sep 16 04:55:41.668050 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:55:41.668916 systemd-logind[1508]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:55:41.670361 systemd-logind[1508]: Removed session 14. Sep 16 04:55:46.671972 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:54936.service - OpenSSH per-connection server daemon (10.0.0.1:54936). 
Sep 16 04:55:46.767720 sshd[5439]: Accepted publickey for core from 10.0.0.1 port 54936 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ Sep 16 04:55:46.769822 sshd-session[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:55:46.775035 systemd-logind[1508]: New session 15 of user core. Sep 16 04:55:46.786079 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:55:46.997936 sshd[5442]: Connection closed by 10.0.0.1 port 54936 Sep 16 04:55:46.998206 sshd-session[5439]: pam_unix(sshd:session): session closed for user core Sep 16 04:55:47.005701 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:54936.service: Deactivated successfully. Sep 16 04:55:47.008827 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:55:47.009824 systemd-logind[1508]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:55:47.012211 systemd-logind[1508]: Removed session 15. Sep 16 04:55:48.644028 containerd[1572]: time="2025-09-16T04:55:48.643954642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e74667ab5fe313fc1f609545caa50634857912369eae8b8ab64bddb061776350\" id:\"14d338454e52a7defe5e23d4a74160019d00ea1b7579d100cf1adc31e0e8b387\" pid:5467 exited_at:{seconds:1757998548 nanos:643556372}" Sep 16 04:55:49.661296 containerd[1572]: time="2025-09-16T04:55:49.661228554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a\" id:\"21c4a8d950a6a92aaced1cc4b85b01194c0bfdbdeb1a5bed39da6b816a46197f\" pid:5491 exited_at:{seconds:1757998549 nanos:661000309}" Sep 16 04:55:50.108814 kubelet[2723]: I0916 04:55:50.108284 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:55:52.011448 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:38800.service - OpenSSH per-connection server daemon (10.0.0.1:38800). 
Sep 16 04:55:52.081469 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 38800 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:55:52.083384 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:52.088794 systemd-logind[1508]: New session 16 of user core.
Sep 16 04:55:52.096016 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 16 04:55:52.274733 sshd[5510]: Connection closed by 10.0.0.1 port 38800
Sep 16 04:55:52.275309 sshd-session[5507]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:52.284052 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:38800.service: Deactivated successfully.
Sep 16 04:55:52.287620 systemd[1]: session-16.scope: Deactivated successfully.
Sep 16 04:55:52.288617 systemd-logind[1508]: Session 16 logged out. Waiting for processes to exit.
Sep 16 04:55:52.293193 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:38808.service - OpenSSH per-connection server daemon (10.0.0.1:38808).
Sep 16 04:55:52.294527 systemd-logind[1508]: Removed session 16.
Sep 16 04:55:52.345932 sshd[5524]: Accepted publickey for core from 10.0.0.1 port 38808 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:55:52.349523 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:52.359066 systemd-logind[1508]: New session 17 of user core.
Sep 16 04:55:52.367219 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 16 04:55:52.672789 sshd[5527]: Connection closed by 10.0.0.1 port 38808
Sep 16 04:55:52.674239 sshd-session[5524]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:52.683260 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:38808.service: Deactivated successfully.
Sep 16 04:55:52.685412 systemd[1]: session-17.scope: Deactivated successfully.
Sep 16 04:55:52.686520 systemd-logind[1508]: Session 17 logged out. Waiting for processes to exit.
Sep 16 04:55:52.691261 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:38822.service - OpenSSH per-connection server daemon (10.0.0.1:38822).
Sep 16 04:55:52.692481 systemd-logind[1508]: Removed session 17.
Sep 16 04:55:52.767882 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 38822 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:55:52.769264 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:52.773902 systemd-logind[1508]: New session 18 of user core.
Sep 16 04:55:52.789091 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 16 04:55:53.176441 kubelet[2723]: E0916 04:55:53.176189 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:55:54.179888 kubelet[2723]: E0916 04:55:54.178729 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:55:54.491760 sshd[5541]: Connection closed by 10.0.0.1 port 38822
Sep 16 04:55:54.493245 sshd-session[5538]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:54.506899 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:38822.service: Deactivated successfully.
Sep 16 04:55:54.510529 systemd[1]: session-18.scope: Deactivated successfully.
Sep 16 04:55:54.510837 systemd[1]: session-18.scope: Consumed 663ms CPU time, 72.6M memory peak.
Sep 16 04:55:54.515365 systemd-logind[1508]: Session 18 logged out. Waiting for processes to exit.
Sep 16 04:55:54.516979 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:38838.service - OpenSSH per-connection server daemon (10.0.0.1:38838).
Sep 16 04:55:54.519366 systemd-logind[1508]: Removed session 18.
Sep 16 04:55:54.576292 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 38838 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:55:54.578242 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:54.583749 systemd-logind[1508]: New session 19 of user core.
Sep 16 04:55:54.588144 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 16 04:55:55.035170 sshd[5565]: Connection closed by 10.0.0.1 port 38838
Sep 16 04:55:55.036708 sshd-session[5562]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:55.042052 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:38852.service - OpenSSH per-connection server daemon (10.0.0.1:38852).
Sep 16 04:55:55.050617 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:38838.service: Deactivated successfully.
Sep 16 04:55:55.053925 systemd[1]: session-19.scope: Deactivated successfully.
Sep 16 04:55:55.055378 systemd-logind[1508]: Session 19 logged out. Waiting for processes to exit.
Sep 16 04:55:55.057213 systemd-logind[1508]: Removed session 19.
Sep 16 04:55:55.108221 containerd[1572]: time="2025-09-16T04:55:55.108172972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ab08c7f79d5af59df8d60049bf7545584e3aeda7a519411747c0aa2b2f6c52a\" id:\"dd09fe977105b70895519c8eee49c213de932d8cbfcea9e31483b4669fdf9713\" pid:5599 exited_at:{seconds:1757998555 nanos:107587327}"
Sep 16 04:55:55.115520 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 38852 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:55:55.115369 sshd-session[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:55:55.120134 systemd-logind[1508]: New session 20 of user core.
Sep 16 04:55:55.129282 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 16 04:55:55.227342 containerd[1572]: time="2025-09-16T04:55:55.227270712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190\" id:\"3c6944eb9b84edb09d95ddcd037b91da0062f0cd2b692277d88302e9a4deb70e\" pid:5606 exited_at:{seconds:1757998555 nanos:226159727}"
Sep 16 04:55:55.315021 sshd[5628]: Connection closed by 10.0.0.1 port 38852
Sep 16 04:55:55.315707 sshd-session[5613]: pam_unix(sshd:session): session closed for user core
Sep 16 04:55:55.320449 systemd-logind[1508]: Session 20 logged out. Waiting for processes to exit.
Sep 16 04:55:55.320834 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:38852.service: Deactivated successfully.
Sep 16 04:55:55.324083 systemd[1]: session-20.scope: Deactivated successfully.
Sep 16 04:55:55.326762 systemd-logind[1508]: Removed session 20.
Sep 16 04:56:00.177193 kubelet[2723]: E0916 04:56:00.177145 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:56:00.335412 systemd[1]: Started sshd@20-10.0.0.73:22-10.0.0.1:40292.service - OpenSSH per-connection server daemon (10.0.0.1:40292).
Sep 16 04:56:00.397558 sshd[5650]: Accepted publickey for core from 10.0.0.1 port 40292 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:56:00.399649 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:00.405177 systemd-logind[1508]: New session 21 of user core.
Sep 16 04:56:00.420248 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 16 04:56:00.554332 sshd[5653]: Connection closed by 10.0.0.1 port 40292
Sep 16 04:56:00.555200 sshd-session[5650]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:00.560323 systemd-logind[1508]: Session 21 logged out. Waiting for processes to exit.
Sep 16 04:56:00.560661 systemd[1]: sshd@20-10.0.0.73:22-10.0.0.1:40292.service: Deactivated successfully.
Sep 16 04:56:00.563312 systemd[1]: session-21.scope: Deactivated successfully.
Sep 16 04:56:00.566314 systemd-logind[1508]: Removed session 21.
Sep 16 04:56:04.608952 containerd[1572]: time="2025-09-16T04:56:04.608893055Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be6297ced502b028bdc476e8ed955638f8e03422d2214b4a34ea506c40346190\" id:\"212025c869e932341ed2e808a40809f45a030029026417860a7facb76edc2a51\" pid:5676 exited_at:{seconds:1757998564 nanos:608602353}"
Sep 16 04:56:05.569514 systemd[1]: Started sshd@21-10.0.0.73:22-10.0.0.1:40300.service - OpenSSH per-connection server daemon (10.0.0.1:40300).
Sep 16 04:56:05.628650 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 40300 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:56:05.630544 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:05.635273 systemd-logind[1508]: New session 22 of user core.
Sep 16 04:56:05.646015 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 16 04:56:05.756625 sshd[5692]: Connection closed by 10.0.0.1 port 40300
Sep 16 04:56:05.756982 sshd-session[5689]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:05.761791 systemd[1]: sshd@21-10.0.0.73:22-10.0.0.1:40300.service: Deactivated successfully.
Sep 16 04:56:05.764100 systemd[1]: session-22.scope: Deactivated successfully.
Sep 16 04:56:05.765071 systemd-logind[1508]: Session 22 logged out. Waiting for processes to exit.
Sep 16 04:56:05.766229 systemd-logind[1508]: Removed session 22.
Sep 16 04:56:10.774790 systemd[1]: Started sshd@22-10.0.0.73:22-10.0.0.1:53470.service - OpenSSH per-connection server daemon (10.0.0.1:53470).
Sep 16 04:56:10.826978 sshd[5705]: Accepted publickey for core from 10.0.0.1 port 53470 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:56:10.828990 sshd-session[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:10.833682 systemd-logind[1508]: New session 23 of user core.
Sep 16 04:56:10.837017 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 16 04:56:10.960596 sshd[5708]: Connection closed by 10.0.0.1 port 53470
Sep 16 04:56:10.960984 sshd-session[5705]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:10.966113 systemd[1]: sshd@22-10.0.0.73:22-10.0.0.1:53470.service: Deactivated successfully.
Sep 16 04:56:10.968458 systemd[1]: session-23.scope: Deactivated successfully.
Sep 16 04:56:10.969585 systemd-logind[1508]: Session 23 logged out. Waiting for processes to exit.
Sep 16 04:56:10.970798 systemd-logind[1508]: Removed session 23.
Sep 16 04:56:11.176494 kubelet[2723]: E0916 04:56:11.176361 2723 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 16 04:56:15.981041 systemd[1]: Started sshd@23-10.0.0.73:22-10.0.0.1:53472.service - OpenSSH per-connection server daemon (10.0.0.1:53472).
Sep 16 04:56:16.039331 sshd[5721]: Accepted publickey for core from 10.0.0.1 port 53472 ssh2: RSA SHA256:FqAmbe/raJqjH84jy2s7C9vQJVEvQZjSc2lIigyvOSQ
Sep 16 04:56:16.041374 sshd-session[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:56:16.047001 systemd-logind[1508]: New session 24 of user core.
Sep 16 04:56:16.055045 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 16 04:56:16.168615 sshd[5724]: Connection closed by 10.0.0.1 port 53472
Sep 16 04:56:16.170541 sshd-session[5721]: pam_unix(sshd:session): session closed for user core
Sep 16 04:56:16.175785 systemd[1]: sshd@23-10.0.0.73:22-10.0.0.1:53472.service: Deactivated successfully.
Sep 16 04:56:16.180721 systemd[1]: session-24.scope: Deactivated successfully.
Sep 16 04:56:16.181627 systemd-logind[1508]: Session 24 logged out. Waiting for processes to exit.
Sep 16 04:56:16.183932 systemd-logind[1508]: Removed session 24.