Oct 13 05:43:06.735986 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 13 03:31:29 -00 2025
Oct 13 05:43:06.736009 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d
Oct 13 05:43:06.736022 kernel: BIOS-provided physical RAM map:
Oct 13 05:43:06.736029 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 13 05:43:06.736036 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 13 05:43:06.736043 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 13 05:43:06.736051 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 13 05:43:06.736058 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Oct 13 05:43:06.736068 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 13 05:43:06.736075 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 13 05:43:06.736084 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 13 05:43:06.736091 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 13 05:43:06.736098 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 13 05:43:06.736105 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 13 05:43:06.736113 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 13 05:43:06.736123 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 13 05:43:06.736133 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 13 05:43:06.736140 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 05:43:06.736148 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 13 05:43:06.736155 kernel: NX (Execute Disable) protection: active
Oct 13 05:43:06.736162 kernel: APIC: Static calls initialized
Oct 13 05:43:06.736170 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Oct 13 05:43:06.736177 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Oct 13 05:43:06.736185 kernel: extended physical RAM map:
Oct 13 05:43:06.736194 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Oct 13 05:43:06.736202 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Oct 13 05:43:06.736217 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Oct 13 05:43:06.736224 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Oct 13 05:43:06.736231 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Oct 13 05:43:06.736239 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Oct 13 05:43:06.736246 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Oct 13 05:43:06.736254 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Oct 13 05:43:06.736261 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Oct 13 05:43:06.736268 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Oct 13 05:43:06.736276 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Oct 13 05:43:06.736285 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Oct 13 05:43:06.736312 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Oct 13 05:43:06.736319 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Oct 13 05:43:06.736326 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Oct 13 05:43:06.736334 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Oct 13 05:43:06.736346 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Oct 13 05:43:06.736355 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 13 05:43:06.736363 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 05:43:06.736371 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 13 05:43:06.736379 kernel: efi: EFI v2.7 by EDK II
Oct 13 05:43:06.736387 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Oct 13 05:43:06.736394 kernel: random: crng init done
Oct 13 05:43:06.736402 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Oct 13 05:43:06.736410 kernel: secureboot: Secure boot enabled
Oct 13 05:43:06.736420 kernel: SMBIOS 2.8 present.
Oct 13 05:43:06.736427 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 13 05:43:06.736435 kernel: DMI: Memory slots populated: 1/1
Oct 13 05:43:06.736443 kernel: Hypervisor detected: KVM
Oct 13 05:43:06.736450 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 13 05:43:06.736458 kernel: kvm-clock: using sched offset of 6357402177 cycles
Oct 13 05:43:06.736467 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 13 05:43:06.736475 kernel: tsc: Detected 2794.746 MHz processor
Oct 13 05:43:06.736484 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 05:43:06.736494 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 05:43:06.736502 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Oct 13 05:43:06.736510 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 13 05:43:06.736524 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 13 05:43:06.736532 kernel: Using GB pages for direct mapping
Oct 13 05:43:06.736542 kernel: ACPI: Early table checksum verification disabled
Oct 13 05:43:06.736550 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Oct 13 05:43:06.736561 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 13 05:43:06.736569 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:43:06.736577 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:43:06.736585 kernel: ACPI: FACS 0x000000009BBDD000 000040
Oct 13 05:43:06.736594 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:43:06.736602 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:43:06.736610 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:43:06.736620 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:43:06.736628 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 13 05:43:06.736636 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Oct 13 05:43:06.736644 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Oct 13 05:43:06.736652 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Oct 13 05:43:06.736660 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Oct 13 05:43:06.736668 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Oct 13 05:43:06.736676 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Oct 13 05:43:06.736687 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Oct 13 05:43:06.736695 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Oct 13 05:43:06.736703 kernel: No NUMA configuration found
Oct 13 05:43:06.736711 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Oct 13 05:43:06.736719 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Oct 13 05:43:06.736727 kernel: Zone ranges:
Oct 13 05:43:06.736735 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 05:43:06.736746 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Oct 13 05:43:06.736754 kernel: Normal empty
Oct 13 05:43:06.736762 kernel: Device empty
Oct 13 05:43:06.736770 kernel: Movable zone start for each node
Oct 13 05:43:06.736778 kernel: Early memory node ranges
Oct 13 05:43:06.736786 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Oct 13 05:43:06.736798 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Oct 13 05:43:06.736806 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Oct 13 05:43:06.736816 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Oct 13 05:43:06.736824 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Oct 13 05:43:06.736832 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Oct 13 05:43:06.736840 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:43:06.737028 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Oct 13 05:43:06.737041 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 13 05:43:06.737049 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 13 05:43:06.737061 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 13 05:43:06.737069 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Oct 13 05:43:06.737077 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 13 05:43:06.737086 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 13 05:43:06.737094 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 13 05:43:06.737102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 13 05:43:06.737110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 13 05:43:06.737123 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 13 05:43:06.737132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 13 05:43:06.737140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 13 05:43:06.737148 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 05:43:06.737156 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 13 05:43:06.737164 kernel: TSC deadline timer available
Oct 13 05:43:06.737172 kernel: CPU topo: Max. logical packages: 1
Oct 13 05:43:06.737183 kernel: CPU topo: Max. logical dies: 1
Oct 13 05:43:06.737191 kernel: CPU topo: Max. dies per package: 1
Oct 13 05:43:06.737214 kernel: CPU topo: Max. threads per core: 1
Oct 13 05:43:06.737225 kernel: CPU topo: Num. cores per package: 4
Oct 13 05:43:06.737237 kernel: CPU topo: Num. threads per package: 4
Oct 13 05:43:06.737253 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 13 05:43:06.737267 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 13 05:43:06.737275 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 13 05:43:06.737283 kernel: kvm-guest: setup PV sched yield
Oct 13 05:43:06.737306 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 13 05:43:06.737322 kernel: Booting paravirtualized kernel on KVM
Oct 13 05:43:06.737334 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 05:43:06.737342 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 13 05:43:06.737351 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 13 05:43:06.737362 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 13 05:43:06.737370 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 13 05:43:06.737378 kernel: kvm-guest: PV spinlocks enabled
Oct 13 05:43:06.737387 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 13 05:43:06.737397 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d
Oct 13 05:43:06.737409 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 05:43:06.737420 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 05:43:06.737428 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 05:43:06.737437 kernel: Fallback order for Node 0: 0
Oct 13 05:43:06.737445 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Oct 13 05:43:06.737453 kernel: Policy zone: DMA32
Oct 13 05:43:06.737462 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 05:43:06.737470 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 13 05:43:06.737483 kernel: ftrace: allocating 40210 entries in 158 pages
Oct 13 05:43:06.737493 kernel: ftrace: allocated 158 pages with 5 groups
Oct 13 05:43:06.737502 kernel: Dynamic Preempt: voluntary
Oct 13 05:43:06.737510 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 05:43:06.737519 kernel: rcu: RCU event tracing is enabled.
Oct 13 05:43:06.737528 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 13 05:43:06.737536 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 05:43:06.737545 kernel: Rude variant of Tasks RCU enabled.
Oct 13 05:43:06.737555 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 05:43:06.737564 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 05:43:06.737572 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 13 05:43:06.737580 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:43:06.737589 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:43:06.737600 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:43:06.737609 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 13 05:43:06.737620 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 05:43:06.737628 kernel: Console: colour dummy device 80x25
Oct 13 05:43:06.737636 kernel: printk: legacy console [ttyS0] enabled
Oct 13 05:43:06.737645 kernel: ACPI: Core revision 20240827
Oct 13 05:43:06.737653 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 13 05:43:06.737662 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 05:43:06.737670 kernel: x2apic enabled
Oct 13 05:43:06.737680 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 05:43:06.737689 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 13 05:43:06.737697 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 13 05:43:06.737705 kernel: kvm-guest: setup PV IPIs
Oct 13 05:43:06.737714 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 13 05:43:06.737722 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Oct 13 05:43:06.737731 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Oct 13 05:43:06.737741 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 13 05:43:06.737750 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 13 05:43:06.737758 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 13 05:43:06.737769 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 05:43:06.737777 kernel: Spectre V2 : Mitigation: Retpolines
Oct 13 05:43:06.737786 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 13 05:43:06.737794 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 13 05:43:06.737805 kernel: active return thunk: retbleed_return_thunk
Oct 13 05:43:06.737813 kernel: RETBleed: Mitigation: untrained return thunk
Oct 13 05:43:06.737821 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 13 05:43:06.737830 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 13 05:43:06.737838 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 13 05:43:06.737848 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 13 05:43:06.737856 kernel: active return thunk: srso_return_thunk
Oct 13 05:43:06.737866 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 13 05:43:06.737875 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 05:43:06.737883 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 05:43:06.737892 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 05:43:06.737900 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 13 05:43:06.737909 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 13 05:43:06.737917 kernel: Freeing SMP alternatives memory: 32K
Oct 13 05:43:06.737930 kernel: pid_max: default: 32768 minimum: 301
Oct 13 05:43:06.737941 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 05:43:06.737951 kernel: landlock: Up and running.
Oct 13 05:43:06.737962 kernel: SELinux: Initializing.
Oct 13 05:43:06.737975 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 05:43:06.737987 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 05:43:06.737998 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 13 05:43:06.738012 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 13 05:43:06.738022 kernel: ... version: 0
Oct 13 05:43:06.738036 kernel: ... bit width: 48
Oct 13 05:43:06.738046 kernel: ... generic registers: 6
Oct 13 05:43:06.738057 kernel: ... value mask: 0000ffffffffffff
Oct 13 05:43:06.738067 kernel: ... max period: 00007fffffffffff
Oct 13 05:43:06.738078 kernel: ... fixed-purpose events: 0
Oct 13 05:43:06.738091 kernel: ... event mask: 000000000000003f
Oct 13 05:43:06.738101 kernel: signal: max sigframe size: 1776
Oct 13 05:43:06.738111 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 05:43:06.738121 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 05:43:06.738130 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 05:43:06.738138 kernel: smp: Bringing up secondary CPUs ...
Oct 13 05:43:06.738146 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 05:43:06.738157 kernel: .... node #0, CPUs: #1 #2 #3
Oct 13 05:43:06.738165 kernel: smp: Brought up 1 node, 4 CPUs
Oct 13 05:43:06.738174 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Oct 13 05:43:06.738182 kernel: Memory: 2439932K/2552216K available (14336K kernel code, 2450K rwdata, 10012K rodata, 24532K init, 1684K bss, 106344K reserved, 0K cma-reserved)
Oct 13 05:43:06.738191 kernel: devtmpfs: initialized
Oct 13 05:43:06.738199 kernel: x86/mm: Memory block size: 128MB
Oct 13 05:43:06.738215 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Oct 13 05:43:06.738227 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Oct 13 05:43:06.738235 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 05:43:06.738244 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 13 05:43:06.738252 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 05:43:06.738260 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 05:43:06.738268 kernel: audit: initializing netlink subsys (disabled)
Oct 13 05:43:06.738277 kernel: audit: type=2000 audit(1760334183.438:1): state=initialized audit_enabled=0 res=1
Oct 13 05:43:06.738288 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 05:43:06.738313 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 05:43:06.738329 kernel: cpuidle: using governor menu
Oct 13 05:43:06.738338 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 05:43:06.738346 kernel: dca service started, version 1.12.1
Oct 13 05:43:06.738354 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 13 05:43:06.738363 kernel: PCI: Using configuration type 1 for base access
Oct 13 05:43:06.738374 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 05:43:06.738383 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 05:43:06.738392 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 05:43:06.738403 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 05:43:06.738412 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 05:43:06.738420 kernel: ACPI: Added _OSI(Module Device)
Oct 13 05:43:06.738428 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 05:43:06.738439 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 05:43:06.738448 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 05:43:06.738456 kernel: ACPI: Interpreter enabled
Oct 13 05:43:06.738464 kernel: ACPI: PM: (supports S0 S5)
Oct 13 05:43:06.738472 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 05:43:06.738480 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 05:43:06.738489 kernel: PCI: Using E820 reservations for host bridge windows
Oct 13 05:43:06.738497 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 13 05:43:06.738507 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 05:43:06.738762 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 05:43:06.738943 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 13 05:43:06.739141 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 13 05:43:06.739156 kernel: PCI host bridge to bus 0000:00
Oct 13 05:43:06.739442 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 13 05:43:06.739612 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 13 05:43:06.739770 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 13 05:43:06.739928 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 13 05:43:06.740114 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 13 05:43:06.740283 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 13 05:43:06.740470 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 05:43:06.740664 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 13 05:43:06.740852 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 13 05:43:06.741024 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 13 05:43:06.741330 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 13 05:43:06.741513 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 13 05:43:06.741685 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 13 05:43:06.741870 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 05:43:06.742048 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 13 05:43:06.742233 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 13 05:43:06.742435 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 13 05:43:06.742625 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 13 05:43:06.742802 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 13 05:43:06.742977 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 13 05:43:06.743151 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 13 05:43:06.743364 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 13 05:43:06.743547 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 13 05:43:06.743722 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 13 05:43:06.743895 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 13 05:43:06.744092 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 13 05:43:06.744287 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 13 05:43:06.744479 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 13 05:43:06.744674 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 13 05:43:06.744849 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 13 05:43:06.745023 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 13 05:43:06.745237 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 13 05:43:06.745513 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 13 05:43:06.745529 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 13 05:43:06.745543 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 13 05:43:06.745552 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 13 05:43:06.745561 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 13 05:43:06.745569 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 13 05:43:06.745577 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 13 05:43:06.745586 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 13 05:43:06.745594 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 13 05:43:06.745605 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 13 05:43:06.745613 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 13 05:43:06.745621 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 13 05:43:06.745630 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 13 05:43:06.745638 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 13 05:43:06.745647 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 13 05:43:06.745655 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 13 05:43:06.745665 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 13 05:43:06.745674 kernel: iommu: Default domain type: Translated
Oct 13 05:43:06.745682 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 13 05:43:06.745691 kernel: efivars: Registered efivars operations
Oct 13 05:43:06.745699 kernel: PCI: Using ACPI for IRQ routing
Oct 13 05:43:06.745708 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 13 05:43:06.745716 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Oct 13 05:43:06.745727 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Oct 13 05:43:06.745735 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Oct 13 05:43:06.745743 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Oct 13 05:43:06.745752 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Oct 13 05:43:06.745934 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 13 05:43:06.746108 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 13 05:43:06.746312 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 13 05:43:06.746328 kernel: vgaarb: loaded
Oct 13 05:43:06.746337 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 13 05:43:06.746346 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 13 05:43:06.746354 kernel: clocksource: Switched to clocksource kvm-clock
Oct 13 05:43:06.746362 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 05:43:06.746371 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 05:43:06.746379 kernel: pnp: PnP ACPI init
Oct 13 05:43:06.746570 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 13 05:43:06.746583 kernel: pnp: PnP ACPI: found 6 devices
Oct 13 05:43:06.746592 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 13 05:43:06.746600 kernel: NET: Registered PF_INET protocol family
Oct 13 05:43:06.746609 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 05:43:06.746618 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 13 05:43:06.746626 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 05:43:06.746638 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 05:43:06.746647 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 13 05:43:06.746655 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 13 05:43:06.746664 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 05:43:06.746672 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 05:43:06.746681 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 05:43:06.746692 kernel: NET: Registered PF_XDP protocol family
Oct 13 05:43:06.746866 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 13 05:43:06.747102 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 13 05:43:06.747312 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 13 05:43:06.747476 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 13 05:43:06.747635 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 13 05:43:06.747793 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 13 05:43:06.747960 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 13 05:43:06.748118 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 13 05:43:06.748129 kernel: PCI: CLS 0 bytes, default 64
Oct 13 05:43:06.748138 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Oct 13 05:43:06.748147 kernel: Initialise system trusted keyrings
Oct 13 05:43:06.748155 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 13 05:43:06.748164 kernel: Key type asymmetric registered
Oct 13 05:43:06.748176 kernel: Asymmetric key parser 'x509' registered
Oct 13 05:43:06.748200 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 13 05:43:06.748222 kernel: io scheduler mq-deadline registered
Oct 13 05:43:06.748231 kernel: io scheduler kyber registered
Oct 13 05:43:06.748239 kernel: io scheduler bfq registered
Oct 13 05:43:06.748248 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 13 05:43:06.748257 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 13 05:43:06.748269 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 13 05:43:06.748279 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 13 05:43:06.748300 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 05:43:06.748310 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:43:06.748319 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 13 05:43:06.748328 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 13 05:43:06.748336 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 13 05:43:06.748528 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 13 05:43:06.748541 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 13 05:43:06.748707 kernel: rtc_cmos 00:04: registered as rtc0
Oct 13 05:43:06.748875 kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T05:43:04 UTC (1760334184)
Oct 13 05:43:06.749043 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 13 05:43:06.749055 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 13 05:43:06.749067 kernel: efifb: probing for efifb
Oct 13 05:43:06.749076 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 13 05:43:06.749085 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 13 05:43:06.749094 kernel: efifb: scrolling: redraw
Oct 13 05:43:06.749102 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 13 05:43:06.749111 kernel: Console: switching to colour frame buffer device 160x50
Oct 13 05:43:06.749120 kernel: fb0: EFI VGA frame buffer device
Oct 13 05:43:06.749132 kernel: pstore: Using crash dump compression: deflate
Oct 13 05:43:06.749141 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 13 05:43:06.749150 kernel: NET: Registered PF_INET6 protocol family
Oct 13 05:43:06.749159 kernel: Segment Routing with IPv6
Oct 13 05:43:06.749168 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 05:43:06.749179 kernel: NET: Registered PF_PACKET protocol family
Oct 13 05:43:06.749188 kernel: Key type dns_resolver registered
Oct 13 05:43:06.749196 kernel: IPI shorthand broadcast: enabled
Oct 13 05:43:06.749216 kernel: sched_clock: Marking stable (1831002927, 335172285)->(2293910079, -127734867)
Oct 13 05:43:06.749225 kernel: registered taskstats version 1
Oct 13 05:43:06.749234 kernel: Loading compiled-in X.509 certificates
Oct 13 05:43:06.749243 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 9f1258ccc510afd4f2a37f4774c4b2e958d823b7'
Oct 13 05:43:06.749255 kernel: Demotion targets for Node 0: null
Oct 13 05:43:06.749264 kernel: Key type .fscrypt registered
Oct 13 05:43:06.749273 kernel: Key type fscrypt-provisioning registered
Oct 13 05:43:06.749281 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 05:43:06.749304 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:43:06.749313 kernel: ima: No architecture policies found Oct 13 05:43:06.749321 kernel: clk: Disabling unused clocks Oct 13 05:43:06.749333 kernel: Freeing unused kernel image (initmem) memory: 24532K Oct 13 05:43:06.749342 kernel: Write protecting the kernel read-only data: 24576k Oct 13 05:43:06.749351 kernel: Freeing unused kernel image (rodata/data gap) memory: 228K Oct 13 05:43:06.749359 kernel: Run /init as init process Oct 13 05:43:06.749371 kernel: with arguments: Oct 13 05:43:06.749379 kernel: /init Oct 13 05:43:06.749388 kernel: with environment: Oct 13 05:43:06.749400 kernel: HOME=/ Oct 13 05:43:06.749408 kernel: TERM=linux Oct 13 05:43:06.749417 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:43:06.749426 kernel: SCSI subsystem initialized Oct 13 05:43:06.749434 kernel: libata version 3.00 loaded. Oct 13 05:43:06.749617 kernel: ahci 0000:00:1f.2: version 3.0 Oct 13 05:43:06.749630 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 13 05:43:06.749846 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 13 05:43:06.750025 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 13 05:43:06.750202 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 13 05:43:06.750430 kernel: scsi host0: ahci Oct 13 05:43:06.750624 kernel: scsi host1: ahci Oct 13 05:43:06.750839 kernel: scsi host2: ahci Oct 13 05:43:06.751201 kernel: scsi host3: ahci Oct 13 05:43:06.751551 kernel: scsi host4: ahci Oct 13 05:43:06.751846 kernel: scsi host5: ahci Oct 13 05:43:06.751860 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 13 05:43:06.751869 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 13 05:43:06.751884 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 13 05:43:06.751893 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 
0xc1040280 irq 26 lpm-pol 1 Oct 13 05:43:06.751902 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 13 05:43:06.751911 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 13 05:43:06.751920 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 13 05:43:06.751929 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 13 05:43:06.751938 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 13 05:43:06.751949 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 13 05:43:06.751957 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 13 05:43:06.751966 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 13 05:43:06.751975 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:43:06.751984 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 13 05:43:06.751993 kernel: ata3.00: applying bridge limits Oct 13 05:43:06.752002 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:43:06.752010 kernel: ata3.00: configured for UDMA/100 Oct 13 05:43:06.752260 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 13 05:43:06.752472 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 13 05:43:06.752645 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 13 05:43:06.752658 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 05:43:06.752667 kernel: GPT:16515071 != 27000831 Oct 13 05:43:06.752681 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 05:43:06.752690 kernel: GPT:16515071 != 27000831 Oct 13 05:43:06.752698 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 13 05:43:06.752707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:43:06.752716 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:43:06.752911 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 13 05:43:06.752931 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 05:43:06.753152 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 13 05:43:06.753167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:43:06.753180 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:43:06.753191 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:43:06.753215 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 13 05:43:06.753228 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:43:06.753238 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:43:06.753252 kernel: raid6: avx2x4 gen() 29008 MB/s Oct 13 05:43:06.753263 kernel: raid6: avx2x2 gen() 29941 MB/s Oct 13 05:43:06.753274 kernel: raid6: avx2x1 gen() 25042 MB/s Oct 13 05:43:06.753284 kernel: raid6: using algorithm avx2x2 gen() 29941 MB/s Oct 13 05:43:06.753316 kernel: raid6: .... 
xor() 18901 MB/s, rmw enabled Oct 13 05:43:06.753327 kernel: raid6: using avx2x2 recovery algorithm Oct 13 05:43:06.753337 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:43:06.753347 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:43:06.753360 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:43:06.753371 kernel: xor: automatically using best checksumming function avx Oct 13 05:43:06.753382 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:43:06.753392 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:43:06.753405 kernel: BTRFS: device fsid e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (195) Oct 13 05:43:06.753416 kernel: BTRFS info (device dm-0): first mount of filesystem e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 Oct 13 05:43:06.753427 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:43:06.753440 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:43:06.753451 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:43:06.753461 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:43:06.753471 kernel: loop: module loaded Oct 13 05:43:06.753482 kernel: loop0: detected capacity change from 0 to 100048 Oct 13 05:43:06.753492 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:43:06.753504 systemd[1]: Successfully made /usr/ read-only. 
Oct 13 05:43:06.753521 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:43:06.753533 systemd[1]: Detected virtualization kvm. Oct 13 05:43:06.753543 systemd[1]: Detected architecture x86-64. Oct 13 05:43:06.753555 systemd[1]: Running in initrd. Oct 13 05:43:06.753565 systemd[1]: No hostname configured, using default hostname. Oct 13 05:43:06.753577 systemd[1]: Hostname set to . Oct 13 05:43:06.753590 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:43:06.753602 systemd[1]: Queued start job for default target initrd.target. Oct 13 05:43:06.753612 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:43:06.753624 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:43:06.753635 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:43:06.753647 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 05:43:06.753658 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:43:06.753673 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:43:06.753685 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:43:06.753696 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:43:06.753707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Oct 13 05:43:06.753718 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:43:06.753732 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:43:06.753743 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:43:06.753754 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:43:06.753765 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:43:06.753776 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:43:06.753787 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:43:06.753798 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:43:06.753812 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 05:43:06.753823 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:43:06.753834 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:43:06.753845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:43:06.753857 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:43:06.753868 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:43:06.753879 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:43:06.753893 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:43:06.753904 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:43:06.753916 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:43:06.753927 systemd[1]: Starting systemd-fsck-usr.service... 
Oct 13 05:43:06.753938 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:43:06.753949 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:43:06.753960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:43:06.753974 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:43:06.753986 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:43:06.753997 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:43:06.754011 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:43:06.754065 systemd-journald[328]: Collecting audit messages is disabled. Oct 13 05:43:06.754092 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 05:43:06.754107 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:43:06.754119 systemd-journald[328]: Journal started Oct 13 05:43:06.754144 systemd-journald[328]: Runtime Journal (/run/log/journal/9e209345e5ab4e408c05ee826d885a47) is 6M, max 48.2M, 42.2M free. Oct 13 05:43:06.758496 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:43:06.764474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:43:06.765354 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:43:06.775195 kernel: Bridge firewalling registered Oct 13 05:43:06.772784 systemd-modules-load[329]: Inserted module 'br_netfilter' Oct 13 05:43:06.772839 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:43:06.787457 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Oct 13 05:43:06.792127 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:43:06.796441 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:43:06.802472 systemd-tmpfiles[348]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:43:06.811677 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:43:06.817156 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:43:06.821939 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:43:06.823807 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:43:06.846425 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:43:06.847767 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 05:43:06.889027 dracut-cmdline[374]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:43:06.901795 systemd-resolved[360]: Positive Trust Anchors: Oct 13 05:43:06.901814 systemd-resolved[360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:43:06.901820 systemd-resolved[360]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:43:06.901861 systemd-resolved[360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:43:06.927684 systemd-resolved[360]: Defaulting to hostname 'linux'. Oct 13 05:43:06.929917 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:43:06.932191 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:43:07.072340 kernel: Loading iSCSI transport class v2.0-870. Oct 13 05:43:07.088342 kernel: iscsi: registered transport (tcp) Oct 13 05:43:07.150339 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:43:07.150460 kernel: QLogic iSCSI HBA Driver Oct 13 05:43:07.178266 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:43:07.202024 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:43:07.202740 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:43:07.266244 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:43:07.268659 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:43:07.272423 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:43:07.320540 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Oct 13 05:43:07.324966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:43:07.362777 systemd-udevd[611]: Using default interface naming scheme 'v257'. Oct 13 05:43:07.378835 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:43:07.385352 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:43:07.407795 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:43:07.412975 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:43:07.420979 dracut-pre-trigger[694]: rd.md=0: removing MD RAID activation Oct 13 05:43:07.457025 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:43:07.458600 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:43:07.476656 systemd-networkd[718]: lo: Link UP Oct 13 05:43:07.476666 systemd-networkd[718]: lo: Gained carrier Oct 13 05:43:07.477357 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:43:07.479847 systemd[1]: Reached target network.target - Network. Oct 13 05:43:07.561378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:43:07.565458 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:43:07.653072 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 05:43:07.665332 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:43:07.667028 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 05:43:07.677142 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Oct 13 05:43:07.684750 kernel: AES CTR mode by8 optimization enabled Oct 13 05:43:07.692318 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 13 05:43:07.694763 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 05:43:07.709123 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:43:07.719161 systemd-networkd[718]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:43:07.719175 systemd-networkd[718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:43:07.720606 systemd-networkd[718]: eth0: Link UP Oct 13 05:43:07.720916 systemd-networkd[718]: eth0: Gained carrier Oct 13 05:43:07.720925 systemd-networkd[718]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:43:07.740419 systemd-networkd[718]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:43:07.741812 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:43:07.741931 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:43:07.743439 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:43:07.752268 disk-uuid[841]: Primary Header is updated. Oct 13 05:43:07.752268 disk-uuid[841]: Secondary Entries is updated. Oct 13 05:43:07.752268 disk-uuid[841]: Secondary Header is updated. Oct 13 05:43:07.746464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:43:07.806335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:43:07.839374 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Oct 13 05:43:07.843236 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:43:07.843371 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:43:07.847636 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:43:07.855506 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:43:07.897282 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:43:08.895184 disk-uuid[848]: Warning: The kernel is still using the old partition table. Oct 13 05:43:08.895184 disk-uuid[848]: The new table will be used at the next reboot or after you Oct 13 05:43:08.895184 disk-uuid[848]: run partprobe(8) or kpartx(8) Oct 13 05:43:08.895184 disk-uuid[848]: The operation has completed successfully. Oct 13 05:43:08.966137 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:43:08.966328 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:43:08.971684 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:43:09.002508 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (880) Oct 13 05:43:09.002571 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:43:09.002598 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:43:09.007591 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:43:09.007616 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:43:09.015319 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:43:09.015765 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:43:09.017135 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 13 05:43:09.224442 ignition[899]: Ignition 2.22.0 Oct 13 05:43:09.224456 ignition[899]: Stage: fetch-offline Oct 13 05:43:09.224512 ignition[899]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:43:09.224525 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:43:09.224652 ignition[899]: parsed url from cmdline: "" Oct 13 05:43:09.224656 ignition[899]: no config URL provided Oct 13 05:43:09.224661 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:43:09.224673 ignition[899]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:43:09.224718 ignition[899]: op(1): [started] loading QEMU firmware config module Oct 13 05:43:09.224723 ignition[899]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 05:43:09.239377 ignition[899]: op(1): [finished] loading QEMU firmware config module Oct 13 05:43:09.325085 ignition[899]: parsing config with SHA512: d0e3055b6f328078a15d5a5a09b1167fe6f2fc4164cb75189e51be3ac6474a1e7e073a69de41f6995bf9d65886c2d9f01076348d5ca137f78f550e18ad8f5268 Oct 13 05:43:09.337275 unknown[899]: fetched base config from "system" Oct 13 05:43:09.337317 unknown[899]: fetched user config from "qemu" Oct 13 05:43:09.337862 ignition[899]: fetch-offline: fetch-offline passed Oct 13 05:43:09.341896 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:43:09.337964 ignition[899]: Ignition finished successfully Oct 13 05:43:09.345363 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 05:43:09.346750 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 13 05:43:09.441013 ignition[912]: Ignition 2.22.0 Oct 13 05:43:09.441030 ignition[912]: Stage: kargs Oct 13 05:43:09.441216 ignition[912]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:43:09.441229 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:43:09.442016 ignition[912]: kargs: kargs passed Oct 13 05:43:09.445977 systemd-networkd[718]: eth0: Gained IPv6LL Oct 13 05:43:09.442064 ignition[912]: Ignition finished successfully Oct 13 05:43:09.448022 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:43:09.452815 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 05:43:09.501917 ignition[920]: Ignition 2.22.0 Oct 13 05:43:09.501934 ignition[920]: Stage: disks Oct 13 05:43:09.502113 ignition[920]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:43:09.502125 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:43:09.506574 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:43:09.502922 ignition[920]: disks: disks passed Oct 13 05:43:09.510865 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:43:09.502975 ignition[920]: Ignition finished successfully Oct 13 05:43:09.514618 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:43:09.517040 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:43:09.520187 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:43:09.524485 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:43:09.526119 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 05:43:09.576178 systemd-fsck[930]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 13 05:43:09.667620 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Oct 13 05:43:09.674174 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:43:09.807321 kernel: EXT4-fs (vda9): mounted filesystem c7d6ef00-6dd1-40b4-91f2-c4c5965e3cac r/w with ordered data mode. Quota mode: none. Oct 13 05:43:09.807997 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:43:09.808680 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:43:09.814871 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:43:09.815982 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:43:09.819384 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 05:43:09.819425 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:43:09.819451 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:43:09.837775 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:43:09.840682 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 05:43:09.849688 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (939) Oct 13 05:43:09.849717 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:43:09.849743 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:43:09.853064 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:43:09.853095 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:43:09.855265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 13 05:43:09.902063 initrd-setup-root[963]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:43:09.907364 initrd-setup-root[970]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:43:09.913907 initrd-setup-root[977]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:43:09.920193 initrd-setup-root[984]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:43:10.028782 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:43:10.032838 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:43:10.036201 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:43:10.058799 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:43:10.061415 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:43:10.077624 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 05:43:10.180031 ignition[1053]: INFO : Ignition 2.22.0 Oct 13 05:43:10.180031 ignition[1053]: INFO : Stage: mount Oct 13 05:43:10.183455 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:43:10.183455 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:43:10.183455 ignition[1053]: INFO : mount: mount passed Oct 13 05:43:10.183455 ignition[1053]: INFO : Ignition finished successfully Oct 13 05:43:10.185719 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 05:43:10.191010 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 05:43:10.214794 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 13 05:43:10.235337 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1065) Oct 13 05:43:10.238622 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:43:10.238647 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:43:10.242624 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:43:10.242687 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:43:10.244460 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:43:10.381470 ignition[1082]: INFO : Ignition 2.22.0 Oct 13 05:43:10.381470 ignition[1082]: INFO : Stage: files Oct 13 05:43:10.386853 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:43:10.386853 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:43:10.391846 ignition[1082]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:43:10.393924 ignition[1082]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:43:10.393924 ignition[1082]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:43:10.399652 ignition[1082]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:43:10.402304 ignition[1082]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:43:10.405070 unknown[1082]: wrote ssh authorized keys file for user: core Oct 13 05:43:10.406992 ignition[1082]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:43:10.409478 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 13 05:43:10.409478 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 
13 05:43:10.672905 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:43:10.770535 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 13 05:43:10.773999 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:43:10.773999 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 05:43:10.773999 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:43:10.773999 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:43:10.773999 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:43:10.773999 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:43:10.773999 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:43:10.773999 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:43:10.906801 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:43:10.926172 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:43:10.926172 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 13 05:43:11.124571 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 13 05:43:11.124571 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 13 05:43:11.144957 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 13 05:43:11.430399 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 05:43:12.101823 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 13 05:43:12.101823 ignition[1082]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 05:43:12.122092 ignition[1082]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:43:12.311728 ignition[1082]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:43:12.311728 ignition[1082]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 05:43:12.311728 ignition[1082]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 13 05:43:12.311728 ignition[1082]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:43:12.323202 ignition[1082]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:43:12.323202 
ignition[1082]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 13 05:43:12.323202 ignition[1082]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 05:43:12.339337 ignition[1082]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:43:12.348232 ignition[1082]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:43:12.350982 ignition[1082]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 05:43:12.350982 ignition[1082]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:43:12.350982 ignition[1082]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:43:12.350982 ignition[1082]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:43:12.350982 ignition[1082]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:43:12.350982 ignition[1082]: INFO : files: files passed Oct 13 05:43:12.350982 ignition[1082]: INFO : Ignition finished successfully Oct 13 05:43:12.360922 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:43:12.368580 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:43:12.372973 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 05:43:12.391511 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 05:43:12.391665 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Oct 13 05:43:12.397021 initrd-setup-root-after-ignition[1111]: grep: /sysroot/oem/oem-release: No such file or directory Oct 13 05:43:12.401342 initrd-setup-root-after-ignition[1113]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:43:12.401342 initrd-setup-root-after-ignition[1113]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:43:12.407543 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:43:12.412180 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:43:12.412584 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 05:43:12.413980 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 05:43:12.484828 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 05:43:12.484969 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 05:43:12.486839 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 05:43:12.492070 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 05:43:12.497266 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 05:43:12.498524 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 05:43:12.542754 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:43:12.545080 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 05:43:12.578313 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:43:12.578570 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Oct 13 05:43:12.582453 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:43:12.586390 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 05:43:12.588451 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 05:43:12.588701 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:43:12.596996 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 05:43:12.598781 systemd[1]: Stopped target basic.target - Basic System. Oct 13 05:43:12.600400 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 05:43:12.604680 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:43:12.606709 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 05:43:12.609934 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:43:12.610751 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 05:43:12.656464 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:43:12.658084 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 05:43:12.661864 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 05:43:12.664975 systemd[1]: Stopped target swap.target - Swaps. Oct 13 05:43:12.669400 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 05:43:12.669545 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:43:12.675608 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:43:12.675769 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:43:12.685795 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 05:43:12.689414 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 13 05:43:12.689786 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 05:43:12.689900 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 05:43:12.695511 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 05:43:12.695629 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:43:12.701388 systemd[1]: Stopped target paths.target - Path Units. Oct 13 05:43:12.702967 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 05:43:12.709392 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:43:12.709563 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 05:43:12.713779 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 05:43:12.720510 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 05:43:12.720604 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:43:12.726541 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 05:43:12.726628 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:43:12.729586 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 05:43:12.729745 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:43:12.733073 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 05:43:12.733189 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 05:43:12.739462 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 05:43:12.742190 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 05:43:12.744648 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 05:43:12.744796 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Oct 13 05:43:12.753107 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 05:43:12.753343 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:43:12.756483 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 05:43:12.756685 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:43:12.771254 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 05:43:12.771560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 05:43:12.803838 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 05:43:12.808311 ignition[1138]: INFO : Ignition 2.22.0 Oct 13 05:43:12.808311 ignition[1138]: INFO : Stage: umount Oct 13 05:43:12.811793 ignition[1138]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:43:12.811793 ignition[1138]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:43:12.811793 ignition[1138]: INFO : umount: umount passed Oct 13 05:43:12.811793 ignition[1138]: INFO : Ignition finished successfully Oct 13 05:43:12.820186 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 05:43:12.820377 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 05:43:12.822190 systemd[1]: Stopped target network.target - Network. Oct 13 05:43:12.826504 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 05:43:12.826585 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 05:43:12.827918 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 05:43:12.827983 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 05:43:12.828825 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 05:43:12.828881 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 05:43:12.834144 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Oct 13 05:43:12.834208 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 05:43:12.838871 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 05:43:12.842213 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 05:43:12.857522 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 05:43:12.857719 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 05:43:12.864831 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 05:43:12.864975 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 05:43:12.872980 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 05:43:12.873224 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 05:43:12.873311 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:43:12.874977 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 05:43:12.880220 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 05:43:12.880387 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:43:12.885494 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:43:12.885576 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:43:12.887138 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 05:43:12.887204 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 05:43:12.892535 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:43:12.895281 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 05:43:12.895429 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 05:43:12.900572 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Oct 13 05:43:12.900729 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 05:43:12.921994 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 05:43:12.925492 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:43:12.926017 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 05:43:12.926082 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 05:43:12.932163 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 05:43:12.932223 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:43:12.935575 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 05:43:12.935664 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:43:12.939691 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 05:43:12.939759 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 05:43:12.945825 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 05:43:12.945901 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:43:12.952752 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 05:43:12.953405 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 05:43:12.953483 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:43:12.957380 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 05:43:12.957473 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:43:12.958218 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 13 05:43:12.958271 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 13 05:43:12.967954 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 05:43:12.968042 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:43:12.971850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:43:12.971937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:43:12.977151 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 05:43:12.977314 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 05:43:12.980645 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 05:43:12.980770 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 05:43:12.986626 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 05:43:12.990324 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 05:43:13.019913 systemd[1]: Switching root. Oct 13 05:43:13.058629 systemd-journald[328]: Journal stopped Oct 13 05:43:15.026514 systemd-journald[328]: Received SIGTERM from PID 1 (systemd). 
Oct 13 05:43:15.026622 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 05:43:15.026639 kernel: SELinux: policy capability open_perms=1 Oct 13 05:43:15.026652 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 05:43:15.026665 kernel: SELinux: policy capability always_check_network=0 Oct 13 05:43:15.026689 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 05:43:15.026704 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 05:43:15.026727 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 05:43:15.026742 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 05:43:15.026757 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 05:43:15.026773 kernel: audit: type=1403 audit(1760334193.963:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 05:43:15.026795 systemd[1]: Successfully loaded SELinux policy in 70.375ms. Oct 13 05:43:15.026832 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.666ms. Oct 13 05:43:15.026850 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:43:15.026868 systemd[1]: Detected virtualization kvm. Oct 13 05:43:15.026883 systemd[1]: Detected architecture x86-64. Oct 13 05:43:15.026899 systemd[1]: Detected first boot. Oct 13 05:43:15.026913 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:43:15.026926 zram_generator::config[1184]: No configuration found. 
Oct 13 05:43:15.026950 kernel: Guest personality initialized and is inactive Oct 13 05:43:15.026963 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 13 05:43:15.026975 kernel: Initialized host personality Oct 13 05:43:15.026987 kernel: NET: Registered PF_VSOCK protocol family Oct 13 05:43:15.027008 systemd[1]: Populated /etc with preset unit settings. Oct 13 05:43:15.027021 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 05:43:15.027038 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 05:43:15.027057 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 05:43:15.027073 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 05:43:15.027091 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 05:43:15.027108 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 05:43:15.027127 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 05:43:15.027141 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 05:43:15.027154 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 05:43:15.027171 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 05:43:15.027184 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 05:43:15.027198 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:43:15.027211 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:43:15.027224 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 05:43:15.027237 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Oct 13 05:43:15.027249 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 05:43:15.027266 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:43:15.027279 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 05:43:15.027306 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:43:15.027320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:43:15.027333 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 05:43:15.027346 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 05:43:15.027362 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 05:43:15.027375 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 05:43:15.027388 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:43:15.027401 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:43:15.027414 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:43:15.027428 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:43:15.027441 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 05:43:15.027459 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 05:43:15.027472 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 05:43:15.027485 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:43:15.027498 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:43:15.027511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:43:15.027524 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Oct 13 05:43:15.027537 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 05:43:15.027559 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 05:43:15.027575 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 05:43:15.027588 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:43:15.027601 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 05:43:15.027614 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 05:43:15.027626 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 05:43:15.027641 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 05:43:15.027660 systemd[1]: Reached target machines.target - Containers. Oct 13 05:43:15.027673 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 05:43:15.027686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:43:15.027699 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:43:15.027713 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 05:43:15.027726 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:43:15.027739 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:43:15.027757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:43:15.027770 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 05:43:15.027783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Oct 13 05:43:15.027797 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 05:43:15.027812 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 05:43:15.027825 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 05:43:15.027841 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 05:43:15.027854 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 05:43:15.027867 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:43:15.027880 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:43:15.027893 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:43:15.027928 systemd-journald[1248]: Collecting audit messages is disabled. Oct 13 05:43:15.027955 kernel: fuse: init (API version 7.41) Oct 13 05:43:15.027971 systemd-journald[1248]: Journal started Oct 13 05:43:15.028002 systemd-journald[1248]: Runtime Journal (/run/log/journal/9e209345e5ab4e408c05ee826d885a47) is 6M, max 48.2M, 42.2M free. Oct 13 05:43:14.549693 systemd[1]: Queued start job for default target multi-user.target. Oct 13 05:43:14.572389 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 05:43:14.572923 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 05:43:15.032317 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:43:15.060455 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 05:43:15.065319 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Oct 13 05:43:15.072981 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:43:15.076339 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:43:15.078489 kernel: ACPI: bus type drm_connector registered
Oct 13 05:43:15.082229 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:43:15.083646 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 05:43:15.085600 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 05:43:15.087617 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 05:43:15.089580 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 05:43:15.091631 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 05:43:15.093717 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 13 05:43:15.108699 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:43:15.111267 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 05:43:15.111515 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 13 05:43:15.114114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:43:15.114352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:43:15.116687 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 05:43:15.116919 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 05:43:15.119086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:43:15.119326 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:43:15.121789 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 05:43:15.122061 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 13 05:43:15.124371 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:43:15.124633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:43:15.126846 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:43:15.129181 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:43:15.179705 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 13 05:43:15.182486 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 13 05:43:15.191879 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:43:15.208068 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:43:15.296384 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 13 05:43:15.300115 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 13 05:43:15.326704 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 13 05:43:15.328592 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 13 05:43:15.328628 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 05:43:15.331432 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 13 05:43:15.333682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:43:15.339844 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 13 05:43:15.343104 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 13 05:43:15.345170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 05:43:15.346432 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 13 05:43:15.365540 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 05:43:15.366890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:43:15.370156 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 13 05:43:15.373270 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 05:43:15.377704 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 13 05:43:15.379968 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 13 05:43:15.437642 systemd-journald[1248]: Time spent on flushing to /var/log/journal/9e209345e5ab4e408c05ee826d885a47 is 22.117ms for 1034 entries.
Oct 13 05:43:15.437642 systemd-journald[1248]: System Journal (/var/log/journal/9e209345e5ab4e408c05ee826d885a47) is 8M, max 163.5M, 155.5M free.
Oct 13 05:43:15.499946 systemd-journald[1248]: Received client request to flush runtime journal.
Oct 13 05:43:15.500016 kernel: loop1: detected capacity change from 0 to 128048
Oct 13 05:43:15.500057 kernel: loop2: detected capacity change from 0 to 110984
Oct 13 05:43:15.443669 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 13 05:43:15.450680 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Oct 13 05:43:15.450697 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Oct 13 05:43:15.451479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:43:15.457360 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:43:15.461698 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 13 05:43:15.464209 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 13 05:43:15.469447 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 13 05:43:15.474833 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 13 05:43:15.502676 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 13 05:43:15.516370 kernel: loop3: detected capacity change from 0 to 224512
Oct 13 05:43:15.517603 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 13 05:43:15.520393 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 13 05:43:15.526236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 05:43:15.529107 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 05:43:15.544341 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 13 05:43:15.550347 kernel: loop4: detected capacity change from 0 to 128048
Oct 13 05:43:15.560788 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Oct 13 05:43:15.561207 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Oct 13 05:43:15.564321 kernel: loop5: detected capacity change from 0 to 110984
Oct 13 05:43:15.568468 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:43:15.574825 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 13 05:43:15.577322 kernel: loop6: detected capacity change from 0 to 224512
Oct 13 05:43:15.583546 (sd-merge)[1327]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 13 05:43:15.587584 (sd-merge)[1327]: Merged extensions into '/usr'.
Oct 13 05:43:15.592810 systemd[1]: Reload requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 13 05:43:15.592836 systemd[1]: Reloading...
Oct 13 05:43:15.660365 zram_generator::config[1365]: No configuration found.
Oct 13 05:43:15.704731 systemd-resolved[1324]: Positive Trust Anchors:
Oct 13 05:43:15.704751 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 05:43:15.704756 systemd-resolved[1324]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 13 05:43:15.704787 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 05:43:15.710087 systemd-resolved[1324]: Defaulting to hostname 'linux'.
Oct 13 05:43:15.856941 systemd[1]: Reloading finished in 263 ms.
Oct 13 05:43:15.879675 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 13 05:43:15.881866 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 05:43:15.884056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 13 05:43:15.888772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:43:15.909036 systemd[1]: Starting ensure-sysext.service...
Oct 13 05:43:15.911610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 05:43:15.927434 systemd[1]: Reload requested from client PID 1398 ('systemctl') (unit ensure-sysext.service)...
Oct 13 05:43:15.927452 systemd[1]: Reloading...
Oct 13 05:43:15.935038 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 13 05:43:15.935077 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 13 05:43:15.935455 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 13 05:43:15.935742 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 13 05:43:15.936769 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 13 05:43:15.937089 systemd-tmpfiles[1399]: ACLs are not supported, ignoring.
Oct 13 05:43:15.937167 systemd-tmpfiles[1399]: ACLs are not supported, ignoring.
Oct 13 05:43:15.943267 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 05:43:15.943281 systemd-tmpfiles[1399]: Skipping /boot
Oct 13 05:43:15.957784 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 05:43:15.957940 systemd-tmpfiles[1399]: Skipping /boot
Oct 13 05:43:15.985337 zram_generator::config[1429]: No configuration found.
Oct 13 05:43:16.232201 systemd[1]: Reloading finished in 304 ms.
Oct 13 05:43:16.246700 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 13 05:43:16.276539 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:43:16.288769 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 05:43:16.291711 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 13 05:43:16.304531 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 13 05:43:16.308549 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 13 05:43:16.312077 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:43:16.315380 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 13 05:43:16.320771 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:43:16.320942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:43:16.325529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:43:16.331638 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:43:16.337174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:43:16.339073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:43:16.339186 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:43:16.339281 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:43:16.341836 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:43:16.342232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:43:16.359051 systemd-udevd[1473]: Using default interface naming scheme 'v257'.
Oct 13 05:43:16.359061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:43:16.360980 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:43:16.366242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:43:16.370817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:43:16.371389 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:43:16.371509 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:43:16.373478 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 13 05:43:16.377324 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 13 05:43:16.380121 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:43:16.380396 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:43:16.384055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:43:16.384338 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:43:16.392586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:43:16.392813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:43:16.394727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:43:16.398604 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:43:16.404348 augenrules[1504]: No rules
Oct 13 05:43:16.406804 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:43:16.409059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 05:43:16.409223 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:43:16.409512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:43:16.411071 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 05:43:16.411886 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 05:43:16.415912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 05:43:16.424511 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 05:43:16.428445 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:43:16.432727 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 05:43:16.433458 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 05:43:16.436091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 05:43:16.437622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 05:43:16.440208 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 05:43:16.440607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 05:43:16.451491 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 13 05:43:16.455829 systemd[1]: Finished ensure-sysext.service.
Oct 13 05:43:16.478549 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 05:43:16.480757 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 05:43:16.480831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 05:43:16.484656 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 13 05:43:16.486876 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 13 05:43:16.522059 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 13 05:43:16.526856 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 13 05:43:16.556693 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 13 05:43:16.559578 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 13 05:43:16.614327 kernel: mousedev: PS/2 mouse device common for all mice
Oct 13 05:43:16.616021 systemd-networkd[1533]: lo: Link UP
Oct 13 05:43:16.616274 systemd-networkd[1533]: lo: Gained carrier
Oct 13 05:43:16.623389 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 13 05:43:16.619611 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 13 05:43:16.620392 systemd-networkd[1533]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 05:43:16.620398 systemd-networkd[1533]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 05:43:16.621196 systemd-networkd[1533]: eth0: Link UP
Oct 13 05:43:16.621440 systemd-networkd[1533]: eth0: Gained carrier
Oct 13 05:43:16.621454 systemd-networkd[1533]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 13 05:43:16.623508 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 05:43:16.625410 systemd[1]: Reached target network.target - Network.
Oct 13 05:43:16.628134 systemd[1]: Reached target time-set.target - System Time Set.
Oct 13 05:43:16.635604 kernel: ACPI: button: Power Button [PWRF]
Oct 13 05:43:16.633448 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 13 05:43:16.636942 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 13 05:43:16.647860 systemd-networkd[1533]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 13 05:43:16.649750 systemd-timesyncd[1535]: Network configuration changed, trying to establish connection.
Oct 13 05:43:17.569110 systemd-timesyncd[1535]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 13 05:43:17.569172 systemd-timesyncd[1535]: Initial clock synchronization to Mon 2025-10-13 05:43:17.568975 UTC.
Oct 13 05:43:17.572143 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 13 05:43:17.572106 systemd-resolved[1324]: Clock change detected. Flushing caches.
Oct 13 05:43:17.576106 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 13 05:43:17.576375 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 13 05:43:17.603875 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 13 05:43:17.714149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:43:17.753320 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:43:17.753774 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:43:17.765260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:43:17.784147 kernel: kvm_amd: TSC scaling supported
Oct 13 05:43:17.784205 kernel: kvm_amd: Nested Virtualization enabled
Oct 13 05:43:17.784255 kernel: kvm_amd: Nested Paging enabled
Oct 13 05:43:17.785737 kernel: kvm_amd: LBR virtualization supported
Oct 13 05:43:17.785762 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 13 05:43:17.786788 kernel: kvm_amd: Virtual GIF supported
Oct 13 05:43:17.827954 kernel: EDAC MC: Ver: 3.0.0
Oct 13 05:43:17.861880 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:43:17.873563 ldconfig[1470]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 13 05:43:17.880837 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 13 05:43:17.884542 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 13 05:43:17.921014 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 13 05:43:17.923178 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 05:43:17.925016 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 13 05:43:17.927106 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 13 05:43:17.929285 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 13 05:43:17.931364 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 13 05:43:17.933323 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 13 05:43:17.935359 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 13 05:43:17.937485 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 13 05:43:17.937520 systemd[1]: Reached target paths.target - Path Units.
Oct 13 05:43:17.939069 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 05:43:17.941962 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 13 05:43:17.945832 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 13 05:43:17.949742 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 13 05:43:17.952048 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 13 05:43:17.954136 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 13 05:43:17.959004 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 13 05:43:17.961198 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 13 05:43:17.963799 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 13 05:43:17.966281 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 05:43:17.967849 systemd[1]: Reached target basic.target - Basic System.
Oct 13 05:43:17.969390 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 13 05:43:17.969421 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 13 05:43:17.970655 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 13 05:43:17.973519 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 13 05:43:17.976233 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 13 05:43:17.980065 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 13 05:43:17.983281 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 13 05:43:17.984966 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 13 05:43:17.986616 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 13 05:43:17.990285 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 13 05:43:17.990603 jq[1593]: false
Oct 13 05:43:17.994642 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 13 05:43:17.998906 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Refreshing passwd entry cache
Oct 13 05:43:17.999226 oslogin_cache_refresh[1595]: Refreshing passwd entry cache
Oct 13 05:43:18.001618 extend-filesystems[1594]: Found /dev/vda6
Oct 13 05:43:18.004776 extend-filesystems[1594]: Found /dev/vda9
Oct 13 05:43:18.007861 extend-filesystems[1594]: Checking size of /dev/vda9
Oct 13 05:43:18.009663 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 13 05:43:18.012715 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Failure getting users, quitting
Oct 13 05:43:18.012715 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 13 05:43:18.012715 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Refreshing group entry cache
Oct 13 05:43:18.012078 oslogin_cache_refresh[1595]: Failure getting users, quitting
Oct 13 05:43:18.012103 oslogin_cache_refresh[1595]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 13 05:43:18.012183 oslogin_cache_refresh[1595]: Refreshing group entry cache
Oct 13 05:43:18.014193 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 13 05:43:18.019628 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 13 05:43:18.022318 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Failure getting groups, quitting
Oct 13 05:43:18.022318 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 13 05:43:18.021408 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 13 05:43:18.021375 oslogin_cache_refresh[1595]: Failure getting groups, quitting
Oct 13 05:43:18.021866 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 13 05:43:18.021386 oslogin_cache_refresh[1595]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 13 05:43:18.022728 systemd[1]: Starting update-engine.service - Update Engine...
Oct 13 05:43:18.024090 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 13 05:43:18.027312 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 13 05:43:18.029753 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 13 05:43:18.030047 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 13 05:43:18.030363 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 13 05:43:18.030635 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 13 05:43:18.034658 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 13 05:43:18.034913 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 13 05:43:18.037356 systemd[1]: motdgen.service: Deactivated successfully.
Oct 13 05:43:18.037621 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 13 05:43:18.050173 update_engine[1610]: I20251013 05:43:18.049638 1610 main.cc:92] Flatcar Update Engine starting
Oct 13 05:43:18.050459 jq[1611]: true
Oct 13 05:43:18.055654 (ntainerd)[1633]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 13 05:43:18.062855 tar[1616]: linux-amd64/LICENSE
Oct 13 05:43:18.065940 tar[1616]: linux-amd64/helm
Oct 13 05:43:18.065988 extend-filesystems[1594]: Resized partition /dev/vda9
Oct 13 05:43:18.078860 jq[1634]: true
Oct 13 05:43:18.143386 extend-filesystems[1658]: resize2fs 1.47.3 (8-Jul-2025)
Oct 13 05:43:18.165315 systemd-logind[1609]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 13 05:43:18.165345 systemd-logind[1609]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 13 05:43:18.166548 systemd-logind[1609]: New seat seat0.
Oct 13 05:43:18.171911 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 13 05:43:18.221153 dbus-daemon[1591]: [system] SELinux support is enabled
Oct 13 05:43:18.229481 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 13 05:43:18.221393 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 13 05:43:18.225530 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 13 05:43:18.225557 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 13 05:43:18.229866 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 13 05:43:18.229967 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 13 05:43:18.231980 update_engine[1610]: I20251013 05:43:18.231722 1610 update_check_scheduler.cc:74] Next update check in 9m2s
Oct 13 05:43:18.237154 systemd[1]: Started update-engine.service - Update Engine.
Oct 13 05:43:18.237417 dbus-daemon[1591]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 13 05:43:18.241147 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 13 05:43:18.521338 locksmithd[1659]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 13 05:43:18.529958 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 13 05:43:18.554030 extend-filesystems[1658]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 13 05:43:18.554030 extend-filesystems[1658]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 13 05:43:18.554030 extend-filesystems[1658]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 13 05:43:18.558858 extend-filesystems[1594]: Resized filesystem in /dev/vda9
Oct 13 05:43:18.555377 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 13 05:43:18.568575 bash[1657]: Updated "/home/core/.ssh/authorized_keys"
Oct 13 05:43:18.555691 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 13 05:43:18.564680 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 13 05:43:18.571179 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 13 05:43:18.591590 tar[1616]: linux-amd64/README.md
Oct 13 05:43:18.603202 sshd_keygen[1630]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 13 05:43:18.613238 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 13 05:43:18.632183 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 13 05:43:18.636073 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 13 05:43:18.667574 systemd[1]: issuegen.service: Deactivated successfully.
Oct 13 05:43:18.667912 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 13 05:43:18.671949 containerd[1633]: time="2025-10-13T05:43:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 13 05:43:18.671961 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 13 05:43:18.672824 containerd[1633]: time="2025-10-13T05:43:18.672782177Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 13 05:43:18.684892 containerd[1633]: time="2025-10-13T05:43:18.684851911Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.411µs"
Oct 13 05:43:18.685008 containerd[1633]: time="2025-10-13T05:43:18.684984841Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 13 05:43:18.685083 containerd[1633]: time="2025-10-13T05:43:18.685065883Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 13 05:43:18.685352 containerd[1633]: time="2025-10-13T05:43:18.685328054Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 13 05:43:18.685455 containerd[1633]: time="2025-10-13T05:43:18.685434564Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 13 05:43:18.685563 containerd[1633]: time="2025-10-13T05:43:18.685530264Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 13 05:43:18.685720 containerd[1633]: time="2025-10-13T05:43:18.685695133Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 13 05:43:18.685782 containerd[1633]: time="2025-10-13T05:43:18.685767599Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 05:43:18.686148 containerd[1633]: time="2025-10-13T05:43:18.686119549Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 13 05:43:18.686854 containerd[1633]: time="2025-10-13T05:43:18.686212834Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 05:43:18.686854 containerd[1633]: time="2025-10-13T05:43:18.686235607Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 13 05:43:18.686854 containerd[1633]: time="2025-10-13T05:43:18.686246958Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 13 05:43:18.686854 containerd[1633]: time="2025-10-13T05:43:18.686366613Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 13 05:43:18.686854 containerd[1633]: time="2025-10-13T05:43:18.686687234Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 05:43:18.686854 containerd[1633]: time="2025-10-13T05:43:18.686729444Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 13 05:43:18.686854 containerd[1633]: time="2025-10-13T05:43:18.686743239Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 13 05:43:18.686854 containerd[1633]: time="2025-10-13T05:43:18.686786881Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 13 05:43:18.687245 containerd[1633]: time="2025-10-13T05:43:18.687132349Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 13 05:43:18.687356 containerd[1633]: time="2025-10-13T05:43:18.687323248Z" level=info msg="metadata content store policy set" policy=shared
Oct 13 05:43:18.693473 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 13 05:43:18.695663 containerd[1633]: time="2025-10-13T05:43:18.695601528Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 13 05:43:18.695739 containerd[1633]: time="2025-10-13T05:43:18.695692459Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 13 05:43:18.695739 containerd[1633]: time="2025-10-13T05:43:18.695712837Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 13 05:43:18.695739 containerd[1633]: time="2025-10-13T05:43:18.695733406Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 13 05:43:18.695814 containerd[1633]: time="2025-10-13T05:43:18.695746660Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 13 05:43:18.695814 containerd[1633]: time="2025-10-13T05:43:18.695758032Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 13 05:43:18.695814 containerd[1633]: time="2025-10-13T05:43:18.695769814Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 13 05:43:18.695814 containerd[1633]: time="2025-10-13T05:43:18.695781997Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 13 05:43:18.695814 containerd[1633]: time="2025-10-13T05:43:18.695796133Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 13 05:43:18.695814 containerd[1633]: time="2025-10-13T05:43:18.695807034Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 13 05:43:18.695814 containerd[1633]: time="2025-10-13T05:43:18.695817463Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 13 05:43:18.696015 containerd[1633]: time="2025-10-13T05:43:18.695832912Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 13 05:43:18.696015 containerd[1633]: time="2025-10-13T05:43:18.696008171Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 13 05:43:18.696066 containerd[1633]: time="2025-10-13T05:43:18.696029621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 13 05:43:18.696066 containerd[1633]: time="2025-10-13T05:43:18.696046222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 13 05:43:18.696066 containerd[1633]: time="2025-10-13T05:43:18.696057043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 13 05:43:18.696066 containerd[1633]: time="2025-10-13T05:43:18.696067402Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 13 05:43:18.696178 containerd[1633]: time="2025-10-13T05:43:18.696080336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 13 05:43:18.696178 containerd[1633]: time="2025-10-13T05:43:18.696091558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 13 05:43:18.696178 containerd[1633]: time="2025-10-13T05:43:18.696101947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 13 05:43:18.696178 containerd[1633]: time="2025-10-13T05:43:18.696112316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 13 05:43:18.696178 containerd[1633]: time="2025-10-13T05:43:18.696122836Z" level=info msg="loading plugin"
id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:43:18.696178 containerd[1633]: time="2025-10-13T05:43:18.696134498Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:43:18.696329 containerd[1633]: time="2025-10-13T05:43:18.696203748Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:43:18.696329 containerd[1633]: time="2025-10-13T05:43:18.696223645Z" level=info msg="Start snapshots syncer" Oct 13 05:43:18.696329 containerd[1633]: time="2025-10-13T05:43:18.696250586Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:43:18.696574 containerd[1633]: time="2025-10-13T05:43:18.696475077Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController
\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:43:18.696574 containerd[1633]: time="2025-10-13T05:43:18.696542504Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:43:18.696744 containerd[1633]: time="2025-10-13T05:43:18.696618616Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:43:18.696744 containerd[1633]: time="2025-10-13T05:43:18.696729254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:43:18.696810 containerd[1633]: time="2025-10-13T05:43:18.696749682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:43:18.696810 containerd[1633]: time="2025-10-13T05:43:18.696763408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:43:18.696810 containerd[1633]: time="2025-10-13T05:43:18.696774038Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:43:18.696810 containerd[1633]: time="2025-10-13T05:43:18.696786231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:43:18.696810 containerd[1633]: time="2025-10-13T05:43:18.696797071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks 
type=io.containerd.grpc.v1 Oct 13 05:43:18.696810 containerd[1633]: time="2025-10-13T05:43:18.696807851Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:43:18.696979 containerd[1633]: time="2025-10-13T05:43:18.696829181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:43:18.696979 containerd[1633]: time="2025-10-13T05:43:18.696841244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:43:18.696979 containerd[1633]: time="2025-10-13T05:43:18.696852906Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:43:18.696979 containerd[1633]: time="2025-10-13T05:43:18.696885948Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:43:18.696979 containerd[1633]: time="2025-10-13T05:43:18.696899744Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:43:18.696979 containerd[1633]: time="2025-10-13T05:43:18.696908711Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:43:18.697147 containerd[1633]: time="2025-10-13T05:43:18.697057931Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:43:18.697147 containerd[1633]: time="2025-10-13T05:43:18.697072147Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:43:18.697147 containerd[1633]: time="2025-10-13T05:43:18.697088618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:43:18.697147 containerd[1633]: 
time="2025-10-13T05:43:18.697101933Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:43:18.697147 containerd[1633]: time="2025-10-13T05:43:18.697119626Z" level=info msg="runtime interface created" Oct 13 05:43:18.697147 containerd[1633]: time="2025-10-13T05:43:18.697125567Z" level=info msg="created NRI interface" Oct 13 05:43:18.697147 containerd[1633]: time="2025-10-13T05:43:18.697134785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:43:18.697147 containerd[1633]: time="2025-10-13T05:43:18.697145034Z" level=info msg="Connect containerd service" Oct 13 05:43:18.697338 containerd[1633]: time="2025-10-13T05:43:18.697168358Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:43:18.697570 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:43:18.698025 containerd[1633]: time="2025-10-13T05:43:18.697960754Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:43:18.700737 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:43:18.704912 systemd[1]: Reached target getty.target - Login Prompts. 
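The `failed to load cni during init` error above is expected on a first boot: containerd's CRI plugin found no network config under `/etc/cni/net.d` (the `confDir` shown in the cri plugin config earlier in this log). For illustration only, a minimal conflist of the kind that would clear this error might look like the following; the `bridge`/`host-local` plugin choice, file name, and subnet are assumptions, not recovered from this system:

```json
{
  "cniVersion": "1.0.0",
  "name": "podnet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.85.0.0/16"
      }
    }
  ]
}
```

Dropped into `/etc/cni/net.d/` (e.g. as `10-podnet.conflist`), a file like this is picked up by the cri plugin's conf syncer without restarting containerd; on a kubeadm cluster this is normally installed by the chosen network add-on rather than written by hand.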
Oct 13 05:43:18.815967 containerd[1633]: time="2025-10-13T05:43:18.815809116Z" level=info msg="Start subscribing containerd event" Oct 13 05:43:18.815967 containerd[1633]: time="2025-10-13T05:43:18.815872555Z" level=info msg="Start recovering state" Oct 13 05:43:18.816130 containerd[1633]: time="2025-10-13T05:43:18.816028498Z" level=info msg="Start event monitor" Oct 13 05:43:18.816130 containerd[1633]: time="2025-10-13T05:43:18.816058935Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:43:18.816130 containerd[1633]: time="2025-10-13T05:43:18.816069385Z" level=info msg="Start streaming server" Oct 13 05:43:18.816130 containerd[1633]: time="2025-10-13T05:43:18.816080525Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:43:18.816130 containerd[1633]: time="2025-10-13T05:43:18.816090274Z" level=info msg="runtime interface starting up..." Oct 13 05:43:18.816130 containerd[1633]: time="2025-10-13T05:43:18.816104410Z" level=info msg="starting plugins..." Oct 13 05:43:18.816130 containerd[1633]: time="2025-10-13T05:43:18.816125420Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:43:18.816300 containerd[1633]: time="2025-10-13T05:43:18.816250795Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:43:18.816374 containerd[1633]: time="2025-10-13T05:43:18.816329923Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:43:18.816601 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:43:18.818675 containerd[1633]: time="2025-10-13T05:43:18.818618447Z" level=info msg="containerd successfully booted in 0.147606s" Oct 13 05:43:19.086723 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:43:19.090007 systemd[1]: Started sshd@0-10.0.0.150:22-10.0.0.1:52420.service - OpenSSH per-connection server daemon (10.0.0.1:52420). 
Oct 13 05:43:19.179817 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 52420 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:43:19.182161 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:43:19.195983 systemd-logind[1609]: New session 1 of user core. Oct 13 05:43:19.197613 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:43:19.201505 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:43:19.239840 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:43:19.244871 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:43:19.255057 systemd-networkd[1533]: eth0: Gained IPv6LL Oct 13 05:43:19.261137 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:43:19.263865 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:43:19.264135 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:43:19.267410 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 05:43:19.270557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:43:19.273995 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:43:19.278702 systemd-logind[1609]: New session c1 of user core. Oct 13 05:43:19.302498 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:43:19.305175 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 05:43:19.305578 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 05:43:19.308473 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Oct 13 05:43:19.415866 systemd[1715]: Queued start job for default target default.target. Oct 13 05:43:19.431345 systemd[1715]: Created slice app.slice - User Application Slice. Oct 13 05:43:19.431376 systemd[1715]: Reached target paths.target - Paths. Oct 13 05:43:19.431432 systemd[1715]: Reached target timers.target - Timers. Oct 13 05:43:19.433064 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:43:19.445033 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:43:19.445150 systemd[1715]: Reached target sockets.target - Sockets. Oct 13 05:43:19.445187 systemd[1715]: Reached target basic.target - Basic System. Oct 13 05:43:19.445240 systemd[1715]: Reached target default.target - Main User Target. Oct 13 05:43:19.445281 systemd[1715]: Startup finished in 157ms. Oct 13 05:43:19.445484 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:43:19.449005 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:43:19.517751 systemd[1]: Started sshd@1-10.0.0.150:22-10.0.0.1:52426.service - OpenSSH per-connection server daemon (10.0.0.1:52426). Oct 13 05:43:19.580148 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 52426 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:43:19.582118 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:43:19.586942 systemd-logind[1609]: New session 2 of user core. Oct 13 05:43:19.599093 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 05:43:19.656075 sshd[1747]: Connection closed by 10.0.0.1 port 52426 Oct 13 05:43:19.656449 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Oct 13 05:43:19.672605 systemd[1]: sshd@1-10.0.0.150:22-10.0.0.1:52426.service: Deactivated successfully. Oct 13 05:43:19.674632 systemd[1]: session-2.scope: Deactivated successfully. 
Oct 13 05:43:19.675386 systemd-logind[1609]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:43:19.678181 systemd[1]: Started sshd@2-10.0.0.150:22-10.0.0.1:52440.service - OpenSSH per-connection server daemon (10.0.0.1:52440). Oct 13 05:43:19.681722 systemd-logind[1609]: Removed session 2. Oct 13 05:43:19.732957 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 52440 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:43:19.734338 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:43:19.739182 systemd-logind[1609]: New session 3 of user core. Oct 13 05:43:19.753094 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:43:19.808942 sshd[1756]: Connection closed by 10.0.0.1 port 52440 Oct 13 05:43:19.809236 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Oct 13 05:43:19.813178 systemd[1]: sshd@2-10.0.0.150:22-10.0.0.1:52440.service: Deactivated successfully. Oct 13 05:43:19.815149 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 05:43:19.816655 systemd-logind[1609]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:43:19.817951 systemd-logind[1609]: Removed session 3. Oct 13 05:43:20.054632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:43:20.057205 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:43:20.059307 systemd[1]: Startup finished in 3.440s (kernel) + 7.750s (initrd) + 5.252s (userspace) = 16.443s. 
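The three phase timings in the `Startup finished` line above can be checked directly; the displayed total differs from the naive sum by a millisecond because systemd rounds each phase independently for display. A minimal sketch:

```python
# Phase timings exactly as printed by systemd in the log above.
kernel, initrd, userspace = 3.440, 7.750, 5.252

# Sum of the rounded per-phase values.
total = kernel + initrd + userspace
print(f"{total:.3f}s")  # 16.442s vs. the reported 16.443s (per-phase rounding)
```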
Oct 13 05:43:20.077270 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:43:20.521597 kubelet[1766]: E1013 05:43:20.521516 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:43:20.525953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:43:20.526181 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:43:20.526601 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 264.8M memory peak. Oct 13 05:43:29.822285 systemd[1]: Started sshd@3-10.0.0.150:22-10.0.0.1:57418.service - OpenSSH per-connection server daemon (10.0.0.1:57418). Oct 13 05:43:29.894434 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 57418 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:43:29.896188 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:43:29.901010 systemd-logind[1609]: New session 4 of user core. Oct 13 05:43:29.911066 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:43:29.965850 sshd[1783]: Connection closed by 10.0.0.1 port 57418 Oct 13 05:43:29.966178 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Oct 13 05:43:29.981479 systemd[1]: sshd@3-10.0.0.150:22-10.0.0.1:57418.service: Deactivated successfully. Oct 13 05:43:29.983282 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:43:29.984065 systemd-logind[1609]: Session 4 logged out. Waiting for processes to exit. 
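The kubelet crash above (and its repeats later in this log) is the usual symptom of a node where `kubeadm init`/`join` has not yet run, so `/var/lib/kubelet/config.yaml` does not exist and systemd keeps scheduling restarts. As an illustrative sketch (the regex is an assumption for alerting purposes, not anything kubelet provides), the missing path can be pulled out of such a journal line:

```python
import re

# Journal line abridged from the kubelet failure above.
line = ('run.go:72] "command failed" err="failed to load kubelet config file, '
        'path: /var/lib/kubelet/config.yaml, error: open /var/lib/kubelet/config.yaml: '
        'no such file or directory"')

# Extract the config path kubelet failed to read.
m = re.search(r'path: ([^,]+),', line)
print(m.group(1))  # /var/lib/kubelet/config.yaml
```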
Oct 13 05:43:29.986719 systemd[1]: Started sshd@4-10.0.0.150:22-10.0.0.1:57434.service - OpenSSH per-connection server daemon (10.0.0.1:57434). Oct 13 05:43:29.987432 systemd-logind[1609]: Removed session 4. Oct 13 05:43:30.047033 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 57434 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:43:30.048422 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:43:30.053067 systemd-logind[1609]: New session 5 of user core. Oct 13 05:43:30.060050 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 13 05:43:30.109793 sshd[1792]: Connection closed by 10.0.0.1 port 57434 Oct 13 05:43:30.110153 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Oct 13 05:43:30.122631 systemd[1]: sshd@4-10.0.0.150:22-10.0.0.1:57434.service: Deactivated successfully. Oct 13 05:43:30.124479 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:43:30.125245 systemd-logind[1609]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:43:30.127771 systemd[1]: Started sshd@5-10.0.0.150:22-10.0.0.1:57450.service - OpenSSH per-connection server daemon (10.0.0.1:57450). Oct 13 05:43:30.128635 systemd-logind[1609]: Removed session 5. Oct 13 05:43:30.199221 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 57450 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:43:30.200790 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:43:30.205715 systemd-logind[1609]: New session 6 of user core. Oct 13 05:43:30.221205 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 13 05:43:30.275877 sshd[1801]: Connection closed by 10.0.0.1 port 57450 Oct 13 05:43:30.276219 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Oct 13 05:43:30.289589 systemd[1]: sshd@5-10.0.0.150:22-10.0.0.1:57450.service: Deactivated successfully. Oct 13 05:43:30.291760 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:43:30.292687 systemd-logind[1609]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:43:30.295966 systemd[1]: Started sshd@6-10.0.0.150:22-10.0.0.1:57454.service - OpenSSH per-connection server daemon (10.0.0.1:57454). Oct 13 05:43:30.296552 systemd-logind[1609]: Removed session 6. Oct 13 05:43:30.353247 sshd[1807]: Accepted publickey for core from 10.0.0.1 port 57454 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:43:30.355078 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:43:30.360024 systemd-logind[1609]: New session 7 of user core. Oct 13 05:43:30.370046 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 05:43:30.554438 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:43:30.554846 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:43:30.556031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:43:30.558792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:43:30.579677 sudo[1811]: pam_unix(sudo:session): session closed for user root Oct 13 05:43:30.581937 sshd[1810]: Connection closed by 10.0.0.1 port 57454 Oct 13 05:43:30.582604 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Oct 13 05:43:30.599446 systemd[1]: sshd@6-10.0.0.150:22-10.0.0.1:57454.service: Deactivated successfully. Oct 13 05:43:30.601743 systemd[1]: session-7.scope: Deactivated successfully. 
Oct 13 05:43:30.602629 systemd-logind[1609]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:43:30.605962 systemd[1]: Started sshd@7-10.0.0.150:22-10.0.0.1:57464.service - OpenSSH per-connection server daemon (10.0.0.1:57464). Oct 13 05:43:30.606836 systemd-logind[1609]: Removed session 7. Oct 13 05:43:30.666235 sshd[1820]: Accepted publickey for core from 10.0.0.1 port 57464 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:43:30.668330 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:43:30.674915 systemd-logind[1609]: New session 8 of user core. Oct 13 05:43:30.685178 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 05:43:30.743779 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:43:30.744206 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:43:30.854122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:43:30.878299 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:43:31.571148 kubelet[1832]: E1013 05:43:31.571075 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:43:31.579022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:43:31.579263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:43:31.579660 systemd[1]: kubelet.service: Consumed 267ms CPU time, 111.5M memory peak. 
Oct 13 05:43:32.003401 sudo[1825]: pam_unix(sudo:session): session closed for user root Oct 13 05:43:32.012420 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:43:32.012763 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:43:32.025132 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:43:32.071526 augenrules[1861]: No rules Oct 13 05:43:32.073648 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:43:32.074056 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:43:32.075346 sudo[1824]: pam_unix(sudo:session): session closed for user root Oct 13 05:43:32.077429 sshd[1823]: Connection closed by 10.0.0.1 port 57464 Oct 13 05:43:32.077746 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Oct 13 05:43:32.088583 systemd[1]: sshd@7-10.0.0.150:22-10.0.0.1:57464.service: Deactivated successfully. Oct 13 05:43:32.090443 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:43:32.091386 systemd-logind[1609]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:43:32.093901 systemd[1]: Started sshd@8-10.0.0.150:22-10.0.0.1:57478.service - OpenSSH per-connection server daemon (10.0.0.1:57478). Oct 13 05:43:32.094603 systemd-logind[1609]: Removed session 8. Oct 13 05:43:32.150535 sshd[1870]: Accepted publickey for core from 10.0.0.1 port 57478 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:43:32.152187 sshd-session[1870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:43:32.157663 systemd-logind[1609]: New session 9 of user core. Oct 13 05:43:32.171167 systemd[1]: Started session-9.scope - Session 9 of User core. 
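The `augenrules[1861]: No rules` message above follows from the earlier sudo command that removed `80-selinux.rules` and `99-default.rules`, leaving `/etc/audit/rules.d/` empty when audit-rules.service reloaded. Purely for illustration (these rules are assumptions, not recovered from this system), a rules.d fragment has this shape:

```
# /etc/audit/rules.d/10-example.rules (hypothetical)
-D
-b 8192
-w /etc/passwd -p wa -k identity
```

augenrules concatenates all `*.rules` files in that directory and feeds the result to auditctl, so with no files present it legitimately reports "No rules" and the service still finishes successfully, as seen above.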
Oct 13 05:43:32.228911 sudo[1874]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:43:32.229311 sudo[1874]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:43:32.646862 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:43:32.661269 (dockerd)[1894]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:43:32.935635 dockerd[1894]: time="2025-10-13T05:43:32.935467583Z" level=info msg="Starting up" Oct 13 05:43:32.936445 dockerd[1894]: time="2025-10-13T05:43:32.936401305Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:43:32.950443 dockerd[1894]: time="2025-10-13T05:43:32.950382695Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:43:33.068139 dockerd[1894]: time="2025-10-13T05:43:33.068086245Z" level=info msg="Loading containers: start." Oct 13 05:43:33.078954 kernel: Initializing XFRM netlink socket Oct 13 05:43:33.361316 systemd-networkd[1533]: docker0: Link UP Oct 13 05:43:33.366344 dockerd[1894]: time="2025-10-13T05:43:33.366283014Z" level=info msg="Loading containers: done." Oct 13 05:43:33.380804 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1874489245-merged.mount: Deactivated successfully. 
Oct 13 05:43:33.382828 dockerd[1894]: time="2025-10-13T05:43:33.382782929Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 13 05:43:33.382897 dockerd[1894]: time="2025-10-13T05:43:33.382869301Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 13 05:43:33.382985 dockerd[1894]: time="2025-10-13T05:43:33.382971082Z" level=info msg="Initializing buildkit"
Oct 13 05:43:33.412852 dockerd[1894]: time="2025-10-13T05:43:33.412796296Z" level=info msg="Completed buildkit initialization"
Oct 13 05:43:33.419292 dockerd[1894]: time="2025-10-13T05:43:33.419257428Z" level=info msg="Daemon has completed initialization"
Oct 13 05:43:33.419392 dockerd[1894]: time="2025-10-13T05:43:33.419344261Z" level=info msg="API listen on /run/docker.sock"
Oct 13 05:43:33.419525 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 13 05:43:34.117516 containerd[1633]: time="2025-10-13T05:43:34.117441391Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Oct 13 05:43:34.931364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603003541.mount: Deactivated successfully.
Oct 13 05:43:36.108345 containerd[1633]: time="2025-10-13T05:43:36.108264317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:36.109186 containerd[1633]: time="2025-10-13T05:43:36.109125262Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Oct 13 05:43:36.110421 containerd[1633]: time="2025-10-13T05:43:36.110378393Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:36.113593 containerd[1633]: time="2025-10-13T05:43:36.113541337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:36.114593 containerd[1633]: time="2025-10-13T05:43:36.114541172Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.997049177s"
Oct 13 05:43:36.114593 containerd[1633]: time="2025-10-13T05:43:36.114579685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Oct 13 05:43:36.115157 containerd[1633]: time="2025-10-13T05:43:36.115124316Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Oct 13 05:43:39.377010 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1063378481 wd_nsec: 1063377903
Oct 13 05:43:39.967124 containerd[1633]: time="2025-10-13T05:43:39.967014628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:39.968001 containerd[1633]: time="2025-10-13T05:43:39.967974398Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Oct 13 05:43:39.969500 containerd[1633]: time="2025-10-13T05:43:39.969455908Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:39.972286 containerd[1633]: time="2025-10-13T05:43:39.972226425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:39.973398 containerd[1633]: time="2025-10-13T05:43:39.973363478Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 3.858213635s"
Oct 13 05:43:39.973398 containerd[1633]: time="2025-10-13T05:43:39.973394246Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Oct 13 05:43:39.973994 containerd[1633]: time="2025-10-13T05:43:39.973948286Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Oct 13 05:43:41.751509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 13 05:43:41.753782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:43:41.981645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:43:42.014251 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:43:42.063946 kubelet[2185]: E1013 05:43:42.063856 2185 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:43:42.068089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:43:42.068332 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:43:42.068744 systemd[1]: kubelet.service: Consumed 241ms CPU time, 110.1M memory peak.
Oct 13 05:43:43.985581 containerd[1633]: time="2025-10-13T05:43:43.985528887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:43.986550 containerd[1633]: time="2025-10-13T05:43:43.986514206Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Oct 13 05:43:43.988014 containerd[1633]: time="2025-10-13T05:43:43.987981098Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:43.991170 containerd[1633]: time="2025-10-13T05:43:43.991111671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:43.992197 containerd[1633]: time="2025-10-13T05:43:43.992139038Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 4.018161418s"
Oct 13 05:43:43.992197 containerd[1633]: time="2025-10-13T05:43:43.992174505Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Oct 13 05:43:43.992701 containerd[1633]: time="2025-10-13T05:43:43.992606375Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Oct 13 05:43:46.512544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168227587.mount: Deactivated successfully.
Oct 13 05:43:47.361832 containerd[1633]: time="2025-10-13T05:43:47.361726755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:47.363780 containerd[1633]: time="2025-10-13T05:43:47.363706870Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Oct 13 05:43:47.366572 containerd[1633]: time="2025-10-13T05:43:47.366504248Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:47.368893 containerd[1633]: time="2025-10-13T05:43:47.368806808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:47.369273 containerd[1633]: time="2025-10-13T05:43:47.369228219Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.376578943s"
Oct 13 05:43:47.369273 containerd[1633]: time="2025-10-13T05:43:47.369259838Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Oct 13 05:43:47.369848 containerd[1633]: time="2025-10-13T05:43:47.369800653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Oct 13 05:43:51.036995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072853290.mount: Deactivated successfully.
Oct 13 05:43:52.205052 containerd[1633]: time="2025-10-13T05:43:52.204980093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:52.205884 containerd[1633]: time="2025-10-13T05:43:52.205817277Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Oct 13 05:43:52.206868 containerd[1633]: time="2025-10-13T05:43:52.206824135Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:52.209724 containerd[1633]: time="2025-10-13T05:43:52.209634354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:52.210645 containerd[1633]: time="2025-10-13T05:43:52.210601327Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.840751371s"
Oct 13 05:43:52.210645 containerd[1633]: time="2025-10-13T05:43:52.210638908Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Oct 13 05:43:52.211348 containerd[1633]: time="2025-10-13T05:43:52.211316115Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 13 05:43:52.251261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 13 05:43:52.253196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:43:52.454359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:43:52.459533 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:43:53.335913 kubelet[2265]: E1013 05:43:53.335831 2265 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:43:53.339876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:43:53.340180 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:43:53.340694 systemd[1]: kubelet.service: Consumed 240ms CPU time, 111M memory peak.
Oct 13 05:43:54.087635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount660206170.mount: Deactivated successfully.
Oct 13 05:43:54.141944 containerd[1633]: time="2025-10-13T05:43:54.141872160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:43:54.142577 containerd[1633]: time="2025-10-13T05:43:54.142527341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 13 05:43:54.143698 containerd[1633]: time="2025-10-13T05:43:54.143666247Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:43:54.145672 containerd[1633]: time="2025-10-13T05:43:54.145634005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 05:43:54.146223 containerd[1633]: time="2025-10-13T05:43:54.146182101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.934834736s"
Oct 13 05:43:54.146223 containerd[1633]: time="2025-10-13T05:43:54.146212179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 13 05:43:54.146677 containerd[1633]: time="2025-10-13T05:43:54.146648102Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Oct 13 05:43:55.219328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929364102.mount: Deactivated successfully.
Oct 13 05:43:57.909659 containerd[1633]: time="2025-10-13T05:43:57.909595838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:57.910563 containerd[1633]: time="2025-10-13T05:43:57.910531349Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Oct 13 05:43:57.911943 containerd[1633]: time="2025-10-13T05:43:57.911885077Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:57.914976 containerd[1633]: time="2025-10-13T05:43:57.914915587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:43:57.916200 containerd[1633]: time="2025-10-13T05:43:57.916119219Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.769438806s"
Oct 13 05:43:57.916200 containerd[1633]: time="2025-10-13T05:43:57.916188161Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Oct 13 05:43:59.913855 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:43:59.914043 systemd[1]: kubelet.service: Consumed 240ms CPU time, 111M memory peak.
Oct 13 05:43:59.916445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:43:59.945115 systemd[1]: Reload requested from client PID 2362 ('systemctl') (unit session-9.scope)...
Oct 13 05:43:59.945144 systemd[1]: Reloading...
Oct 13 05:44:00.145959 zram_generator::config[2405]: No configuration found.
Oct 13 05:44:01.171553 systemd[1]: Reloading finished in 1226 ms.
Oct 13 05:44:01.245750 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 13 05:44:01.245868 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 13 05:44:01.246195 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:44:01.246238 systemd[1]: kubelet.service: Consumed 172ms CPU time, 98.4M memory peak.
Oct 13 05:44:01.247910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:44:01.434022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:44:01.448249 (kubelet)[2453]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 05:44:01.517573 kubelet[2453]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 05:44:01.517573 kubelet[2453]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 05:44:01.517573 kubelet[2453]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 05:44:01.518029 kubelet[2453]: I1013 05:44:01.517618 2453 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 13 05:44:01.772963 kubelet[2453]: I1013 05:44:01.772898 2453 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Oct 13 05:44:01.772963 kubelet[2453]: I1013 05:44:01.772949 2453 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 05:44:01.773288 kubelet[2453]: I1013 05:44:01.773266 2453 server.go:954] "Client rotation is on, will bootstrap in background"
Oct 13 05:44:01.797740 kubelet[2453]: I1013 05:44:01.797689 2453 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 05:44:01.798097 kubelet[2453]: E1013 05:44:01.798025 2453 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Oct 13 05:44:01.807780 kubelet[2453]: I1013 05:44:01.807734 2453 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 13 05:44:01.814175 kubelet[2453]: I1013 05:44:01.814108 2453 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 13 05:44:01.815906 kubelet[2453]: I1013 05:44:01.815849 2453 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 13 05:44:01.816115 kubelet[2453]: I1013 05:44:01.815885 2453 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 13 05:44:01.816115 kubelet[2453]: I1013 05:44:01.816108 2453 topology_manager.go:138] "Creating topology manager with none policy"
Oct 13 05:44:01.816115 kubelet[2453]: I1013 05:44:01.816117 2453 container_manager_linux.go:304] "Creating device plugin manager"
Oct 13 05:44:01.816443 kubelet[2453]: I1013 05:44:01.816296 2453 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 05:44:01.818742 kubelet[2453]: I1013 05:44:01.818696 2453 kubelet.go:446] "Attempting to sync node with API server"
Oct 13 05:44:01.818742 kubelet[2453]: I1013 05:44:01.818725 2453 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 13 05:44:01.818742 kubelet[2453]: I1013 05:44:01.818746 2453 kubelet.go:352] "Adding apiserver pod source"
Oct 13 05:44:01.818875 kubelet[2453]: I1013 05:44:01.818759 2453 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 13 05:44:01.821061 kubelet[2453]: I1013 05:44:01.820969 2453 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 13 05:44:01.821415 kubelet[2453]: I1013 05:44:01.821352 2453 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 13 05:44:01.822130 kubelet[2453]: W1013 05:44:01.822075 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Oct 13 05:44:01.822130 kubelet[2453]: E1013 05:44:01.822125 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Oct 13 05:44:01.822444 kubelet[2453]: W1013 05:44:01.822398 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Oct 13 05:44:01.822444 kubelet[2453]: E1013 05:44:01.822432 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Oct 13 05:44:01.822661 kubelet[2453]: W1013 05:44:01.822631 2453 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 13 05:44:01.825395 kubelet[2453]: I1013 05:44:01.825355 2453 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 13 05:44:01.825395 kubelet[2453]: I1013 05:44:01.825389 2453 server.go:1287] "Started kubelet"
Oct 13 05:44:01.825632 kubelet[2453]: I1013 05:44:01.825584 2453 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Oct 13 05:44:01.826490 kubelet[2453]: I1013 05:44:01.826440 2453 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 13 05:44:01.827903 kubelet[2453]: I1013 05:44:01.826760 2453 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 13 05:44:01.827903 kubelet[2453]: I1013 05:44:01.827232 2453 server.go:479] "Adding debug handlers to kubelet server"
Oct 13 05:44:01.827903 kubelet[2453]: I1013 05:44:01.827555 2453 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 13 05:44:01.827903 kubelet[2453]: I1013 05:44:01.827572 2453 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 13 05:44:01.827903 kubelet[2453]: I1013 05:44:01.827802 2453 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 13 05:44:01.830328 kubelet[2453]: I1013 05:44:01.830285 2453 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 13 05:44:01.830402 kubelet[2453]: I1013 05:44:01.830361 2453 reconciler.go:26] "Reconciler: start to sync state"
Oct 13 05:44:01.831985 kubelet[2453]: W1013 05:44:01.831364 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Oct 13 05:44:01.831985 kubelet[2453]: E1013 05:44:01.831425 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Oct 13 05:44:01.832834 kubelet[2453]: E1013 05:44:01.832226 2453 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 13 05:44:01.833178 kubelet[2453]: E1013 05:44:01.831684 2453 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df6aa91f535e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:44:01.825371623 +0000 UTC m=+0.373404424,LastTimestamp:2025-10-13 05:44:01.825371623 +0000 UTC m=+0.373404424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 13 05:44:01.833351 kubelet[2453]: E1013 05:44:01.833333 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:01.834012 kubelet[2453]: E1013 05:44:01.833954 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="200ms"
Oct 13 05:44:01.834321 kubelet[2453]: I1013 05:44:01.834292 2453 factory.go:221] Registration of the systemd container factory successfully
Oct 13 05:44:01.834439 kubelet[2453]: I1013 05:44:01.834409 2453 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 13 05:44:01.836504 kubelet[2453]: I1013 05:44:01.836467 2453 factory.go:221] Registration of the containerd container factory successfully
Oct 13 05:44:01.853901 kubelet[2453]: I1013 05:44:01.853860 2453 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 13 05:44:01.853901 kubelet[2453]: I1013 05:44:01.853881 2453 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 13 05:44:01.853901 kubelet[2453]: I1013 05:44:01.853897 2453 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 05:44:01.854357 kubelet[2453]: I1013 05:44:01.854306 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 13 05:44:01.856182 kubelet[2453]: I1013 05:44:01.856148 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 13 05:44:01.856247 kubelet[2453]: I1013 05:44:01.856194 2453 status_manager.go:227] "Starting to sync pod status with apiserver"
Oct 13 05:44:01.856247 kubelet[2453]: I1013 05:44:01.856231 2453 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 13 05:44:01.856313 kubelet[2453]: I1013 05:44:01.856248 2453 kubelet.go:2382] "Starting kubelet main sync loop"
Oct 13 05:44:01.856382 kubelet[2453]: E1013 05:44:01.856330 2453 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 13 05:44:01.857002 kubelet[2453]: W1013 05:44:01.856863 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Oct 13 05:44:01.857002 kubelet[2453]: E1013 05:44:01.856902 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Oct 13 05:44:01.933756 kubelet[2453]: E1013 05:44:01.933643 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:01.956900 kubelet[2453]: E1013 05:44:01.956863 2453 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 13 05:44:02.034577 kubelet[2453]: E1013 05:44:02.034411 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:02.034856 kubelet[2453]: E1013 05:44:02.034792 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="400ms"
Oct 13 05:44:02.135521 kubelet[2453]: E1013 05:44:02.135458 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:02.157782 kubelet[2453]: E1013 05:44:02.157702 2453 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 13 05:44:02.236677 kubelet[2453]: E1013 05:44:02.236609 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:02.337118 kubelet[2453]: E1013 05:44:02.336910 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:02.436264 kubelet[2453]: E1013 05:44:02.436196 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="800ms"
Oct 13 05:44:02.437162 kubelet[2453]: E1013 05:44:02.437137 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:02.538038 kubelet[2453]: E1013 05:44:02.537953 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:02.558219 kubelet[2453]: E1013 05:44:02.558148 2453 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 13 05:44:02.639118 kubelet[2453]: E1013 05:44:02.638972 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:02.739892 kubelet[2453]: E1013 05:44:02.739830 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:02.840749 kubelet[2453]: E1013 05:44:02.840687 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:02.846322 kubelet[2453]: W1013 05:44:02.846265 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Oct 13 05:44:02.846394 kubelet[2453]: E1013 05:44:02.846324 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Oct 13 05:44:02.864782 kubelet[2453]: W1013 05:44:02.864758 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Oct 13 05:44:02.864845 kubelet[2453]: E1013 05:44:02.864787 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Oct 13 05:44:02.923702 kubelet[2453]: W1013 05:44:02.923560 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Oct 13 05:44:02.923702 kubelet[2453]: E1013 05:44:02.923616 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Oct 13 05:44:02.941582 kubelet[2453]: E1013 05:44:02.941494 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:03.042504 kubelet[2453]: E1013 05:44:03.042423 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:03.143291 kubelet[2453]: E1013 05:44:03.143217 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:03.163349 kubelet[2453]: I1013 05:44:03.163307 2453 policy_none.go:49] "None policy: Start"
Oct 13 05:44:03.163349 kubelet[2453]: I1013 05:44:03.163337 2453 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 13 05:44:03.163349 kubelet[2453]: I1013 05:44:03.163357 2453 state_mem.go:35] "Initializing new in-memory state store"
Oct 13 05:44:03.230363 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 13 05:44:03.237465 kubelet[2453]: E1013 05:44:03.237427 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="1.6s"
Oct 13 05:44:03.243357 kubelet[2453]: E1013 05:44:03.243330 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 13 05:44:03.247602 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 13 05:44:03.251461 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 13 05:44:03.259155 kubelet[2453]: I1013 05:44:03.259112 2453 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 13 05:44:03.259376 kubelet[2453]: I1013 05:44:03.259357 2453 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 13 05:44:03.259440 kubelet[2453]: I1013 05:44:03.259371 2453 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 13 05:44:03.259869 kubelet[2453]: I1013 05:44:03.259590 2453 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 13 05:44:03.260333 kubelet[2453]: E1013 05:44:03.260315 2453 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Oct 13 05:44:03.260482 kubelet[2453]: E1013 05:44:03.260448 2453 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:44:03.360287 kubelet[2453]: I1013 05:44:03.360228 2453 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:44:03.360636 kubelet[2453]: E1013 05:44:03.360608 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Oct 13 05:44:03.364677 kubelet[2453]: W1013 05:44:03.364623 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Oct 13 05:44:03.364785 kubelet[2453]: E1013 05:44:03.364684 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:44:03.366714 systemd[1]: Created slice kubepods-burstable-pod4baf389f72c30b53ecbae221d8b25b00.slice - libcontainer container kubepods-burstable-pod4baf389f72c30b53ecbae221d8b25b00.slice. Oct 13 05:44:03.391854 kubelet[2453]: E1013 05:44:03.391794 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:44:03.395079 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. 
Oct 13 05:44:03.411505 kubelet[2453]: E1013 05:44:03.411444 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:44:03.414849 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Oct 13 05:44:03.416864 kubelet[2453]: E1013 05:44:03.416840 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:44:03.441399 kubelet[2453]: I1013 05:44:03.441355 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4baf389f72c30b53ecbae221d8b25b00-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4baf389f72c30b53ecbae221d8b25b00\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:03.441399 kubelet[2453]: I1013 05:44:03.441390 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:03.441399 kubelet[2453]: I1013 05:44:03.441410 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:03.441604 kubelet[2453]: I1013 05:44:03.441429 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:03.441604 kubelet[2453]: I1013 05:44:03.441459 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:44:03.441604 kubelet[2453]: I1013 05:44:03.441485 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4baf389f72c30b53ecbae221d8b25b00-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4baf389f72c30b53ecbae221d8b25b00\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:03.441604 kubelet[2453]: I1013 05:44:03.441509 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4baf389f72c30b53ecbae221d8b25b00-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4baf389f72c30b53ecbae221d8b25b00\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:03.441604 kubelet[2453]: I1013 05:44:03.441530 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:03.441742 kubelet[2453]: I1013 05:44:03.441550 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:03.562693 kubelet[2453]: I1013 05:44:03.562643 2453 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:44:03.563219 kubelet[2453]: E1013 05:44:03.563011 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Oct 13 05:44:03.616442 update_engine[1610]: I20251013 05:44:03.616362 1610 update_attempter.cc:509] Updating boot flags... Oct 13 05:44:03.693484 kubelet[2453]: E1013 05:44:03.693440 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:03.694507 containerd[1633]: time="2025-10-13T05:44:03.694458051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4baf389f72c30b53ecbae221d8b25b00,Namespace:kube-system,Attempt:0,}" Oct 13 05:44:03.713024 kubelet[2453]: E1013 05:44:03.712582 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:03.713145 containerd[1633]: time="2025-10-13T05:44:03.713117662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 13 05:44:03.717715 kubelet[2453]: E1013 05:44:03.717676 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:03.718086 containerd[1633]: 
time="2025-10-13T05:44:03.718055269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 13 05:44:03.922800 kubelet[2453]: E1013 05:44:03.922663 2453 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:44:03.933730 containerd[1633]: time="2025-10-13T05:44:03.933666493Z" level=info msg="connecting to shim 797a6c8ed1de506a04b045c7342d2cc4e5f5215cfa31c3738519803d09580866" address="unix:///run/containerd/s/6cabdc4b6263aac348b0c146d84fc433ce99e4041740933a51573e254f138578" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:03.954550 containerd[1633]: time="2025-10-13T05:44:03.954216406Z" level=info msg="connecting to shim 2cc0711a8d3d830932488d48ced307fdfb46ba825ef7ad88eacbf2dcbab46e40" address="unix:///run/containerd/s/de54740d101ade28b21142daed20018f8bb7ec4d518b1fbca351e1a2d0d078e3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:03.962002 containerd[1633]: time="2025-10-13T05:44:03.961945153Z" level=info msg="connecting to shim d843c404cad07671aaa5415d3a4ae76829af97f7e13643c6e49abe108780e681" address="unix:///run/containerd/s/89a78e50620d2506ccb3e0d9248ce16fbb8277fc485b8303b7e6183c5cd9f967" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:03.966229 kubelet[2453]: I1013 05:44:03.966194 2453 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:44:03.966576 kubelet[2453]: E1013 05:44:03.966545 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" 
Oct 13 05:44:03.987174 systemd[1]: Started cri-containerd-2cc0711a8d3d830932488d48ced307fdfb46ba825ef7ad88eacbf2dcbab46e40.scope - libcontainer container 2cc0711a8d3d830932488d48ced307fdfb46ba825ef7ad88eacbf2dcbab46e40. Oct 13 05:44:03.989528 systemd[1]: Started cri-containerd-797a6c8ed1de506a04b045c7342d2cc4e5f5215cfa31c3738519803d09580866.scope - libcontainer container 797a6c8ed1de506a04b045c7342d2cc4e5f5215cfa31c3738519803d09580866. Oct 13 05:44:04.012062 systemd[1]: Started cri-containerd-d843c404cad07671aaa5415d3a4ae76829af97f7e13643c6e49abe108780e681.scope - libcontainer container d843c404cad07671aaa5415d3a4ae76829af97f7e13643c6e49abe108780e681. Oct 13 05:44:04.114137 containerd[1633]: time="2025-10-13T05:44:04.114068085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cc0711a8d3d830932488d48ced307fdfb46ba825ef7ad88eacbf2dcbab46e40\"" Oct 13 05:44:04.115013 kubelet[2453]: E1013 05:44:04.114988 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:04.116375 containerd[1633]: time="2025-10-13T05:44:04.116342543Z" level=info msg="CreateContainer within sandbox \"2cc0711a8d3d830932488d48ced307fdfb46ba825ef7ad88eacbf2dcbab46e40\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:44:04.117988 containerd[1633]: time="2025-10-13T05:44:04.117954766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4baf389f72c30b53ecbae221d8b25b00,Namespace:kube-system,Attempt:0,} returns sandbox id \"797a6c8ed1de506a04b045c7342d2cc4e5f5215cfa31c3738519803d09580866\"" Oct 13 05:44:04.118379 kubelet[2453]: E1013 05:44:04.118361 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:04.119507 containerd[1633]: time="2025-10-13T05:44:04.119477670Z" level=info msg="CreateContainer within sandbox \"797a6c8ed1de506a04b045c7342d2cc4e5f5215cfa31c3738519803d09580866\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:44:04.119575 containerd[1633]: time="2025-10-13T05:44:04.119497448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"d843c404cad07671aaa5415d3a4ae76829af97f7e13643c6e49abe108780e681\"" Oct 13 05:44:04.119993 kubelet[2453]: E1013 05:44:04.119971 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:04.121106 containerd[1633]: time="2025-10-13T05:44:04.121085596Z" level=info msg="CreateContainer within sandbox \"d843c404cad07671aaa5415d3a4ae76829af97f7e13643c6e49abe108780e681\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:44:04.139822 containerd[1633]: time="2025-10-13T05:44:04.139773508Z" level=info msg="Container 9d3084d9c524e63a296d06c9f0452651f2a42d02d213653fd1b5eaee620ef6cd: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:04.143533 containerd[1633]: time="2025-10-13T05:44:04.143505817Z" level=info msg="Container 5715975edfe2aced892e5494c8130779b5176ac457b92ecdbffb57ccb22926f5: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:04.144301 containerd[1633]: time="2025-10-13T05:44:04.144272510Z" level=info msg="Container b42c3103206fb3b3db034c50e17dec4dfc531fa681ea40b670b56c2eee58e61f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:04.152295 containerd[1633]: time="2025-10-13T05:44:04.152260066Z" level=info msg="CreateContainer within sandbox \"797a6c8ed1de506a04b045c7342d2cc4e5f5215cfa31c3738519803d09580866\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5715975edfe2aced892e5494c8130779b5176ac457b92ecdbffb57ccb22926f5\"" Oct 13 05:44:04.153111 containerd[1633]: time="2025-10-13T05:44:04.153081232Z" level=info msg="CreateContainer within sandbox \"2cc0711a8d3d830932488d48ced307fdfb46ba825ef7ad88eacbf2dcbab46e40\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9d3084d9c524e63a296d06c9f0452651f2a42d02d213653fd1b5eaee620ef6cd\"" Oct 13 05:44:04.153239 containerd[1633]: time="2025-10-13T05:44:04.153219574Z" level=info msg="StartContainer for \"5715975edfe2aced892e5494c8130779b5176ac457b92ecdbffb57ccb22926f5\"" Oct 13 05:44:04.154089 containerd[1633]: time="2025-10-13T05:44:04.154063302Z" level=info msg="StartContainer for \"9d3084d9c524e63a296d06c9f0452651f2a42d02d213653fd1b5eaee620ef6cd\"" Oct 13 05:44:04.154177 containerd[1633]: time="2025-10-13T05:44:04.154156919Z" level=info msg="connecting to shim 5715975edfe2aced892e5494c8130779b5176ac457b92ecdbffb57ccb22926f5" address="unix:///run/containerd/s/6cabdc4b6263aac348b0c146d84fc433ce99e4041740933a51573e254f138578" protocol=ttrpc version=3 Oct 13 05:44:04.155129 containerd[1633]: time="2025-10-13T05:44:04.155105997Z" level=info msg="connecting to shim 9d3084d9c524e63a296d06c9f0452651f2a42d02d213653fd1b5eaee620ef6cd" address="unix:///run/containerd/s/de54740d101ade28b21142daed20018f8bb7ec4d518b1fbca351e1a2d0d078e3" protocol=ttrpc version=3 Oct 13 05:44:04.156552 containerd[1633]: time="2025-10-13T05:44:04.156521898Z" level=info msg="CreateContainer within sandbox \"d843c404cad07671aaa5415d3a4ae76829af97f7e13643c6e49abe108780e681\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b42c3103206fb3b3db034c50e17dec4dfc531fa681ea40b670b56c2eee58e61f\"" Oct 13 05:44:04.156852 containerd[1633]: time="2025-10-13T05:44:04.156821265Z" level=info msg="StartContainer for \"b42c3103206fb3b3db034c50e17dec4dfc531fa681ea40b670b56c2eee58e61f\"" Oct 
13 05:44:04.157809 containerd[1633]: time="2025-10-13T05:44:04.157777887Z" level=info msg="connecting to shim b42c3103206fb3b3db034c50e17dec4dfc531fa681ea40b670b56c2eee58e61f" address="unix:///run/containerd/s/89a78e50620d2506ccb3e0d9248ce16fbb8277fc485b8303b7e6183c5cd9f967" protocol=ttrpc version=3 Oct 13 05:44:04.181068 systemd[1]: Started cri-containerd-5715975edfe2aced892e5494c8130779b5176ac457b92ecdbffb57ccb22926f5.scope - libcontainer container 5715975edfe2aced892e5494c8130779b5176ac457b92ecdbffb57ccb22926f5. Oct 13 05:44:04.182661 systemd[1]: Started cri-containerd-9d3084d9c524e63a296d06c9f0452651f2a42d02d213653fd1b5eaee620ef6cd.scope - libcontainer container 9d3084d9c524e63a296d06c9f0452651f2a42d02d213653fd1b5eaee620ef6cd. Oct 13 05:44:04.187272 systemd[1]: Started cri-containerd-b42c3103206fb3b3db034c50e17dec4dfc531fa681ea40b670b56c2eee58e61f.scope - libcontainer container b42c3103206fb3b3db034c50e17dec4dfc531fa681ea40b670b56c2eee58e61f. Oct 13 05:44:04.768702 kubelet[2453]: I1013 05:44:04.768402 2453 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:44:05.049750 containerd[1633]: time="2025-10-13T05:44:05.047751516Z" level=info msg="StartContainer for \"b42c3103206fb3b3db034c50e17dec4dfc531fa681ea40b670b56c2eee58e61f\" returns successfully" Oct 13 05:44:05.049750 containerd[1633]: time="2025-10-13T05:44:05.049338671Z" level=info msg="StartContainer for \"9d3084d9c524e63a296d06c9f0452651f2a42d02d213653fd1b5eaee620ef6cd\" returns successfully" Oct 13 05:44:05.050726 containerd[1633]: time="2025-10-13T05:44:05.050505168Z" level=info msg="StartContainer for \"5715975edfe2aced892e5494c8130779b5176ac457b92ecdbffb57ccb22926f5\" returns successfully" Oct 13 05:44:05.058836 kubelet[2453]: E1013 05:44:05.058774 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:44:05.058984 kubelet[2453]: E1013 05:44:05.058960 2453 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:05.059416 kubelet[2453]: E1013 05:44:05.059279 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:44:05.059496 kubelet[2453]: E1013 05:44:05.059483 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:05.958666 kubelet[2453]: E1013 05:44:05.958611 2453 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:44:06.015862 kubelet[2453]: I1013 05:44:06.015816 2453 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:44:06.033764 kubelet[2453]: I1013 05:44:06.033701 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:06.039326 kubelet[2453]: E1013 05:44:06.039271 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:06.039326 kubelet[2453]: I1013 05:44:06.039318 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:06.042656 kubelet[2453]: E1013 05:44:06.042323 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:06.042656 kubelet[2453]: I1013 05:44:06.042360 2453 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Oct 13 05:44:06.044985 kubelet[2453]: E1013 05:44:06.044905 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 05:44:06.059639 kubelet[2453]: I1013 05:44:06.059611 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:44:06.059713 kubelet[2453]: I1013 05:44:06.059661 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:06.060000 kubelet[2453]: I1013 05:44:06.059973 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:06.061390 kubelet[2453]: E1013 05:44:06.061336 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 05:44:06.061655 kubelet[2453]: E1013 05:44:06.061624 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:06.063637 kubelet[2453]: E1013 05:44:06.063299 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:06.063637 kubelet[2453]: E1013 05:44:06.063568 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:06.063637 kubelet[2453]: E1013 05:44:06.063608 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:06.063971 kubelet[2453]: E1013 05:44:06.063800 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:06.822621 kubelet[2453]: I1013 05:44:06.822568 2453 apiserver.go:52] "Watching apiserver" Oct 13 05:44:06.830659 kubelet[2453]: I1013 05:44:06.830607 2453 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:44:07.883336 systemd[1]: Reload requested from client PID 2739 ('systemctl') (unit session-9.scope)... Oct 13 05:44:07.883353 systemd[1]: Reloading... Oct 13 05:44:07.959950 zram_generator::config[2783]: No configuration found. Oct 13 05:44:08.219845 systemd[1]: Reloading finished in 336 ms. Oct 13 05:44:08.246661 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:44:08.272351 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:44:08.272791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:44:08.272850 systemd[1]: kubelet.service: Consumed 905ms CPU time, 132.3M memory peak. Oct 13 05:44:08.275064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:44:08.497991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:44:08.524235 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:44:08.569297 kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:44:08.569297 kubelet[2828]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:44:08.569297 kubelet[2828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:44:08.569734 kubelet[2828]: I1013 05:44:08.569383 2828 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:44:08.577787 kubelet[2828]: I1013 05:44:08.577734 2828 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 05:44:08.577787 kubelet[2828]: I1013 05:44:08.577757 2828 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:44:08.578010 kubelet[2828]: I1013 05:44:08.577985 2828 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 05:44:08.579210 kubelet[2828]: I1013 05:44:08.579180 2828 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 13 05:44:08.582305 kubelet[2828]: I1013 05:44:08.582275 2828 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:44:08.586333 kubelet[2828]: I1013 05:44:08.586311 2828 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:44:08.591862 kubelet[2828]: I1013 05:44:08.591826 2828 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 05:44:08.592099 kubelet[2828]: I1013 05:44:08.592062 2828 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:44:08.592243 kubelet[2828]: I1013 05:44:08.592087 2828 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:44:08.592369 kubelet[2828]: I1013 05:44:08.592250 2828 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 13 05:44:08.592369 kubelet[2828]: I1013 05:44:08.592259 2828 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 05:44:08.592369 kubelet[2828]: I1013 05:44:08.592308 2828 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:44:08.592471 kubelet[2828]: I1013 05:44:08.592449 2828 kubelet.go:446] "Attempting to sync node with API server" Oct 13 05:44:08.592529 kubelet[2828]: I1013 05:44:08.592499 2828 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:44:08.592529 kubelet[2828]: I1013 05:44:08.592527 2828 kubelet.go:352] "Adding apiserver pod source" Oct 13 05:44:08.592604 kubelet[2828]: I1013 05:44:08.592537 2828 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:44:08.593564 kubelet[2828]: I1013 05:44:08.593458 2828 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:44:08.595347 kubelet[2828]: I1013 05:44:08.595309 2828 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 05:44:08.595893 kubelet[2828]: I1013 05:44:08.595779 2828 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:44:08.595893 kubelet[2828]: I1013 05:44:08.595818 2828 server.go:1287] "Started kubelet" Oct 13 05:44:08.597292 kubelet[2828]: I1013 05:44:08.597229 2828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:44:08.597529 kubelet[2828]: I1013 05:44:08.597469 2828 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:44:08.597663 kubelet[2828]: I1013 05:44:08.597639 2828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:44:08.597981 kubelet[2828]: I1013 05:44:08.597966 2828 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:44:08.598943 kubelet[2828]: I1013 05:44:08.598663 2828 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:44:08.600554 kubelet[2828]: I1013 05:44:08.598667 2828 server.go:479] "Adding debug handlers to kubelet server" Oct 13 05:44:08.604153 kubelet[2828]: E1013 05:44:08.603992 2828 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:44:08.604383 kubelet[2828]: E1013 05:44:08.604368 2828 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:44:08.604461 kubelet[2828]: I1013 05:44:08.604451 2828 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:44:08.604664 kubelet[2828]: I1013 05:44:08.604649 2828 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:44:08.604877 kubelet[2828]: I1013 05:44:08.604861 2828 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:44:08.605512 kubelet[2828]: I1013 05:44:08.605462 2828 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:44:08.607490 kubelet[2828]: I1013 05:44:08.606521 2828 factory.go:221] Registration of the containerd container factory successfully Oct 13 05:44:08.607490 kubelet[2828]: I1013 05:44:08.606541 2828 factory.go:221] Registration of the systemd container factory successfully Oct 13 05:44:08.614050 kubelet[2828]: I1013 05:44:08.614005 2828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 05:44:08.617560 kubelet[2828]: I1013 05:44:08.616446 2828 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:44:08.617560 kubelet[2828]: I1013 05:44:08.616511 2828 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 05:44:08.617560 kubelet[2828]: I1013 05:44:08.616628 2828 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:44:08.617560 kubelet[2828]: I1013 05:44:08.616636 2828 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 05:44:08.617560 kubelet[2828]: E1013 05:44:08.616684 2828 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:44:08.640052 kubelet[2828]: I1013 05:44:08.640018 2828 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:44:08.640052 kubelet[2828]: I1013 05:44:08.640040 2828 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:44:08.640052 kubelet[2828]: I1013 05:44:08.640064 2828 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:44:08.640276 kubelet[2828]: I1013 05:44:08.640258 2828 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:44:08.640300 kubelet[2828]: I1013 05:44:08.640274 2828 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:44:08.640300 kubelet[2828]: I1013 05:44:08.640295 2828 policy_none.go:49] "None policy: Start" Oct 13 05:44:08.640358 kubelet[2828]: I1013 05:44:08.640308 2828 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:44:08.640358 kubelet[2828]: I1013 05:44:08.640320 2828 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:44:08.640429 kubelet[2828]: I1013 05:44:08.640414 2828 state_mem.go:75] "Updated machine memory state" Oct 13 05:44:08.644854 kubelet[2828]: I1013 05:44:08.644707 2828 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 05:44:08.644985 kubelet[2828]: I1013 
05:44:08.644954 2828 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:44:08.645022 kubelet[2828]: I1013 05:44:08.644974 2828 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:44:08.645261 kubelet[2828]: I1013 05:44:08.645226 2828 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:44:08.646803 kubelet[2828]: E1013 05:44:08.646496 2828 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:44:08.717940 kubelet[2828]: I1013 05:44:08.717885 2828 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:44:08.717940 kubelet[2828]: I1013 05:44:08.717915 2828 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:08.718149 kubelet[2828]: I1013 05:44:08.717960 2828 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:08.749664 kubelet[2828]: I1013 05:44:08.749552 2828 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:44:08.873440 kubelet[2828]: I1013 05:44:08.873377 2828 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:44:08.873590 kubelet[2828]: I1013 05:44:08.873478 2828 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:44:08.905972 kubelet[2828]: I1013 05:44:08.905900 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:08.905972 kubelet[2828]: I1013 05:44:08.905976 2828 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:08.906187 kubelet[2828]: I1013 05:44:08.906023 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4baf389f72c30b53ecbae221d8b25b00-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4baf389f72c30b53ecbae221d8b25b00\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:08.906187 kubelet[2828]: I1013 05:44:08.906047 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:08.906187 kubelet[2828]: I1013 05:44:08.906070 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:08.906187 kubelet[2828]: I1013 05:44:08.906089 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:44:08.906187 kubelet[2828]: I1013 05:44:08.906108 2828 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4baf389f72c30b53ecbae221d8b25b00-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4baf389f72c30b53ecbae221d8b25b00\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:08.906350 kubelet[2828]: I1013 05:44:08.906126 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4baf389f72c30b53ecbae221d8b25b00-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4baf389f72c30b53ecbae221d8b25b00\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:44:08.906350 kubelet[2828]: I1013 05:44:08.906201 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:44:09.171679 kubelet[2828]: E1013 05:44:09.171636 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:09.171679 kubelet[2828]: E1013 05:44:09.171713 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:09.171941 kubelet[2828]: E1013 05:44:09.171734 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:09.593197 kubelet[2828]: I1013 05:44:09.593147 2828 apiserver.go:52] "Watching apiserver" Oct 13 05:44:09.605465 kubelet[2828]: I1013 
05:44:09.605428 2828 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:44:09.627700 kubelet[2828]: E1013 05:44:09.627678 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:09.627797 kubelet[2828]: E1013 05:44:09.627679 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:09.627883 kubelet[2828]: E1013 05:44:09.627869 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:09.803072 kubelet[2828]: I1013 05:44:09.802866 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.802853118 podStartE2EDuration="1.802853118s" podCreationTimestamp="2025-10-13 05:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:44:09.802646047 +0000 UTC m=+1.268938308" watchObservedRunningTime="2025-10-13 05:44:09.802853118 +0000 UTC m=+1.269145379" Oct 13 05:44:10.195895 kubelet[2828]: I1013 05:44:10.195401 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.194477804 podStartE2EDuration="2.194477804s" podCreationTimestamp="2025-10-13 05:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:44:09.840199775 +0000 UTC m=+1.306492036" watchObservedRunningTime="2025-10-13 05:44:10.194477804 +0000 UTC m=+1.660770065" Oct 13 05:44:10.212371 kubelet[2828]: I1013 
05:44:10.212280 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.212262314 podStartE2EDuration="2.212262314s" podCreationTimestamp="2025-10-13 05:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:44:10.196769581 +0000 UTC m=+1.663061862" watchObservedRunningTime="2025-10-13 05:44:10.212262314 +0000 UTC m=+1.678554575" Oct 13 05:44:10.628853 kubelet[2828]: E1013 05:44:10.628751 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:10.629471 kubelet[2828]: E1013 05:44:10.628886 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:12.091660 kubelet[2828]: E1013 05:44:12.091610 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:14.622173 kubelet[2828]: E1013 05:44:14.622125 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:14.634361 kubelet[2828]: E1013 05:44:14.634323 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:15.384584 systemd[1]: Created slice kubepods-besteffort-pod6a497d47_9e32_41a2_bf9e_d53eadd7da91.slice - libcontainer container kubepods-besteffort-pod6a497d47_9e32_41a2_bf9e_d53eadd7da91.slice. 
Oct 13 05:44:15.433389 kubelet[2828]: I1013 05:44:15.433350 2828 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:44:15.433837 containerd[1633]: time="2025-10-13T05:44:15.433782706Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:44:15.434258 kubelet[2828]: I1013 05:44:15.433967 2828 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:44:15.449268 kubelet[2828]: I1013 05:44:15.449207 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmrf7\" (UniqueName: \"kubernetes.io/projected/6a497d47-9e32-41a2-bf9e-d53eadd7da91-kube-api-access-dmrf7\") pod \"kube-proxy-942sf\" (UID: \"6a497d47-9e32-41a2-bf9e-d53eadd7da91\") " pod="kube-system/kube-proxy-942sf" Oct 13 05:44:15.449268 kubelet[2828]: I1013 05:44:15.449250 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a497d47-9e32-41a2-bf9e-d53eadd7da91-lib-modules\") pod \"kube-proxy-942sf\" (UID: \"6a497d47-9e32-41a2-bf9e-d53eadd7da91\") " pod="kube-system/kube-proxy-942sf" Oct 13 05:44:15.449367 kubelet[2828]: I1013 05:44:15.449274 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a497d47-9e32-41a2-bf9e-d53eadd7da91-kube-proxy\") pod \"kube-proxy-942sf\" (UID: \"6a497d47-9e32-41a2-bf9e-d53eadd7da91\") " pod="kube-system/kube-proxy-942sf" Oct 13 05:44:15.449367 kubelet[2828]: I1013 05:44:15.449327 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a497d47-9e32-41a2-bf9e-d53eadd7da91-xtables-lock\") pod \"kube-proxy-942sf\" (UID: \"6a497d47-9e32-41a2-bf9e-d53eadd7da91\") 
" pod="kube-system/kube-proxy-942sf" Oct 13 05:44:15.636778 kubelet[2828]: E1013 05:44:15.636623 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:15.694960 kubelet[2828]: E1013 05:44:15.694897 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:15.695557 containerd[1633]: time="2025-10-13T05:44:15.695506473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-942sf,Uid:6a497d47-9e32-41a2-bf9e-d53eadd7da91,Namespace:kube-system,Attempt:0,}" Oct 13 05:44:16.328967 containerd[1633]: time="2025-10-13T05:44:16.326702707Z" level=info msg="connecting to shim 76294742eaa69c7f777eed4dfe367fb34c7d85221721d659db1b4e9732b14b0a" address="unix:///run/containerd/s/da02adf82eaecead74a7bf06fac1b32e277c4928028acfc6836c8ba6ec6d09bf" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:16.379077 systemd[1]: Started cri-containerd-76294742eaa69c7f777eed4dfe367fb34c7d85221721d659db1b4e9732b14b0a.scope - libcontainer container 76294742eaa69c7f777eed4dfe367fb34c7d85221721d659db1b4e9732b14b0a. 
Oct 13 05:44:16.722442 containerd[1633]: time="2025-10-13T05:44:16.722290380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-942sf,Uid:6a497d47-9e32-41a2-bf9e-d53eadd7da91,Namespace:kube-system,Attempt:0,} returns sandbox id \"76294742eaa69c7f777eed4dfe367fb34c7d85221721d659db1b4e9732b14b0a\"" Oct 13 05:44:16.723211 kubelet[2828]: E1013 05:44:16.723168 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:16.725961 containerd[1633]: time="2025-10-13T05:44:16.725878550Z" level=info msg="CreateContainer within sandbox \"76294742eaa69c7f777eed4dfe367fb34c7d85221721d659db1b4e9732b14b0a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:44:16.860831 containerd[1633]: time="2025-10-13T05:44:16.860204108Z" level=info msg="Container 0dc54b4cc9ca5fe59ec78b6ae642355ef0676891735e166c6f9998dff830fc9f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:16.875604 containerd[1633]: time="2025-10-13T05:44:16.875541959Z" level=info msg="CreateContainer within sandbox \"76294742eaa69c7f777eed4dfe367fb34c7d85221721d659db1b4e9732b14b0a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0dc54b4cc9ca5fe59ec78b6ae642355ef0676891735e166c6f9998dff830fc9f\"" Oct 13 05:44:16.877656 systemd[1]: Created slice kubepods-besteffort-pod352c0a25_c481_41bc_99bb_1ac737a9b0ff.slice - libcontainer container kubepods-besteffort-pod352c0a25_c481_41bc_99bb_1ac737a9b0ff.slice. 
Oct 13 05:44:16.879053 containerd[1633]: time="2025-10-13T05:44:16.878993282Z" level=info msg="StartContainer for \"0dc54b4cc9ca5fe59ec78b6ae642355ef0676891735e166c6f9998dff830fc9f\"" Oct 13 05:44:16.882056 containerd[1633]: time="2025-10-13T05:44:16.882020105Z" level=info msg="connecting to shim 0dc54b4cc9ca5fe59ec78b6ae642355ef0676891735e166c6f9998dff830fc9f" address="unix:///run/containerd/s/da02adf82eaecead74a7bf06fac1b32e277c4928028acfc6836c8ba6ec6d09bf" protocol=ttrpc version=3 Oct 13 05:44:16.911183 systemd[1]: Started cri-containerd-0dc54b4cc9ca5fe59ec78b6ae642355ef0676891735e166c6f9998dff830fc9f.scope - libcontainer container 0dc54b4cc9ca5fe59ec78b6ae642355ef0676891735e166c6f9998dff830fc9f. Oct 13 05:44:16.955089 containerd[1633]: time="2025-10-13T05:44:16.955046800Z" level=info msg="StartContainer for \"0dc54b4cc9ca5fe59ec78b6ae642355ef0676891735e166c6f9998dff830fc9f\" returns successfully" Oct 13 05:44:16.957601 kubelet[2828]: I1013 05:44:16.957570 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5cx\" (UniqueName: \"kubernetes.io/projected/352c0a25-c481-41bc-99bb-1ac737a9b0ff-kube-api-access-mv5cx\") pod \"tigera-operator-755d956888-r76dh\" (UID: \"352c0a25-c481-41bc-99bb-1ac737a9b0ff\") " pod="tigera-operator/tigera-operator-755d956888-r76dh" Oct 13 05:44:16.957735 kubelet[2828]: I1013 05:44:16.957707 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/352c0a25-c481-41bc-99bb-1ac737a9b0ff-var-lib-calico\") pod \"tigera-operator-755d956888-r76dh\" (UID: \"352c0a25-c481-41bc-99bb-1ac737a9b0ff\") " pod="tigera-operator/tigera-operator-755d956888-r76dh" Oct 13 05:44:17.185083 containerd[1633]: time="2025-10-13T05:44:17.185011394Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-755d956888-r76dh,Uid:352c0a25-c481-41bc-99bb-1ac737a9b0ff,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:44:17.206951 containerd[1633]: time="2025-10-13T05:44:17.206898465Z" level=info msg="connecting to shim 99522251caa7b677373848ab3d6248ee2d520978fd587bd8df2d0b098303361d" address="unix:///run/containerd/s/a003f52fa2e37e19653e614f79f8aa2aad1c1fd112b1c1f950fb69f377ea9365" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:17.233041 systemd[1]: Started cri-containerd-99522251caa7b677373848ab3d6248ee2d520978fd587bd8df2d0b098303361d.scope - libcontainer container 99522251caa7b677373848ab3d6248ee2d520978fd587bd8df2d0b098303361d. Oct 13 05:44:17.286807 containerd[1633]: time="2025-10-13T05:44:17.285565395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-r76dh,Uid:352c0a25-c481-41bc-99bb-1ac737a9b0ff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"99522251caa7b677373848ab3d6248ee2d520978fd587bd8df2d0b098303361d\"" Oct 13 05:44:17.287057 containerd[1633]: time="2025-10-13T05:44:17.287027841Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:44:17.370861 kubelet[2828]: E1013 05:44:17.370816 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:17.642008 kubelet[2828]: E1013 05:44:17.641973 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:17.643033 kubelet[2828]: E1013 05:44:17.642980 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:17.669716 kubelet[2828]: I1013 05:44:17.669647 2828 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-proxy-942sf" podStartSLOduration=2.669632612 podStartE2EDuration="2.669632612s" podCreationTimestamp="2025-10-13 05:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:44:17.669347876 +0000 UTC m=+9.135640137" watchObservedRunningTime="2025-10-13 05:44:17.669632612 +0000 UTC m=+9.135924873" Oct 13 05:44:18.557568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515291971.mount: Deactivated successfully. Oct 13 05:44:19.649790 containerd[1633]: time="2025-10-13T05:44:19.649722863Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:19.650979 containerd[1633]: time="2025-10-13T05:44:19.650945575Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:44:19.655473 containerd[1633]: time="2025-10-13T05:44:19.655416553Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:19.657595 containerd[1633]: time="2025-10-13T05:44:19.657548548Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:19.658107 containerd[1633]: time="2025-10-13T05:44:19.658071503Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.371011801s" Oct 13 05:44:19.658143 containerd[1633]: time="2025-10-13T05:44:19.658106779Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:44:19.659780 containerd[1633]: time="2025-10-13T05:44:19.659731989Z" level=info msg="CreateContainer within sandbox \"99522251caa7b677373848ab3d6248ee2d520978fd587bd8df2d0b098303361d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:44:19.668386 containerd[1633]: time="2025-10-13T05:44:19.668333674Z" level=info msg="Container e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:19.675207 containerd[1633]: time="2025-10-13T05:44:19.675167432Z" level=info msg="CreateContainer within sandbox \"99522251caa7b677373848ab3d6248ee2d520978fd587bd8df2d0b098303361d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3\"" Oct 13 05:44:19.675941 containerd[1633]: time="2025-10-13T05:44:19.675644279Z" level=info msg="StartContainer for \"e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3\"" Oct 13 05:44:19.676568 containerd[1633]: time="2025-10-13T05:44:19.676536068Z" level=info msg="connecting to shim e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3" address="unix:///run/containerd/s/a003f52fa2e37e19653e614f79f8aa2aad1c1fd112b1c1f950fb69f377ea9365" protocol=ttrpc version=3 Oct 13 05:44:19.737072 systemd[1]: Started cri-containerd-e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3.scope - libcontainer container e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3. 
Oct 13 05:44:19.766346 containerd[1633]: time="2025-10-13T05:44:19.766304478Z" level=info msg="StartContainer for \"e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3\" returns successfully" Oct 13 05:44:21.802902 systemd[1]: cri-containerd-e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3.scope: Deactivated successfully. Oct 13 05:44:21.807265 containerd[1633]: time="2025-10-13T05:44:21.807187527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3\" id:\"e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3\" pid:3157 exit_status:1 exited_at:{seconds:1760334261 nanos:806388764}" Oct 13 05:44:21.807265 containerd[1633]: time="2025-10-13T05:44:21.807253872Z" level=info msg="received exit event container_id:\"e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3\" id:\"e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3\" pid:3157 exit_status:1 exited_at:{seconds:1760334261 nanos:806388764}" Oct 13 05:44:21.841524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3-rootfs.mount: Deactivated successfully. 
Oct 13 05:44:22.206948 kubelet[2828]: E1013 05:44:22.206792 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:22.637645 kubelet[2828]: I1013 05:44:22.637482 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-r76dh" podStartSLOduration=4.265267787 podStartE2EDuration="6.637465352s" podCreationTimestamp="2025-10-13 05:44:16 +0000 UTC" firstStartedPulling="2025-10-13 05:44:17.28652867 +0000 UTC m=+8.752820931" lastFinishedPulling="2025-10-13 05:44:19.658726235 +0000 UTC m=+11.125018496" observedRunningTime="2025-10-13 05:44:20.655979288 +0000 UTC m=+12.122271539" watchObservedRunningTime="2025-10-13 05:44:22.637465352 +0000 UTC m=+14.103757613" Oct 13 05:44:22.653940 kubelet[2828]: E1013 05:44:22.653617 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:22.653940 kubelet[2828]: I1013 05:44:22.653667 2828 scope.go:117] "RemoveContainer" containerID="e072284352065c876d85f2738370b3154436c6b2c67d895c85bf99b9103e87b3" Oct 13 05:44:22.655585 containerd[1633]: time="2025-10-13T05:44:22.655538940Z" level=info msg="CreateContainer within sandbox \"99522251caa7b677373848ab3d6248ee2d520978fd587bd8df2d0b098303361d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 13 05:44:22.669239 containerd[1633]: time="2025-10-13T05:44:22.669189336Z" level=info msg="Container 585567b282fd200158ff94d24c6a6f934f8c3f5f53fc2b034652df9c45f67ae5: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:22.676363 containerd[1633]: time="2025-10-13T05:44:22.676332257Z" level=info msg="CreateContainer within sandbox \"99522251caa7b677373848ab3d6248ee2d520978fd587bd8df2d0b098303361d\" for 
&ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"585567b282fd200158ff94d24c6a6f934f8c3f5f53fc2b034652df9c45f67ae5\"" Oct 13 05:44:22.677072 containerd[1633]: time="2025-10-13T05:44:22.677045088Z" level=info msg="StartContainer for \"585567b282fd200158ff94d24c6a6f934f8c3f5f53fc2b034652df9c45f67ae5\"" Oct 13 05:44:22.678104 containerd[1633]: time="2025-10-13T05:44:22.678070777Z" level=info msg="connecting to shim 585567b282fd200158ff94d24c6a6f934f8c3f5f53fc2b034652df9c45f67ae5" address="unix:///run/containerd/s/a003f52fa2e37e19653e614f79f8aa2aad1c1fd112b1c1f950fb69f377ea9365" protocol=ttrpc version=3 Oct 13 05:44:22.704058 systemd[1]: Started cri-containerd-585567b282fd200158ff94d24c6a6f934f8c3f5f53fc2b034652df9c45f67ae5.scope - libcontainer container 585567b282fd200158ff94d24c6a6f934f8c3f5f53fc2b034652df9c45f67ae5. Oct 13 05:44:22.740958 containerd[1633]: time="2025-10-13T05:44:22.740900080Z" level=info msg="StartContainer for \"585567b282fd200158ff94d24c6a6f934f8c3f5f53fc2b034652df9c45f67ae5\" returns successfully" Oct 13 05:44:25.797698 sudo[1874]: pam_unix(sudo:session): session closed for user root Oct 13 05:44:25.799544 sshd[1873]: Connection closed by 10.0.0.1 port 57478 Oct 13 05:44:25.799953 sshd-session[1870]: pam_unix(sshd:session): session closed for user core Oct 13 05:44:25.804492 systemd[1]: sshd@8-10.0.0.150:22-10.0.0.1:57478.service: Deactivated successfully. Oct 13 05:44:25.806784 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:44:25.807044 systemd[1]: session-9.scope: Consumed 4.041s CPU time, 225.6M memory peak. Oct 13 05:44:25.808530 systemd-logind[1609]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:44:25.809789 systemd-logind[1609]: Removed session 9. Oct 13 05:44:28.384971 systemd[1]: Created slice kubepods-besteffort-pod2a6fdbbb_59ae_4e89_9fdd_79e78f402ce6.slice - libcontainer container kubepods-besteffort-pod2a6fdbbb_59ae_4e89_9fdd_79e78f402ce6.slice. 
Oct 13 05:44:28.436035 kubelet[2828]: I1013 05:44:28.435971 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a6fdbbb-59ae-4e89-9fdd-79e78f402ce6-tigera-ca-bundle\") pod \"calico-typha-5f46f49b8b-4v458\" (UID: \"2a6fdbbb-59ae-4e89-9fdd-79e78f402ce6\") " pod="calico-system/calico-typha-5f46f49b8b-4v458" Oct 13 05:44:28.436035 kubelet[2828]: I1013 05:44:28.436018 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2a6fdbbb-59ae-4e89-9fdd-79e78f402ce6-typha-certs\") pod \"calico-typha-5f46f49b8b-4v458\" (UID: \"2a6fdbbb-59ae-4e89-9fdd-79e78f402ce6\") " pod="calico-system/calico-typha-5f46f49b8b-4v458" Oct 13 05:44:28.436035 kubelet[2828]: I1013 05:44:28.436038 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsrxc\" (UniqueName: \"kubernetes.io/projected/2a6fdbbb-59ae-4e89-9fdd-79e78f402ce6-kube-api-access-bsrxc\") pod \"calico-typha-5f46f49b8b-4v458\" (UID: \"2a6fdbbb-59ae-4e89-9fdd-79e78f402ce6\") " pod="calico-system/calico-typha-5f46f49b8b-4v458" Oct 13 05:44:28.696597 kubelet[2828]: E1013 05:44:28.696305 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:28.696961 containerd[1633]: time="2025-10-13T05:44:28.696721238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f46f49b8b-4v458,Uid:2a6fdbbb-59ae-4e89-9fdd-79e78f402ce6,Namespace:calico-system,Attempt:0,}" Oct 13 05:44:28.733479 containerd[1633]: time="2025-10-13T05:44:28.733427139Z" level=info msg="connecting to shim 8a9272534ead3c5c1c13e6ec6bdaf8eec256bdca0f3dfaab9fd6bee627a0ec91" 
address="unix:///run/containerd/s/7962eaab99f7dd0eb7011469ef9019295b7b0e3f1021b7246ca64e9a61951397" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:28.768284 systemd[1]: Started cri-containerd-8a9272534ead3c5c1c13e6ec6bdaf8eec256bdca0f3dfaab9fd6bee627a0ec91.scope - libcontainer container 8a9272534ead3c5c1c13e6ec6bdaf8eec256bdca0f3dfaab9fd6bee627a0ec91. Oct 13 05:44:28.788119 systemd[1]: Created slice kubepods-besteffort-pod58b93a7d_d066_4dd9_99e5_5ef257a66896.slice - libcontainer container kubepods-besteffort-pod58b93a7d_d066_4dd9_99e5_5ef257a66896.slice. Oct 13 05:44:28.828321 containerd[1633]: time="2025-10-13T05:44:28.828274890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f46f49b8b-4v458,Uid:2a6fdbbb-59ae-4e89-9fdd-79e78f402ce6,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a9272534ead3c5c1c13e6ec6bdaf8eec256bdca0f3dfaab9fd6bee627a0ec91\"" Oct 13 05:44:28.829385 kubelet[2828]: E1013 05:44:28.828906 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:28.830308 containerd[1633]: time="2025-10-13T05:44:28.830285880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:44:28.838187 kubelet[2828]: I1013 05:44:28.838137 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/58b93a7d-d066-4dd9-99e5-5ef257a66896-cni-bin-dir\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838187 kubelet[2828]: I1013 05:44:28.838178 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58b93a7d-d066-4dd9-99e5-5ef257a66896-lib-modules\") pod \"calico-node-q42wc\" (UID: 
\"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838299 kubelet[2828]: I1013 05:44:28.838193 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/58b93a7d-d066-4dd9-99e5-5ef257a66896-var-run-calico\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838299 kubelet[2828]: I1013 05:44:28.838211 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/58b93a7d-d066-4dd9-99e5-5ef257a66896-flexvol-driver-host\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838299 kubelet[2828]: I1013 05:44:28.838233 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58b93a7d-d066-4dd9-99e5-5ef257a66896-tigera-ca-bundle\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838299 kubelet[2828]: I1013 05:44:28.838254 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58b93a7d-d066-4dd9-99e5-5ef257a66896-xtables-lock\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838299 kubelet[2828]: I1013 05:44:28.838272 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/58b93a7d-d066-4dd9-99e5-5ef257a66896-node-certs\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " 
pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838521 kubelet[2828]: I1013 05:44:28.838288 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/58b93a7d-d066-4dd9-99e5-5ef257a66896-policysync\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838521 kubelet[2828]: I1013 05:44:28.838311 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgzlf\" (UniqueName: \"kubernetes.io/projected/58b93a7d-d066-4dd9-99e5-5ef257a66896-kube-api-access-fgzlf\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838521 kubelet[2828]: I1013 05:44:28.838353 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/58b93a7d-d066-4dd9-99e5-5ef257a66896-cni-log-dir\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838521 kubelet[2828]: I1013 05:44:28.838369 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/58b93a7d-d066-4dd9-99e5-5ef257a66896-var-lib-calico\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.838521 kubelet[2828]: I1013 05:44:28.838387 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/58b93a7d-d066-4dd9-99e5-5ef257a66896-cni-net-dir\") pod \"calico-node-q42wc\" (UID: \"58b93a7d-d066-4dd9-99e5-5ef257a66896\") " pod="calico-system/calico-node-q42wc" Oct 13 05:44:28.947417 
kubelet[2828]: E1013 05:44:28.947287 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:28.947417 kubelet[2828]: W1013 05:44:28.947312 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:28.948069 kubelet[2828]: E1013 05:44:28.947418 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:28.949634 kubelet[2828]: E1013 05:44:28.949598 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:28.949634 kubelet[2828]: W1013 05:44:28.949618 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:28.949634 kubelet[2828]: E1013 05:44:28.949638 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.010876 kubelet[2828]: E1013 05:44:29.010788 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj54f" podUID="02edaea7-f337-4fcf-9037-ac41cfab2259" Oct 13 05:44:29.022467 kubelet[2828]: E1013 05:44:29.022425 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.022467 kubelet[2828]: W1013 05:44:29.022448 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.022467 kubelet[2828]: E1013 05:44:29.022469 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.022722 kubelet[2828]: E1013 05:44:29.022705 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.022722 kubelet[2828]: W1013 05:44:29.022716 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.022779 kubelet[2828]: E1013 05:44:29.022726 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.022979 kubelet[2828]: E1013 05:44:29.022955 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.022979 kubelet[2828]: W1013 05:44:29.022973 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.022979 kubelet[2828]: E1013 05:44:29.022982 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.023257 kubelet[2828]: E1013 05:44:29.023233 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.023257 kubelet[2828]: W1013 05:44:29.023248 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.023257 kubelet[2828]: E1013 05:44:29.023257 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.023495 kubelet[2828]: E1013 05:44:29.023471 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.023495 kubelet[2828]: W1013 05:44:29.023486 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.023495 kubelet[2828]: E1013 05:44:29.023495 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.023701 kubelet[2828]: E1013 05:44:29.023678 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.023701 kubelet[2828]: W1013 05:44:29.023693 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.023701 kubelet[2828]: E1013 05:44:29.023701 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.023894 kubelet[2828]: E1013 05:44:29.023870 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.023894 kubelet[2828]: W1013 05:44:29.023884 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.023894 kubelet[2828]: E1013 05:44:29.023891 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.024585 kubelet[2828]: E1013 05:44:29.024102 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.024585 kubelet[2828]: W1013 05:44:29.024114 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.024585 kubelet[2828]: E1013 05:44:29.024122 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.024585 kubelet[2828]: E1013 05:44:29.024406 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.024585 kubelet[2828]: W1013 05:44:29.024414 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.024585 kubelet[2828]: E1013 05:44:29.024423 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.024744 kubelet[2828]: E1013 05:44:29.024598 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.024744 kubelet[2828]: W1013 05:44:29.024606 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.024744 kubelet[2828]: E1013 05:44:29.024613 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.024817 kubelet[2828]: E1013 05:44:29.024768 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.024817 kubelet[2828]: W1013 05:44:29.024776 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.024817 kubelet[2828]: E1013 05:44:29.024783 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.024975 kubelet[2828]: E1013 05:44:29.024956 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.024975 kubelet[2828]: W1013 05:44:29.024966 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.024975 kubelet[2828]: E1013 05:44:29.024974 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.025166 kubelet[2828]: E1013 05:44:29.025148 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.025166 kubelet[2828]: W1013 05:44:29.025158 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.025166 kubelet[2828]: E1013 05:44:29.025166 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.025341 kubelet[2828]: E1013 05:44:29.025325 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.025341 kubelet[2828]: W1013 05:44:29.025335 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.025341 kubelet[2828]: E1013 05:44:29.025342 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.025539 kubelet[2828]: E1013 05:44:29.025525 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.025539 kubelet[2828]: W1013 05:44:29.025534 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.025581 kubelet[2828]: E1013 05:44:29.025542 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.025731 kubelet[2828]: E1013 05:44:29.025701 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.025731 kubelet[2828]: W1013 05:44:29.025717 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.025731 kubelet[2828]: E1013 05:44:29.025725 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.025905 kubelet[2828]: E1013 05:44:29.025890 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.025905 kubelet[2828]: W1013 05:44:29.025899 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.025977 kubelet[2828]: E1013 05:44:29.025909 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.026112 kubelet[2828]: E1013 05:44:29.026094 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.026112 kubelet[2828]: W1013 05:44:29.026104 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.026112 kubelet[2828]: E1013 05:44:29.026112 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.026282 kubelet[2828]: E1013 05:44:29.026267 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.026282 kubelet[2828]: W1013 05:44:29.026277 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.026282 kubelet[2828]: E1013 05:44:29.026284 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.026464 kubelet[2828]: E1013 05:44:29.026448 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.026464 kubelet[2828]: W1013 05:44:29.026458 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.026519 kubelet[2828]: E1013 05:44:29.026466 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.040017 kubelet[2828]: E1013 05:44:29.039977 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.040017 kubelet[2828]: W1013 05:44:29.040008 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.040121 kubelet[2828]: E1013 05:44:29.040036 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.040121 kubelet[2828]: I1013 05:44:29.040073 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/02edaea7-f337-4fcf-9037-ac41cfab2259-registration-dir\") pod \"csi-node-driver-vj54f\" (UID: \"02edaea7-f337-4fcf-9037-ac41cfab2259\") " pod="calico-system/csi-node-driver-vj54f" Oct 13 05:44:29.040349 kubelet[2828]: E1013 05:44:29.040329 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.040349 kubelet[2828]: W1013 05:44:29.040342 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.040444 kubelet[2828]: E1013 05:44:29.040358 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.040444 kubelet[2828]: I1013 05:44:29.040399 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/02edaea7-f337-4fcf-9037-ac41cfab2259-socket-dir\") pod \"csi-node-driver-vj54f\" (UID: \"02edaea7-f337-4fcf-9037-ac41cfab2259\") " pod="calico-system/csi-node-driver-vj54f" Oct 13 05:44:29.040668 kubelet[2828]: E1013 05:44:29.040632 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.040668 kubelet[2828]: W1013 05:44:29.040665 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.040759 kubelet[2828]: E1013 05:44:29.040693 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.040887 kubelet[2828]: E1013 05:44:29.040868 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.040887 kubelet[2828]: W1013 05:44:29.040879 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.041011 kubelet[2828]: E1013 05:44:29.040894 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.041379 kubelet[2828]: E1013 05:44:29.041169 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.041379 kubelet[2828]: W1013 05:44:29.041211 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.041379 kubelet[2828]: E1013 05:44:29.041227 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.041379 kubelet[2828]: I1013 05:44:29.041243 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swztf\" (UniqueName: \"kubernetes.io/projected/02edaea7-f337-4fcf-9037-ac41cfab2259-kube-api-access-swztf\") pod \"csi-node-driver-vj54f\" (UID: \"02edaea7-f337-4fcf-9037-ac41cfab2259\") " pod="calico-system/csi-node-driver-vj54f" Oct 13 05:44:29.041619 kubelet[2828]: E1013 05:44:29.041600 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.041619 kubelet[2828]: W1013 05:44:29.041613 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.041673 kubelet[2828]: E1013 05:44:29.041627 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.041673 kubelet[2828]: I1013 05:44:29.041643 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02edaea7-f337-4fcf-9037-ac41cfab2259-kubelet-dir\") pod \"csi-node-driver-vj54f\" (UID: \"02edaea7-f337-4fcf-9037-ac41cfab2259\") " pod="calico-system/csi-node-driver-vj54f" Oct 13 05:44:29.041851 kubelet[2828]: E1013 05:44:29.041832 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.041851 kubelet[2828]: W1013 05:44:29.041843 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.041914 kubelet[2828]: E1013 05:44:29.041873 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.041971 kubelet[2828]: I1013 05:44:29.041911 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/02edaea7-f337-4fcf-9037-ac41cfab2259-varrun\") pod \"csi-node-driver-vj54f\" (UID: \"02edaea7-f337-4fcf-9037-ac41cfab2259\") " pod="calico-system/csi-node-driver-vj54f" Oct 13 05:44:29.042138 kubelet[2828]: E1013 05:44:29.042104 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.042138 kubelet[2828]: W1013 05:44:29.042122 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.042214 kubelet[2828]: E1013 05:44:29.042163 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.042350 kubelet[2828]: E1013 05:44:29.042329 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.042350 kubelet[2828]: W1013 05:44:29.042346 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.042411 kubelet[2828]: E1013 05:44:29.042365 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.042632 kubelet[2828]: E1013 05:44:29.042607 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.042632 kubelet[2828]: W1013 05:44:29.042623 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.042716 kubelet[2828]: E1013 05:44:29.042641 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.042841 kubelet[2828]: E1013 05:44:29.042825 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.042841 kubelet[2828]: W1013 05:44:29.042835 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.042893 kubelet[2828]: E1013 05:44:29.042848 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.043078 kubelet[2828]: E1013 05:44:29.043055 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.043078 kubelet[2828]: W1013 05:44:29.043069 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.043078 kubelet[2828]: E1013 05:44:29.043079 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.043273 kubelet[2828]: E1013 05:44:29.043256 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.043273 kubelet[2828]: W1013 05:44:29.043267 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.043330 kubelet[2828]: E1013 05:44:29.043276 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.043477 kubelet[2828]: E1013 05:44:29.043459 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.043477 kubelet[2828]: W1013 05:44:29.043469 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.043477 kubelet[2828]: E1013 05:44:29.043477 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.043658 kubelet[2828]: E1013 05:44:29.043640 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.043658 kubelet[2828]: W1013 05:44:29.043650 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.043658 kubelet[2828]: E1013 05:44:29.043658 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.091657 containerd[1633]: time="2025-10-13T05:44:29.091613168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q42wc,Uid:58b93a7d-d066-4dd9-99e5-5ef257a66896,Namespace:calico-system,Attempt:0,}" Oct 13 05:44:29.119138 containerd[1633]: time="2025-10-13T05:44:29.119085104Z" level=info msg="connecting to shim eb2770f99ee0e0d88acab567b3e51487b6d7798ad5a3fd2b4c7cf72f54ae9483" address="unix:///run/containerd/s/455bcce514bd53b3d4cb135eab3643e32d0f2a7331cfec10a9e3f1ecaaf51b21" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:29.144454 kubelet[2828]: E1013 05:44:29.144409 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.144454 kubelet[2828]: W1013 05:44:29.144432 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.144454 kubelet[2828]: E1013 05:44:29.144452 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.144883 kubelet[2828]: E1013 05:44:29.144865 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.144883 kubelet[2828]: W1013 05:44:29.144877 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.144991 kubelet[2828]: E1013 05:44:29.144899 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.145305 kubelet[2828]: E1013 05:44:29.145278 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.145305 kubelet[2828]: W1013 05:44:29.145292 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.145356 kubelet[2828]: E1013 05:44:29.145322 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.145757 kubelet[2828]: E1013 05:44:29.145731 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.145757 kubelet[2828]: W1013 05:44:29.145745 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.145814 kubelet[2828]: E1013 05:44:29.145767 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.150272 kubelet[2828]: E1013 05:44:29.150039 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.150272 kubelet[2828]: W1013 05:44:29.150058 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.150272 kubelet[2828]: E1013 05:44:29.150107 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.151107 kubelet[2828]: E1013 05:44:29.151081 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.151107 kubelet[2828]: W1013 05:44:29.151098 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.151171 kubelet[2828]: E1013 05:44:29.151154 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.152991 kubelet[2828]: E1013 05:44:29.151850 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.152991 kubelet[2828]: W1013 05:44:29.152982 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.153192 kubelet[2828]: E1013 05:44:29.153041 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.154062 kubelet[2828]: E1013 05:44:29.154037 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.154062 kubelet[2828]: W1013 05:44:29.154055 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.154164 kubelet[2828]: E1013 05:44:29.154146 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.154276 kubelet[2828]: E1013 05:44:29.154255 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.154276 kubelet[2828]: W1013 05:44:29.154269 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.154340 kubelet[2828]: E1013 05:44:29.154321 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.154551 kubelet[2828]: E1013 05:44:29.154531 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.154551 kubelet[2828]: W1013 05:44:29.154546 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.154698 systemd[1]: Started cri-containerd-eb2770f99ee0e0d88acab567b3e51487b6d7798ad5a3fd2b4c7cf72f54ae9483.scope - libcontainer container eb2770f99ee0e0d88acab567b3e51487b6d7798ad5a3fd2b4c7cf72f54ae9483. Oct 13 05:44:29.155993 kubelet[2828]: E1013 05:44:29.155969 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.156851 kubelet[2828]: E1013 05:44:29.156776 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.156851 kubelet[2828]: W1013 05:44:29.156790 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.156851 kubelet[2828]: E1013 05:44:29.156848 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.157243 kubelet[2828]: E1013 05:44:29.157156 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.157243 kubelet[2828]: W1013 05:44:29.157169 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.157319 kubelet[2828]: E1013 05:44:29.157239 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.157945 kubelet[2828]: E1013 05:44:29.157380 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.157945 kubelet[2828]: W1013 05:44:29.157405 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.158045 kubelet[2828]: E1013 05:44:29.157968 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.158267 kubelet[2828]: E1013 05:44:29.158185 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.158267 kubelet[2828]: W1013 05:44:29.158204 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.158321 kubelet[2828]: E1013 05:44:29.158288 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.160025 kubelet[2828]: E1013 05:44:29.159990 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.160025 kubelet[2828]: W1013 05:44:29.160008 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.160105 kubelet[2828]: E1013 05:44:29.160063 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.160277 kubelet[2828]: E1013 05:44:29.160242 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.160277 kubelet[2828]: W1013 05:44:29.160256 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.160325 kubelet[2828]: E1013 05:44:29.160305 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.160529 kubelet[2828]: E1013 05:44:29.160482 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.160529 kubelet[2828]: W1013 05:44:29.160489 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.160909 kubelet[2828]: E1013 05:44:29.160538 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.160909 kubelet[2828]: E1013 05:44:29.160735 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.160909 kubelet[2828]: W1013 05:44:29.160743 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.160997 kubelet[2828]: E1013 05:44:29.160975 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.161050 kubelet[2828]: E1013 05:44:29.161044 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.161102 kubelet[2828]: W1013 05:44:29.161051 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.161156 kubelet[2828]: E1013 05:44:29.161133 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.162135 kubelet[2828]: E1013 05:44:29.162115 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.162135 kubelet[2828]: W1013 05:44:29.162129 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.162197 kubelet[2828]: E1013 05:44:29.162151 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.162444 kubelet[2828]: E1013 05:44:29.162423 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.162444 kubelet[2828]: W1013 05:44:29.162438 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.162520 kubelet[2828]: E1013 05:44:29.162492 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.163995 kubelet[2828]: E1013 05:44:29.163966 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.163995 kubelet[2828]: W1013 05:44:29.163983 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.166004 kubelet[2828]: E1013 05:44:29.164060 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.166004 kubelet[2828]: E1013 05:44:29.164203 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.166004 kubelet[2828]: W1013 05:44:29.164210 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.166004 kubelet[2828]: E1013 05:44:29.164290 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.166004 kubelet[2828]: E1013 05:44:29.164433 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.166004 kubelet[2828]: W1013 05:44:29.164440 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.166004 kubelet[2828]: E1013 05:44:29.164519 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:29.166170 kubelet[2828]: E1013 05:44:29.166087 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.166170 kubelet[2828]: W1013 05:44:29.166097 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.166170 kubelet[2828]: E1013 05:44:29.166105 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.178031 kubelet[2828]: E1013 05:44:29.177983 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:29.178031 kubelet[2828]: W1013 05:44:29.178017 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:29.178031 kubelet[2828]: E1013 05:44:29.178045 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:29.224066 containerd[1633]: time="2025-10-13T05:44:29.222660998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q42wc,Uid:58b93a7d-d066-4dd9-99e5-5ef257a66896,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb2770f99ee0e0d88acab567b3e51487b6d7798ad5a3fd2b4c7cf72f54ae9483\"" Oct 13 05:44:30.170054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675902135.mount: Deactivated successfully. 
Oct 13 05:44:30.617605 kubelet[2828]: E1013 05:44:30.617409 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj54f" podUID="02edaea7-f337-4fcf-9037-ac41cfab2259" Oct 13 05:44:31.928127 containerd[1633]: time="2025-10-13T05:44:31.928067422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:31.928901 containerd[1633]: time="2025-10-13T05:44:31.928876522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:44:31.930150 containerd[1633]: time="2025-10-13T05:44:31.930118486Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:31.932382 containerd[1633]: time="2025-10-13T05:44:31.932337055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:31.932810 containerd[1633]: time="2025-10-13T05:44:31.932766372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.102455274s" Oct 13 05:44:31.932810 containerd[1633]: time="2025-10-13T05:44:31.932806667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:44:31.933820 containerd[1633]: time="2025-10-13T05:44:31.933766972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:44:31.948627 containerd[1633]: time="2025-10-13T05:44:31.948592863Z" level=info msg="CreateContainer within sandbox \"8a9272534ead3c5c1c13e6ec6bdaf8eec256bdca0f3dfaab9fd6bee627a0ec91\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:44:31.956756 containerd[1633]: time="2025-10-13T05:44:31.956710648Z" level=info msg="Container 9b3e0d05904fd583534d7f46310a2f32291106ce3518b7046dead4996e54c06d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:31.963150 containerd[1633]: time="2025-10-13T05:44:31.963102549Z" level=info msg="CreateContainer within sandbox \"8a9272534ead3c5c1c13e6ec6bdaf8eec256bdca0f3dfaab9fd6bee627a0ec91\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9b3e0d05904fd583534d7f46310a2f32291106ce3518b7046dead4996e54c06d\"" Oct 13 05:44:31.963591 containerd[1633]: time="2025-10-13T05:44:31.963565881Z" level=info msg="StartContainer for \"9b3e0d05904fd583534d7f46310a2f32291106ce3518b7046dead4996e54c06d\"" Oct 13 05:44:31.964576 containerd[1633]: time="2025-10-13T05:44:31.964532637Z" level=info msg="connecting to shim 9b3e0d05904fd583534d7f46310a2f32291106ce3518b7046dead4996e54c06d" address="unix:///run/containerd/s/7962eaab99f7dd0eb7011469ef9019295b7b0e3f1021b7246ca64e9a61951397" protocol=ttrpc version=3 Oct 13 05:44:31.992073 systemd[1]: Started cri-containerd-9b3e0d05904fd583534d7f46310a2f32291106ce3518b7046dead4996e54c06d.scope - libcontainer container 9b3e0d05904fd583534d7f46310a2f32291106ce3518b7046dead4996e54c06d. 
Oct 13 05:44:32.048716 containerd[1633]: time="2025-10-13T05:44:32.048675227Z" level=info msg="StartContainer for \"9b3e0d05904fd583534d7f46310a2f32291106ce3518b7046dead4996e54c06d\" returns successfully" Oct 13 05:44:32.617367 kubelet[2828]: E1013 05:44:32.617283 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj54f" podUID="02edaea7-f337-4fcf-9037-ac41cfab2259" Oct 13 05:44:32.676704 kubelet[2828]: E1013 05:44:32.676666 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:32.688334 kubelet[2828]: I1013 05:44:32.688252 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f46f49b8b-4v458" podStartSLOduration=1.584373171 podStartE2EDuration="4.68823602s" podCreationTimestamp="2025-10-13 05:44:28 +0000 UTC" firstStartedPulling="2025-10-13 05:44:28.829802642 +0000 UTC m=+20.296094903" lastFinishedPulling="2025-10-13 05:44:31.933665501 +0000 UTC m=+23.399957752" observedRunningTime="2025-10-13 05:44:32.686787709 +0000 UTC m=+24.153079970" watchObservedRunningTime="2025-10-13 05:44:32.68823602 +0000 UTC m=+24.154528281" Oct 13 05:44:32.752257 kubelet[2828]: E1013 05:44:32.752206 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:32.752257 kubelet[2828]: W1013 05:44:32.752235 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:32.752257 kubelet[2828]: E1013 05:44:32.752258 2828 plugins.go:695] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:32.752511 kubelet[2828]: E1013 05:44:32.752494 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:32.752511 kubelet[2828]: W1013 05:44:32.752505 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:32.752576 kubelet[2828]: E1013 05:44:32.752513 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:32.752752 kubelet[2828]: E1013 05:44:32.752726 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:32.752752 kubelet[2828]: W1013 05:44:32.752739 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:32.752752 kubelet[2828]: E1013 05:44:32.752749 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:32.753030 kubelet[2828]: E1013 05:44:32.753010 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:32.753030 kubelet[2828]: W1013 05:44:32.753022 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:32.753030 kubelet[2828]: E1013 05:44:32.753030 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:32.753232 kubelet[2828]: E1013 05:44:32.753201 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:32.753232 kubelet[2828]: W1013 05:44:32.753216 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:32.753232 kubelet[2828]: E1013 05:44:32.753226 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:32.753422 kubelet[2828]: E1013 05:44:32.753404 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:32.753422 kubelet[2828]: W1013 05:44:32.753414 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:32.753422 kubelet[2828]: E1013 05:44:32.753422 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:32.753641 kubelet[2828]: E1013 05:44:32.753624 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:32.753641 kubelet[2828]: W1013 05:44:32.753637 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:32.753695 kubelet[2828]: E1013 05:44:32.753645 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:32.753893 kubelet[2828]: E1013 05:44:32.753862 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:32.753946 kubelet[2828]: W1013 05:44:32.753895 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:32.753946 kubelet[2828]: E1013 05:44:32.753937 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:44:32.754231 kubelet[2828]: E1013 05:44:32.754213 2828 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:44:32.754231 kubelet[2828]: W1013 05:44:32.754227 2828 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:44:32.754299 kubelet[2828]: E1013 05:44:32.754238 2828 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:44:33.232407 containerd[1633]: time="2025-10-13T05:44:33.232342974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:33.233158 containerd[1633]: time="2025-10-13T05:44:33.233108552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:44:33.234190 containerd[1633]: time="2025-10-13T05:44:33.234144549Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:33.235982 containerd[1633]: time="2025-10-13T05:44:33.235953367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:33.236498 containerd[1633]: time="2025-10-13T05:44:33.236462053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.302665746s" Oct 13 05:44:33.236552 containerd[1633]: time="2025-10-13T05:44:33.236501687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:44:33.238685 containerd[1633]: time="2025-10-13T05:44:33.238644744Z" level=info msg="CreateContainer within sandbox \"eb2770f99ee0e0d88acab567b3e51487b6d7798ad5a3fd2b4c7cf72f54ae9483\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:44:33.247110 containerd[1633]: time="2025-10-13T05:44:33.247059854Z" level=info msg="Container 4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:33.254672 containerd[1633]: time="2025-10-13T05:44:33.254629937Z" level=info msg="CreateContainer within sandbox \"eb2770f99ee0e0d88acab567b3e51487b6d7798ad5a3fd2b4c7cf72f54ae9483\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5\"" Oct 13 05:44:33.255124 containerd[1633]: time="2025-10-13T05:44:33.255003449Z" level=info msg="StartContainer for \"4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5\"" Oct 13 05:44:33.256356 containerd[1633]: time="2025-10-13T05:44:33.256317728Z" level=info msg="connecting to shim 4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5" address="unix:///run/containerd/s/455bcce514bd53b3d4cb135eab3643e32d0f2a7331cfec10a9e3f1ecaaf51b21" protocol=ttrpc version=3 Oct 13 05:44:33.281069 systemd[1]: Started cri-containerd-4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5.scope - libcontainer container 4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5. Oct 13 05:44:33.325400 containerd[1633]: time="2025-10-13T05:44:33.325356980Z" level=info msg="StartContainer for \"4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5\" returns successfully" Oct 13 05:44:33.335548 systemd[1]: cri-containerd-4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5.scope: Deactivated successfully. 
Oct 13 05:44:33.338715 containerd[1633]: time="2025-10-13T05:44:33.338617985Z" level=info msg="received exit event container_id:\"4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5\" id:\"4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5\" pid:3557 exited_at:{seconds:1760334273 nanos:338328972}" Oct 13 05:44:33.338715 containerd[1633]: time="2025-10-13T05:44:33.338672387Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5\" id:\"4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5\" pid:3557 exited_at:{seconds:1760334273 nanos:338328972}" Oct 13 05:44:33.360501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c7e0b96c52dfd707544b5f80afb123bd33c773c80a9a885a794d70dc075ece5-rootfs.mount: Deactivated successfully. Oct 13 05:44:33.679609 kubelet[2828]: E1013 05:44:33.679572 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:34.617991 kubelet[2828]: E1013 05:44:34.617895 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj54f" podUID="02edaea7-f337-4fcf-9037-ac41cfab2259" Oct 13 05:44:34.683112 kubelet[2828]: E1013 05:44:34.682916 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:34.683627 containerd[1633]: time="2025-10-13T05:44:34.683525451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:44:36.617951 kubelet[2828]: E1013 05:44:36.617690 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj54f" podUID="02edaea7-f337-4fcf-9037-ac41cfab2259" Oct 13 05:44:37.480249 containerd[1633]: time="2025-10-13T05:44:37.480177274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:37.480881 containerd[1633]: time="2025-10-13T05:44:37.480858082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:44:37.482009 containerd[1633]: time="2025-10-13T05:44:37.481967226Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:37.483881 containerd[1633]: time="2025-10-13T05:44:37.483848680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:37.484483 containerd[1633]: time="2025-10-13T05:44:37.484438197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.800873312s" Oct 13 05:44:37.484483 containerd[1633]: time="2025-10-13T05:44:37.484476669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:44:37.486739 containerd[1633]: time="2025-10-13T05:44:37.486694234Z" level=info msg="CreateContainer within sandbox 
\"eb2770f99ee0e0d88acab567b3e51487b6d7798ad5a3fd2b4c7cf72f54ae9483\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:44:37.495003 containerd[1633]: time="2025-10-13T05:44:37.494955790Z" level=info msg="Container e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:37.503583 containerd[1633]: time="2025-10-13T05:44:37.503535675Z" level=info msg="CreateContainer within sandbox \"eb2770f99ee0e0d88acab567b3e51487b6d7798ad5a3fd2b4c7cf72f54ae9483\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485\"" Oct 13 05:44:37.504088 containerd[1633]: time="2025-10-13T05:44:37.504054851Z" level=info msg="StartContainer for \"e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485\"" Oct 13 05:44:37.505399 containerd[1633]: time="2025-10-13T05:44:37.505375270Z" level=info msg="connecting to shim e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485" address="unix:///run/containerd/s/455bcce514bd53b3d4cb135eab3643e32d0f2a7331cfec10a9e3f1ecaaf51b21" protocol=ttrpc version=3 Oct 13 05:44:37.536106 systemd[1]: Started cri-containerd-e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485.scope - libcontainer container e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485. 
Oct 13 05:44:37.580462 containerd[1633]: time="2025-10-13T05:44:37.580412375Z" level=info msg="StartContainer for \"e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485\" returns successfully" Oct 13 05:44:38.617904 kubelet[2828]: E1013 05:44:38.617835 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj54f" podUID="02edaea7-f337-4fcf-9037-ac41cfab2259" Oct 13 05:44:38.953279 systemd[1]: cri-containerd-e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485.scope: Deactivated successfully. Oct 13 05:44:38.953641 systemd[1]: cri-containerd-e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485.scope: Consumed 647ms CPU time, 178.4M memory peak, 6.5M read from disk, 171.3M written to disk. Oct 13 05:44:38.955140 containerd[1633]: time="2025-10-13T05:44:38.954994896Z" level=info msg="received exit event container_id:\"e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485\" id:\"e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485\" pid:3617 exited_at:{seconds:1760334278 nanos:954752400}" Oct 13 05:44:38.955140 containerd[1633]: time="2025-10-13T05:44:38.955104111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485\" id:\"e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485\" pid:3617 exited_at:{seconds:1760334278 nanos:954752400}" Oct 13 05:44:38.977972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8e95e6159a4266a5ebca7aa0dd07926b2f58ac3e066ff5af9ba495e7065f485-rootfs.mount: Deactivated successfully. 
Oct 13 05:44:39.011462 kubelet[2828]: I1013 05:44:39.011421 2828 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 05:44:39.993684 systemd[1]: Created slice kubepods-burstable-pod72f7196b_78fc_4617_9b3e_c6a93eee680d.slice - libcontainer container kubepods-burstable-pod72f7196b_78fc_4617_9b3e_c6a93eee680d.slice. Oct 13 05:44:40.003827 systemd[1]: Created slice kubepods-besteffort-podd8d8e227_5e2a_4e22_a4b4_3c2d6d43d4a3.slice - libcontainer container kubepods-besteffort-podd8d8e227_5e2a_4e22_a4b4_3c2d6d43d4a3.slice. Oct 13 05:44:40.012026 systemd[1]: Created slice kubepods-besteffort-podd24849a4_8350_497f_a0b6_cfff5b84fbf1.slice - libcontainer container kubepods-besteffort-podd24849a4_8350_497f_a0b6_cfff5b84fbf1.slice. Oct 13 05:44:40.018580 systemd[1]: Created slice kubepods-besteffort-pod602336c3_e32d_4a73_9d7d_d8429c2ec34b.slice - libcontainer container kubepods-besteffort-pod602336c3_e32d_4a73_9d7d_d8429c2ec34b.slice. Oct 13 05:44:40.023846 kubelet[2828]: I1013 05:44:40.023808 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf14e942-f41d-4f00-b058-07c5501bf435-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-blgvw\" (UID: \"cf14e942-f41d-4f00-b058-07c5501bf435\") " pod="calico-system/goldmane-54d579b49d-blgvw" Oct 13 05:44:40.023846 kubelet[2828]: I1013 05:44:40.023846 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882l4\" (UniqueName: \"kubernetes.io/projected/92e7d7ad-68d1-42ab-914c-51196ff43384-kube-api-access-882l4\") pod \"calico-apiserver-5f56546f6c-jppm5\" (UID: \"92e7d7ad-68d1-42ab-914c-51196ff43384\") " pod="calico-apiserver/calico-apiserver-5f56546f6c-jppm5" Oct 13 05:44:40.024276 kubelet[2828]: I1013 05:44:40.023863 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tnfqm\" (UniqueName: \"kubernetes.io/projected/3f1b517f-a7d8-41ac-a557-d8b6e064d2dd-kube-api-access-tnfqm\") pod \"calico-apiserver-5f56546f6c-x5lcg\" (UID: \"3f1b517f-a7d8-41ac-a557-d8b6e064d2dd\") " pod="calico-apiserver/calico-apiserver-5f56546f6c-x5lcg" Oct 13 05:44:40.024276 kubelet[2828]: I1013 05:44:40.023878 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76606c85-282e-4768-97b9-db0bcdb8b7da-config-volume\") pod \"coredns-668d6bf9bc-prlrf\" (UID: \"76606c85-282e-4768-97b9-db0bcdb8b7da\") " pod="kube-system/coredns-668d6bf9bc-prlrf" Oct 13 05:44:40.024276 kubelet[2828]: I1013 05:44:40.023909 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/92e7d7ad-68d1-42ab-914c-51196ff43384-calico-apiserver-certs\") pod \"calico-apiserver-5f56546f6c-jppm5\" (UID: \"92e7d7ad-68d1-42ab-914c-51196ff43384\") " pod="calico-apiserver/calico-apiserver-5f56546f6c-jppm5" Oct 13 05:44:40.024276 kubelet[2828]: I1013 05:44:40.023938 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72f7196b-78fc-4617-9b3e-c6a93eee680d-config-volume\") pod \"coredns-668d6bf9bc-v8qr9\" (UID: \"72f7196b-78fc-4617-9b3e-c6a93eee680d\") " pod="kube-system/coredns-668d6bf9bc-v8qr9" Oct 13 05:44:40.024276 kubelet[2828]: I1013 05:44:40.023957 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-whisker-ca-bundle\") pod \"whisker-6b6cc98bdf-tmq2w\" (UID: \"d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3\") " pod="calico-system/whisker-6b6cc98bdf-tmq2w" Oct 13 05:44:40.024398 kubelet[2828]: I1013 05:44:40.023974 2828 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3f1b517f-a7d8-41ac-a557-d8b6e064d2dd-calico-apiserver-certs\") pod \"calico-apiserver-5f56546f6c-x5lcg\" (UID: \"3f1b517f-a7d8-41ac-a557-d8b6e064d2dd\") " pod="calico-apiserver/calico-apiserver-5f56546f6c-x5lcg" Oct 13 05:44:40.024398 kubelet[2828]: I1013 05:44:40.023987 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-whisker-backend-key-pair\") pod \"whisker-6b6cc98bdf-tmq2w\" (UID: \"d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3\") " pod="calico-system/whisker-6b6cc98bdf-tmq2w" Oct 13 05:44:40.024398 kubelet[2828]: I1013 05:44:40.024004 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cf14e942-f41d-4f00-b058-07c5501bf435-goldmane-key-pair\") pod \"goldmane-54d579b49d-blgvw\" (UID: \"cf14e942-f41d-4f00-b058-07c5501bf435\") " pod="calico-system/goldmane-54d579b49d-blgvw" Oct 13 05:44:40.024398 kubelet[2828]: I1013 05:44:40.024021 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d6pg\" (UniqueName: \"kubernetes.io/projected/76606c85-282e-4768-97b9-db0bcdb8b7da-kube-api-access-5d6pg\") pod \"coredns-668d6bf9bc-prlrf\" (UID: \"76606c85-282e-4768-97b9-db0bcdb8b7da\") " pod="kube-system/coredns-668d6bf9bc-prlrf" Oct 13 05:44:40.024398 kubelet[2828]: I1013 05:44:40.024035 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf14e942-f41d-4f00-b058-07c5501bf435-config\") pod \"goldmane-54d579b49d-blgvw\" (UID: \"cf14e942-f41d-4f00-b058-07c5501bf435\") " pod="calico-system/goldmane-54d579b49d-blgvw" Oct 
13 05:44:40.024526 kubelet[2828]: I1013 05:44:40.024050 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx2cj\" (UniqueName: \"kubernetes.io/projected/d24849a4-8350-497f-a0b6-cfff5b84fbf1-kube-api-access-xx2cj\") pod \"calico-kube-controllers-6dcd7d6f54-swntb\" (UID: \"d24849a4-8350-497f-a0b6-cfff5b84fbf1\") " pod="calico-system/calico-kube-controllers-6dcd7d6f54-swntb" Oct 13 05:44:40.024526 kubelet[2828]: I1013 05:44:40.024067 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/602336c3-e32d-4a73-9d7d-d8429c2ec34b-calico-apiserver-certs\") pod \"calico-apiserver-c6448884-mgzvk\" (UID: \"602336c3-e32d-4a73-9d7d-d8429c2ec34b\") " pod="calico-apiserver/calico-apiserver-c6448884-mgzvk" Oct 13 05:44:40.024526 kubelet[2828]: I1013 05:44:40.024084 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6wlv\" (UniqueName: \"kubernetes.io/projected/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-kube-api-access-c6wlv\") pod \"whisker-6b6cc98bdf-tmq2w\" (UID: \"d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3\") " pod="calico-system/whisker-6b6cc98bdf-tmq2w" Oct 13 05:44:40.024696 systemd[1]: Created slice kubepods-besteffort-podcf14e942_f41d_4f00_b058_07c5501bf435.slice - libcontainer container kubepods-besteffort-podcf14e942_f41d_4f00_b058_07c5501bf435.slice. 
Oct 13 05:44:40.025305 kubelet[2828]: I1013 05:44:40.025244 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvfqw\" (UniqueName: \"kubernetes.io/projected/72f7196b-78fc-4617-9b3e-c6a93eee680d-kube-api-access-fvfqw\") pod \"coredns-668d6bf9bc-v8qr9\" (UID: \"72f7196b-78fc-4617-9b3e-c6a93eee680d\") " pod="kube-system/coredns-668d6bf9bc-v8qr9" Oct 13 05:44:40.025305 kubelet[2828]: I1013 05:44:40.025275 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d24849a4-8350-497f-a0b6-cfff5b84fbf1-tigera-ca-bundle\") pod \"calico-kube-controllers-6dcd7d6f54-swntb\" (UID: \"d24849a4-8350-497f-a0b6-cfff5b84fbf1\") " pod="calico-system/calico-kube-controllers-6dcd7d6f54-swntb" Oct 13 05:44:40.025305 kubelet[2828]: I1013 05:44:40.025294 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssl7m\" (UniqueName: \"kubernetes.io/projected/602336c3-e32d-4a73-9d7d-d8429c2ec34b-kube-api-access-ssl7m\") pod \"calico-apiserver-c6448884-mgzvk\" (UID: \"602336c3-e32d-4a73-9d7d-d8429c2ec34b\") " pod="calico-apiserver/calico-apiserver-c6448884-mgzvk" Oct 13 05:44:40.025305 kubelet[2828]: I1013 05:44:40.025312 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vszzb\" (UniqueName: \"kubernetes.io/projected/cf14e942-f41d-4f00-b058-07c5501bf435-kube-api-access-vszzb\") pod \"goldmane-54d579b49d-blgvw\" (UID: \"cf14e942-f41d-4f00-b058-07c5501bf435\") " pod="calico-system/goldmane-54d579b49d-blgvw" Oct 13 05:44:40.032288 systemd[1]: Created slice kubepods-besteffort-pod3f1b517f_a7d8_41ac_a557_d8b6e064d2dd.slice - libcontainer container kubepods-besteffort-pod3f1b517f_a7d8_41ac_a557_d8b6e064d2dd.slice. 
Oct 13 05:44:40.040157 systemd[1]: Created slice kubepods-besteffort-pod92e7d7ad_68d1_42ab_914c_51196ff43384.slice - libcontainer container kubepods-besteffort-pod92e7d7ad_68d1_42ab_914c_51196ff43384.slice. Oct 13 05:44:40.045661 systemd[1]: Created slice kubepods-burstable-pod76606c85_282e_4768_97b9_db0bcdb8b7da.slice - libcontainer container kubepods-burstable-pod76606c85_282e_4768_97b9_db0bcdb8b7da.slice. Oct 13 05:44:40.299811 kubelet[2828]: E1013 05:44:40.299781 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:40.300501 containerd[1633]: time="2025-10-13T05:44:40.300239778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v8qr9,Uid:72f7196b-78fc-4617-9b3e-c6a93eee680d,Namespace:kube-system,Attempt:0,}" Oct 13 05:44:40.308199 containerd[1633]: time="2025-10-13T05:44:40.307737748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b6cc98bdf-tmq2w,Uid:d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3,Namespace:calico-system,Attempt:0,}" Oct 13 05:44:40.317166 containerd[1633]: time="2025-10-13T05:44:40.317102723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dcd7d6f54-swntb,Uid:d24849a4-8350-497f-a0b6-cfff5b84fbf1,Namespace:calico-system,Attempt:0,}" Oct 13 05:44:40.325803 containerd[1633]: time="2025-10-13T05:44:40.325761843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6448884-mgzvk,Uid:602336c3-e32d-4a73-9d7d-d8429c2ec34b,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:44:40.342035 containerd[1633]: time="2025-10-13T05:44:40.341994696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f56546f6c-x5lcg,Uid:3f1b517f-a7d8-41ac-a557-d8b6e064d2dd,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:44:40.342329 containerd[1633]: time="2025-10-13T05:44:40.342175766Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:goldmane-54d579b49d-blgvw,Uid:cf14e942-f41d-4f00-b058-07c5501bf435,Namespace:calico-system,Attempt:0,}" Oct 13 05:44:40.343854 containerd[1633]: time="2025-10-13T05:44:40.343823921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f56546f6c-jppm5,Uid:92e7d7ad-68d1-42ab-914c-51196ff43384,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:44:40.348425 kubelet[2828]: E1013 05:44:40.348311 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:40.350337 containerd[1633]: time="2025-10-13T05:44:40.350311543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-prlrf,Uid:76606c85-282e-4768-97b9-db0bcdb8b7da,Namespace:kube-system,Attempt:0,}" Oct 13 05:44:40.484876 containerd[1633]: time="2025-10-13T05:44:40.484736710Z" level=error msg="Failed to destroy network for sandbox \"5d2858c137eee43f0ba4c83d97f0ec5fc0d8236ef356ebcf145b4485b7e7e35e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.486440 containerd[1633]: time="2025-10-13T05:44:40.486372220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b6cc98bdf-tmq2w,Uid:d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d2858c137eee43f0ba4c83d97f0ec5fc0d8236ef356ebcf145b4485b7e7e35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.487969 containerd[1633]: time="2025-10-13T05:44:40.487824788Z" level=error msg="Failed to destroy network for sandbox 
\"b34f4ae3693e8a974ab83455a268a946a90f6bc2da3797bd3563f62cd504eb5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.494302 kubelet[2828]: E1013 05:44:40.494233 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d2858c137eee43f0ba4c83d97f0ec5fc0d8236ef356ebcf145b4485b7e7e35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.494446 kubelet[2828]: E1013 05:44:40.494328 2828 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d2858c137eee43f0ba4c83d97f0ec5fc0d8236ef356ebcf145b4485b7e7e35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b6cc98bdf-tmq2w" Oct 13 05:44:40.494446 kubelet[2828]: E1013 05:44:40.494350 2828 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d2858c137eee43f0ba4c83d97f0ec5fc0d8236ef356ebcf145b4485b7e7e35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b6cc98bdf-tmq2w" Oct 13 05:44:40.494446 kubelet[2828]: E1013 05:44:40.494403 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b6cc98bdf-tmq2w_calico-system(d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-6b6cc98bdf-tmq2w_calico-system(d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d2858c137eee43f0ba4c83d97f0ec5fc0d8236ef356ebcf145b4485b7e7e35e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b6cc98bdf-tmq2w" podUID="d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3" Oct 13 05:44:40.495997 containerd[1633]: time="2025-10-13T05:44:40.495865728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v8qr9,Uid:72f7196b-78fc-4617-9b3e-c6a93eee680d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34f4ae3693e8a974ab83455a268a946a90f6bc2da3797bd3563f62cd504eb5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.496357 kubelet[2828]: E1013 05:44:40.496158 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34f4ae3693e8a974ab83455a268a946a90f6bc2da3797bd3563f62cd504eb5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.496357 kubelet[2828]: E1013 05:44:40.496242 2828 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34f4ae3693e8a974ab83455a268a946a90f6bc2da3797bd3563f62cd504eb5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-v8qr9" Oct 13 05:44:40.496357 kubelet[2828]: E1013 05:44:40.496262 2828 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b34f4ae3693e8a974ab83455a268a946a90f6bc2da3797bd3563f62cd504eb5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v8qr9" Oct 13 05:44:40.496446 kubelet[2828]: E1013 05:44:40.496292 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v8qr9_kube-system(72f7196b-78fc-4617-9b3e-c6a93eee680d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v8qr9_kube-system(72f7196b-78fc-4617-9b3e-c6a93eee680d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b34f4ae3693e8a974ab83455a268a946a90f6bc2da3797bd3563f62cd504eb5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v8qr9" podUID="72f7196b-78fc-4617-9b3e-c6a93eee680d" Oct 13 05:44:40.509954 containerd[1633]: time="2025-10-13T05:44:40.509706708Z" level=error msg="Failed to destroy network for sandbox \"e6e728320fc1f0423364656588234a94a5bfec3a29a05af37b4285e6b374da5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.511033 containerd[1633]: time="2025-10-13T05:44:40.510986763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f56546f6c-x5lcg,Uid:3f1b517f-a7d8-41ac-a557-d8b6e064d2dd,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e728320fc1f0423364656588234a94a5bfec3a29a05af37b4285e6b374da5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.511195 kubelet[2828]: E1013 05:44:40.511153 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e728320fc1f0423364656588234a94a5bfec3a29a05af37b4285e6b374da5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.511246 kubelet[2828]: E1013 05:44:40.511200 2828 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e728320fc1f0423364656588234a94a5bfec3a29a05af37b4285e6b374da5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f56546f6c-x5lcg" Oct 13 05:44:40.511246 kubelet[2828]: E1013 05:44:40.511217 2828 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e728320fc1f0423364656588234a94a5bfec3a29a05af37b4285e6b374da5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f56546f6c-x5lcg" Oct 13 05:44:40.511308 kubelet[2828]: E1013 05:44:40.511244 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5f56546f6c-x5lcg_calico-apiserver(3f1b517f-a7d8-41ac-a557-d8b6e064d2dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f56546f6c-x5lcg_calico-apiserver(3f1b517f-a7d8-41ac-a557-d8b6e064d2dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6e728320fc1f0423364656588234a94a5bfec3a29a05af37b4285e6b374da5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f56546f6c-x5lcg" podUID="3f1b517f-a7d8-41ac-a557-d8b6e064d2dd" Oct 13 05:44:40.521398 containerd[1633]: time="2025-10-13T05:44:40.521348560Z" level=error msg="Failed to destroy network for sandbox \"d056d864443f3d462717819a5db0037788f02560836632b1654a0a7c4d3e9d6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.524377 containerd[1633]: time="2025-10-13T05:44:40.524335509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6448884-mgzvk,Uid:602336c3-e32d-4a73-9d7d-d8429c2ec34b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d056d864443f3d462717819a5db0037788f02560836632b1654a0a7c4d3e9d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.524719 kubelet[2828]: E1013 05:44:40.524658 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d056d864443f3d462717819a5db0037788f02560836632b1654a0a7c4d3e9d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.524854 kubelet[2828]: E1013 05:44:40.524739 2828 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d056d864443f3d462717819a5db0037788f02560836632b1654a0a7c4d3e9d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6448884-mgzvk" Oct 13 05:44:40.524854 kubelet[2828]: E1013 05:44:40.524757 2828 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d056d864443f3d462717819a5db0037788f02560836632b1654a0a7c4d3e9d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6448884-mgzvk" Oct 13 05:44:40.524854 kubelet[2828]: E1013 05:44:40.524797 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6448884-mgzvk_calico-apiserver(602336c3-e32d-4a73-9d7d-d8429c2ec34b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6448884-mgzvk_calico-apiserver(602336c3-e32d-4a73-9d7d-d8429c2ec34b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d056d864443f3d462717819a5db0037788f02560836632b1654a0a7c4d3e9d6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6448884-mgzvk" podUID="602336c3-e32d-4a73-9d7d-d8429c2ec34b" Oct 13 05:44:40.531600 containerd[1633]: 
time="2025-10-13T05:44:40.531488420Z" level=error msg="Failed to destroy network for sandbox \"99037b28479a549e13cb3b01ae72d54566240345ccd8e0a65cbbf667fd3553f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.532790 containerd[1633]: time="2025-10-13T05:44:40.532750932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dcd7d6f54-swntb,Uid:d24849a4-8350-497f-a0b6-cfff5b84fbf1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99037b28479a549e13cb3b01ae72d54566240345ccd8e0a65cbbf667fd3553f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.533069 kubelet[2828]: E1013 05:44:40.532999 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99037b28479a549e13cb3b01ae72d54566240345ccd8e0a65cbbf667fd3553f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.533133 kubelet[2828]: E1013 05:44:40.533088 2828 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99037b28479a549e13cb3b01ae72d54566240345ccd8e0a65cbbf667fd3553f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dcd7d6f54-swntb" Oct 13 05:44:40.533641 kubelet[2828]: E1013 05:44:40.533157 2828 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99037b28479a549e13cb3b01ae72d54566240345ccd8e0a65cbbf667fd3553f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dcd7d6f54-swntb" Oct 13 05:44:40.533641 kubelet[2828]: E1013 05:44:40.533249 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dcd7d6f54-swntb_calico-system(d24849a4-8350-497f-a0b6-cfff5b84fbf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dcd7d6f54-swntb_calico-system(d24849a4-8350-497f-a0b6-cfff5b84fbf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99037b28479a549e13cb3b01ae72d54566240345ccd8e0a65cbbf667fd3553f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dcd7d6f54-swntb" podUID="d24849a4-8350-497f-a0b6-cfff5b84fbf1" Oct 13 05:44:40.551215 containerd[1633]: time="2025-10-13T05:44:40.551053391Z" level=error msg="Failed to destroy network for sandbox \"9324549c6bed968823708921f191e0f585b147f6dbb7031144e034777f157664\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.553772 containerd[1633]: time="2025-10-13T05:44:40.553716962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-blgvw,Uid:cf14e942-f41d-4f00-b058-07c5501bf435,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9324549c6bed968823708921f191e0f585b147f6dbb7031144e034777f157664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.554256 kubelet[2828]: E1013 05:44:40.554162 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9324549c6bed968823708921f191e0f585b147f6dbb7031144e034777f157664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.554416 containerd[1633]: time="2025-10-13T05:44:40.554339000Z" level=error msg="Failed to destroy network for sandbox \"c27e8330a8dc9b1a5d12fa40b50966595f2bf797445eedcf4fe659a992acc4dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.554579 kubelet[2828]: E1013 05:44:40.554390 2828 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9324549c6bed968823708921f191e0f585b147f6dbb7031144e034777f157664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-blgvw" Oct 13 05:44:40.554579 kubelet[2828]: E1013 05:44:40.554513 2828 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9324549c6bed968823708921f191e0f585b147f6dbb7031144e034777f157664\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-blgvw" Oct 13 05:44:40.554815 kubelet[2828]: E1013 05:44:40.554783 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-blgvw_calico-system(cf14e942-f41d-4f00-b058-07c5501bf435)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-blgvw_calico-system(cf14e942-f41d-4f00-b058-07c5501bf435)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9324549c6bed968823708921f191e0f585b147f6dbb7031144e034777f157664\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-blgvw" podUID="cf14e942-f41d-4f00-b058-07c5501bf435" Oct 13 05:44:40.555729 containerd[1633]: time="2025-10-13T05:44:40.555666182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f56546f6c-jppm5,Uid:92e7d7ad-68d1-42ab-914c-51196ff43384,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27e8330a8dc9b1a5d12fa40b50966595f2bf797445eedcf4fe659a992acc4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.555906 kubelet[2828]: E1013 05:44:40.555814 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27e8330a8dc9b1a5d12fa40b50966595f2bf797445eedcf4fe659a992acc4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.555906 kubelet[2828]: E1013 05:44:40.555841 2828 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27e8330a8dc9b1a5d12fa40b50966595f2bf797445eedcf4fe659a992acc4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f56546f6c-jppm5" Oct 13 05:44:40.555906 kubelet[2828]: E1013 05:44:40.555854 2828 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27e8330a8dc9b1a5d12fa40b50966595f2bf797445eedcf4fe659a992acc4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f56546f6c-jppm5" Oct 13 05:44:40.556027 kubelet[2828]: E1013 05:44:40.555887 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f56546f6c-jppm5_calico-apiserver(92e7d7ad-68d1-42ab-914c-51196ff43384)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f56546f6c-jppm5_calico-apiserver(92e7d7ad-68d1-42ab-914c-51196ff43384)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c27e8330a8dc9b1a5d12fa40b50966595f2bf797445eedcf4fe659a992acc4dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f56546f6c-jppm5" podUID="92e7d7ad-68d1-42ab-914c-51196ff43384" Oct 13 05:44:40.566527 containerd[1633]: time="2025-10-13T05:44:40.566477985Z" level=error msg="Failed to destroy network for sandbox \"72628fce762447b48ab158f25d5be8ae28fe6475a026dba406b1798de779bded\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.567673 containerd[1633]: time="2025-10-13T05:44:40.567634587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-prlrf,Uid:76606c85-282e-4768-97b9-db0bcdb8b7da,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72628fce762447b48ab158f25d5be8ae28fe6475a026dba406b1798de779bded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.567832 kubelet[2828]: E1013 05:44:40.567790 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72628fce762447b48ab158f25d5be8ae28fe6475a026dba406b1798de779bded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.567832 kubelet[2828]: E1013 05:44:40.567825 2828 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72628fce762447b48ab158f25d5be8ae28fe6475a026dba406b1798de779bded\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-prlrf" Oct 13 05:44:40.567905 kubelet[2828]: E1013 05:44:40.567844 2828 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72628fce762447b48ab158f25d5be8ae28fe6475a026dba406b1798de779bded\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-prlrf" Oct 13 05:44:40.567905 kubelet[2828]: E1013 05:44:40.567881 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-prlrf_kube-system(76606c85-282e-4768-97b9-db0bcdb8b7da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-prlrf_kube-system(76606c85-282e-4768-97b9-db0bcdb8b7da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72628fce762447b48ab158f25d5be8ae28fe6475a026dba406b1798de779bded\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-prlrf" podUID="76606c85-282e-4768-97b9-db0bcdb8b7da" Oct 13 05:44:40.623628 systemd[1]: Created slice kubepods-besteffort-pod02edaea7_f337_4fcf_9037_ac41cfab2259.slice - libcontainer container kubepods-besteffort-pod02edaea7_f337_4fcf_9037_ac41cfab2259.slice. 
Oct 13 05:44:40.625727 containerd[1633]: time="2025-10-13T05:44:40.625687169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj54f,Uid:02edaea7-f337-4fcf-9037-ac41cfab2259,Namespace:calico-system,Attempt:0,}" Oct 13 05:44:40.678206 containerd[1633]: time="2025-10-13T05:44:40.678154293Z" level=error msg="Failed to destroy network for sandbox \"7da97b68f64483b22ac84d99e13823950a83ab19169906e051cdab6cbda952b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.679449 containerd[1633]: time="2025-10-13T05:44:40.679419248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj54f,Uid:02edaea7-f337-4fcf-9037-ac41cfab2259,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7da97b68f64483b22ac84d99e13823950a83ab19169906e051cdab6cbda952b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.679689 kubelet[2828]: E1013 05:44:40.679646 2828 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7da97b68f64483b22ac84d99e13823950a83ab19169906e051cdab6cbda952b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:44:40.679745 kubelet[2828]: E1013 05:44:40.679714 2828 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7da97b68f64483b22ac84d99e13823950a83ab19169906e051cdab6cbda952b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vj54f" Oct 13 05:44:40.679745 kubelet[2828]: E1013 05:44:40.679736 2828 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7da97b68f64483b22ac84d99e13823950a83ab19169906e051cdab6cbda952b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vj54f" Oct 13 05:44:40.679811 kubelet[2828]: E1013 05:44:40.679782 2828 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vj54f_calico-system(02edaea7-f337-4fcf-9037-ac41cfab2259)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vj54f_calico-system(02edaea7-f337-4fcf-9037-ac41cfab2259)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7da97b68f64483b22ac84d99e13823950a83ab19169906e051cdab6cbda952b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vj54f" podUID="02edaea7-f337-4fcf-9037-ac41cfab2259" Oct 13 05:44:40.704130 containerd[1633]: time="2025-10-13T05:44:40.704063204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:44:41.135936 systemd[1]: run-netns-cni\x2dc0c9d11f\x2dd2ac\x2d7735\x2d39eb\x2da5dfeabe2544.mount: Deactivated successfully. Oct 13 05:44:49.213448 systemd[1]: Started sshd@9-10.0.0.150:22-10.0.0.1:37934.service - OpenSSH per-connection server daemon (10.0.0.1:37934). 
Oct 13 05:44:49.300054 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 37934 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:44:49.302030 sshd-session[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:44:49.308364 systemd-logind[1609]: New session 10 of user core. Oct 13 05:44:49.315268 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:44:49.725062 sshd[3966]: Connection closed by 10.0.0.1 port 37934 Oct 13 05:44:49.725397 sshd-session[3963]: pam_unix(sshd:session): session closed for user core Oct 13 05:44:49.729621 systemd-logind[1609]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:44:49.731079 systemd[1]: sshd@9-10.0.0.150:22-10.0.0.1:37934.service: Deactivated successfully. Oct 13 05:44:49.734147 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:44:49.736562 systemd-logind[1609]: Removed session 10. Oct 13 05:44:49.796872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount474643058.mount: Deactivated successfully. 
Oct 13 05:44:49.919794 containerd[1633]: time="2025-10-13T05:44:49.919740314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:49.921359 containerd[1633]: time="2025-10-13T05:44:49.921322664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:44:49.922584 containerd[1633]: time="2025-10-13T05:44:49.922536532Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:49.924232 containerd[1633]: time="2025-10-13T05:44:49.924199604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:49.924777 containerd[1633]: time="2025-10-13T05:44:49.924733206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.220628534s" Oct 13 05:44:49.924777 containerd[1633]: time="2025-10-13T05:44:49.924775796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:44:49.933143 containerd[1633]: time="2025-10-13T05:44:49.933064013Z" level=info msg="CreateContainer within sandbox \"eb2770f99ee0e0d88acab567b3e51487b6d7798ad5a3fd2b4c7cf72f54ae9483\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:44:49.949504 containerd[1633]: time="2025-10-13T05:44:49.949457815Z" level=info msg="Container 
9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:49.958770 containerd[1633]: time="2025-10-13T05:44:49.958734718Z" level=info msg="CreateContainer within sandbox \"eb2770f99ee0e0d88acab567b3e51487b6d7798ad5a3fd2b4c7cf72f54ae9483\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628\"" Oct 13 05:44:49.959249 containerd[1633]: time="2025-10-13T05:44:49.959220319Z" level=info msg="StartContainer for \"9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628\"" Oct 13 05:44:49.960649 containerd[1633]: time="2025-10-13T05:44:49.960624626Z" level=info msg="connecting to shim 9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628" address="unix:///run/containerd/s/455bcce514bd53b3d4cb135eab3643e32d0f2a7331cfec10a9e3f1ecaaf51b21" protocol=ttrpc version=3 Oct 13 05:44:49.985072 systemd[1]: Started cri-containerd-9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628.scope - libcontainer container 9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628. Oct 13 05:44:50.036154 containerd[1633]: time="2025-10-13T05:44:50.036107688Z" level=info msg="StartContainer for \"9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628\" returns successfully" Oct 13 05:44:50.115849 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:44:50.116025 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 05:44:50.287813 kubelet[2828]: I1013 05:44:50.287766 2828 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6wlv\" (UniqueName: \"kubernetes.io/projected/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-kube-api-access-c6wlv\") pod \"d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3\" (UID: \"d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3\") " Oct 13 05:44:50.287813 kubelet[2828]: I1013 05:44:50.287816 2828 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-whisker-ca-bundle\") pod \"d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3\" (UID: \"d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3\") " Oct 13 05:44:50.288372 kubelet[2828]: I1013 05:44:50.287844 2828 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-whisker-backend-key-pair\") pod \"d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3\" (UID: \"d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3\") " Oct 13 05:44:50.288912 kubelet[2828]: I1013 05:44:50.288849 2828 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3" (UID: "d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:44:50.293122 kubelet[2828]: I1013 05:44:50.293048 2828 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-kube-api-access-c6wlv" (OuterVolumeSpecName: "kube-api-access-c6wlv") pod "d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3" (UID: "d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3"). InnerVolumeSpecName "kube-api-access-c6wlv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:44:50.293332 kubelet[2828]: I1013 05:44:50.293296 2828 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3" (UID: "d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:44:50.389057 kubelet[2828]: I1013 05:44:50.389005 2828 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 05:44:50.389057 kubelet[2828]: I1013 05:44:50.389037 2828 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 05:44:50.389057 kubelet[2828]: I1013 05:44:50.389046 2828 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c6wlv\" (UniqueName: \"kubernetes.io/projected/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3-kube-api-access-c6wlv\") on node \"localhost\" DevicePath \"\"" Oct 13 05:44:50.625777 systemd[1]: Removed slice kubepods-besteffort-podd8d8e227_5e2a_4e22_a4b4_3c2d6d43d4a3.slice - libcontainer container kubepods-besteffort-podd8d8e227_5e2a_4e22_a4b4_3c2d6d43d4a3.slice. 
Oct 13 05:44:50.756947 kubelet[2828]: I1013 05:44:50.756442 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q42wc" podStartSLOduration=2.055420907 podStartE2EDuration="22.756415958s" podCreationTimestamp="2025-10-13 05:44:28 +0000 UTC" firstStartedPulling="2025-10-13 05:44:29.224501447 +0000 UTC m=+20.690793708" lastFinishedPulling="2025-10-13 05:44:49.925496498 +0000 UTC m=+41.391788759" observedRunningTime="2025-10-13 05:44:50.755609695 +0000 UTC m=+42.221901976" watchObservedRunningTime="2025-10-13 05:44:50.756415958 +0000 UTC m=+42.222708219" Oct 13 05:44:50.802902 systemd[1]: var-lib-kubelet-pods-d8d8e227\x2d5e2a\x2d4e22\x2da4b4\x2d3c2d6d43d4a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc6wlv.mount: Deactivated successfully. Oct 13 05:44:50.803047 systemd[1]: var-lib-kubelet-pods-d8d8e227\x2d5e2a\x2d4e22\x2da4b4\x2d3c2d6d43d4a3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:44:50.818453 systemd[1]: Created slice kubepods-besteffort-pod3db5727e_a3a5_4321_bc1a_bb11b0656e6e.slice - libcontainer container kubepods-besteffort-pod3db5727e_a3a5_4321_bc1a_bb11b0656e6e.slice. 
Oct 13 05:44:50.892270 kubelet[2828]: I1013 05:44:50.892133 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3db5727e-a3a5-4321-bc1a-bb11b0656e6e-whisker-backend-key-pair\") pod \"whisker-6bb575568-6qcvz\" (UID: \"3db5727e-a3a5-4321-bc1a-bb11b0656e6e\") " pod="calico-system/whisker-6bb575568-6qcvz" Oct 13 05:44:50.892270 kubelet[2828]: I1013 05:44:50.892178 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swmhz\" (UniqueName: \"kubernetes.io/projected/3db5727e-a3a5-4321-bc1a-bb11b0656e6e-kube-api-access-swmhz\") pod \"whisker-6bb575568-6qcvz\" (UID: \"3db5727e-a3a5-4321-bc1a-bb11b0656e6e\") " pod="calico-system/whisker-6bb575568-6qcvz" Oct 13 05:44:50.892270 kubelet[2828]: I1013 05:44:50.892201 2828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3db5727e-a3a5-4321-bc1a-bb11b0656e6e-whisker-ca-bundle\") pod \"whisker-6bb575568-6qcvz\" (UID: \"3db5727e-a3a5-4321-bc1a-bb11b0656e6e\") " pod="calico-system/whisker-6bb575568-6qcvz" Oct 13 05:44:50.940134 containerd[1633]: time="2025-10-13T05:44:50.940087928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628\" id:\"26277477b9ebddaa88fff5799811b25a549b053b56e9524fd90037915e023106\" pid:4063 exit_status:1 exited_at:{seconds:1760334290 nanos:923486828}" Oct 13 05:44:51.125207 containerd[1633]: time="2025-10-13T05:44:51.125149139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb575568-6qcvz,Uid:3db5727e-a3a5-4321-bc1a-bb11b0656e6e,Namespace:calico-system,Attempt:0,}" Oct 13 05:44:51.618805 containerd[1633]: time="2025-10-13T05:44:51.618752988Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5f56546f6c-jppm5,Uid:92e7d7ad-68d1-42ab-914c-51196ff43384,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:44:51.619077 containerd[1633]: time="2025-10-13T05:44:51.618845832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6448884-mgzvk,Uid:602336c3-e32d-4a73-9d7d-d8429c2ec34b,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:44:51.619077 containerd[1633]: time="2025-10-13T05:44:51.618945239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dcd7d6f54-swntb,Uid:d24849a4-8350-497f-a0b6-cfff5b84fbf1,Namespace:calico-system,Attempt:0,}" Oct 13 05:44:51.751203 systemd-networkd[1533]: cali3c10ff6d6c4: Link UP Oct 13 05:44:51.751449 systemd-networkd[1533]: cali3c10ff6d6c4: Gained carrier Oct 13 05:44:51.766029 containerd[1633]: 2025-10-13 05:44:51.459 [INFO][4153] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:44:51.766029 containerd[1633]: 2025-10-13 05:44:51.486 [INFO][4153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6bb575568--6qcvz-eth0 whisker-6bb575568- calico-system 3db5727e-a3a5-4321-bc1a-bb11b0656e6e 1009 0 2025-10-13 05:44:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bb575568 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6bb575568-6qcvz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3c10ff6d6c4 [] [] }} ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Namespace="calico-system" Pod="whisker-6bb575568-6qcvz" WorkloadEndpoint="localhost-k8s-whisker--6bb575568--6qcvz-" Oct 13 05:44:51.766029 containerd[1633]: 2025-10-13 05:44:51.486 [INFO][4153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Namespace="calico-system" Pod="whisker-6bb575568-6qcvz" WorkloadEndpoint="localhost-k8s-whisker--6bb575568--6qcvz-eth0" Oct 13 05:44:51.766029 containerd[1633]: 2025-10-13 05:44:51.599 [INFO][4200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" HandleID="k8s-pod-network.9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Workload="localhost-k8s-whisker--6bb575568--6qcvz-eth0" Oct 13 05:44:51.766255 containerd[1633]: 2025-10-13 05:44:51.599 [INFO][4200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" HandleID="k8s-pod-network.9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Workload="localhost-k8s-whisker--6bb575568--6qcvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005036d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6bb575568-6qcvz", "timestamp":"2025-10-13 05:44:51.599097907 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:44:51.766255 containerd[1633]: 2025-10-13 05:44:51.599 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:44:51.766255 containerd[1633]: 2025-10-13 05:44:51.600 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:44:51.766255 containerd[1633]: 2025-10-13 05:44:51.600 [INFO][4200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:44:51.766255 containerd[1633]: 2025-10-13 05:44:51.611 [INFO][4200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" host="localhost" Oct 13 05:44:51.766255 containerd[1633]: 2025-10-13 05:44:51.616 [INFO][4200] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:44:51.766255 containerd[1633]: 2025-10-13 05:44:51.620 [INFO][4200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:44:51.766255 containerd[1633]: 2025-10-13 05:44:51.622 [INFO][4200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:51.766255 containerd[1633]: 2025-10-13 05:44:51.623 [INFO][4200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:51.766255 containerd[1633]: 2025-10-13 05:44:51.623 [INFO][4200] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" host="localhost" Oct 13 05:44:51.766536 containerd[1633]: 2025-10-13 05:44:51.624 [INFO][4200] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1 Oct 13 05:44:51.766536 containerd[1633]: 2025-10-13 05:44:51.727 [INFO][4200] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" host="localhost" Oct 13 05:44:51.766536 containerd[1633]: 2025-10-13 05:44:51.734 [INFO][4200] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" host="localhost" Oct 13 05:44:51.766536 containerd[1633]: 2025-10-13 05:44:51.735 [INFO][4200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" host="localhost" Oct 13 05:44:51.766536 containerd[1633]: 2025-10-13 05:44:51.735 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:44:51.766536 containerd[1633]: 2025-10-13 05:44:51.735 [INFO][4200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" HandleID="k8s-pod-network.9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Workload="localhost-k8s-whisker--6bb575568--6qcvz-eth0" Oct 13 05:44:51.766661 containerd[1633]: 2025-10-13 05:44:51.739 [INFO][4153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Namespace="calico-system" Pod="whisker-6bb575568-6qcvz" WorkloadEndpoint="localhost-k8s-whisker--6bb575568--6qcvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bb575568--6qcvz-eth0", GenerateName:"whisker-6bb575568-", Namespace:"calico-system", SelfLink:"", UID:"3db5727e-a3a5-4321-bc1a-bb11b0656e6e", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bb575568", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6bb575568-6qcvz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3c10ff6d6c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:51.766661 containerd[1633]: 2025-10-13 05:44:51.740 [INFO][4153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Namespace="calico-system" Pod="whisker-6bb575568-6qcvz" WorkloadEndpoint="localhost-k8s-whisker--6bb575568--6qcvz-eth0" Oct 13 05:44:51.766740 containerd[1633]: 2025-10-13 05:44:51.740 [INFO][4153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c10ff6d6c4 ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Namespace="calico-system" Pod="whisker-6bb575568-6qcvz" WorkloadEndpoint="localhost-k8s-whisker--6bb575568--6qcvz-eth0" Oct 13 05:44:51.766740 containerd[1633]: 2025-10-13 05:44:51.750 [INFO][4153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Namespace="calico-system" Pod="whisker-6bb575568-6qcvz" WorkloadEndpoint="localhost-k8s-whisker--6bb575568--6qcvz-eth0" Oct 13 05:44:51.766784 containerd[1633]: 2025-10-13 05:44:51.751 [INFO][4153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Namespace="calico-system" Pod="whisker-6bb575568-6qcvz" 
WorkloadEndpoint="localhost-k8s-whisker--6bb575568--6qcvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bb575568--6qcvz-eth0", GenerateName:"whisker-6bb575568-", Namespace:"calico-system", SelfLink:"", UID:"3db5727e-a3a5-4321-bc1a-bb11b0656e6e", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bb575568", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1", Pod:"whisker-6bb575568-6qcvz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3c10ff6d6c4", MAC:"4e:ff:83:83:39:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:51.766837 containerd[1633]: 2025-10-13 05:44:51.759 [INFO][4153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" Namespace="calico-system" Pod="whisker-6bb575568-6qcvz" WorkloadEndpoint="localhost-k8s-whisker--6bb575568--6qcvz-eth0" Oct 13 05:44:51.894472 containerd[1633]: time="2025-10-13T05:44:51.893642486Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628\" id:\"0f1b0aafa224b76c1ad894c23190bd7f6988e9e12b3de7ab8ec0189d94bac80e\" pid:4245 exit_status:1 exited_at:{seconds:1760334291 nanos:891889957}" Oct 13 05:44:51.938895 containerd[1633]: time="2025-10-13T05:44:51.938854035Z" level=info msg="connecting to shim 9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1" address="unix:///run/containerd/s/c896b54e71ba03b37fead506c91cee42d214b8a4e35b02daa0efc9d4cbd5cb3d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:51.943304 systemd-networkd[1533]: calia42c876a2e9: Link UP Oct 13 05:44:51.944893 systemd-networkd[1533]: calia42c876a2e9: Gained carrier Oct 13 05:44:51.963649 containerd[1633]: 2025-10-13 05:44:51.826 [INFO][4265] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0 calico-apiserver-5f56546f6c- calico-apiserver 92e7d7ad-68d1-42ab-914c-51196ff43384 899 0 2025-10-13 05:44:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f56546f6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f56546f6c-jppm5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia42c876a2e9 [] [] }} ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-jppm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-" Oct 13 05:44:51.963649 containerd[1633]: 2025-10-13 05:44:51.826 [INFO][4265] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-jppm5" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0" Oct 13 05:44:51.963649 containerd[1633]: 2025-10-13 05:44:51.881 [INFO][4303] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" HandleID="k8s-pod-network.7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Workload="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0" Oct 13 05:44:51.964183 containerd[1633]: 2025-10-13 05:44:51.881 [INFO][4303] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" HandleID="k8s-pod-network.7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Workload="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f56546f6c-jppm5", "timestamp":"2025-10-13 05:44:51.881064869 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:44:51.964183 containerd[1633]: 2025-10-13 05:44:51.881 [INFO][4303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:44:51.964183 containerd[1633]: 2025-10-13 05:44:51.881 [INFO][4303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:44:51.964183 containerd[1633]: 2025-10-13 05:44:51.881 [INFO][4303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:44:51.964183 containerd[1633]: 2025-10-13 05:44:51.896 [INFO][4303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" host="localhost" Oct 13 05:44:51.964183 containerd[1633]: 2025-10-13 05:44:51.906 [INFO][4303] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:44:51.964183 containerd[1633]: 2025-10-13 05:44:51.911 [INFO][4303] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:44:51.964183 containerd[1633]: 2025-10-13 05:44:51.913 [INFO][4303] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:51.964183 containerd[1633]: 2025-10-13 05:44:51.915 [INFO][4303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:51.964183 containerd[1633]: 2025-10-13 05:44:51.915 [INFO][4303] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" host="localhost" Oct 13 05:44:51.964419 containerd[1633]: 2025-10-13 05:44:51.916 [INFO][4303] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17 Oct 13 05:44:51.964419 containerd[1633]: 2025-10-13 05:44:51.919 [INFO][4303] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" host="localhost" Oct 13 05:44:51.964419 containerd[1633]: 2025-10-13 05:44:51.923 [INFO][4303] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" host="localhost" Oct 13 05:44:51.964419 containerd[1633]: 2025-10-13 05:44:51.924 [INFO][4303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" host="localhost" Oct 13 05:44:51.964419 containerd[1633]: 2025-10-13 05:44:51.924 [INFO][4303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:44:51.964419 containerd[1633]: 2025-10-13 05:44:51.924 [INFO][4303] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" HandleID="k8s-pod-network.7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Workload="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0" Oct 13 05:44:51.964559 containerd[1633]: 2025-10-13 05:44:51.929 [INFO][4265] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-jppm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0", GenerateName:"calico-apiserver-5f56546f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"92e7d7ad-68d1-42ab-914c-51196ff43384", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f56546f6c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f56546f6c-jppm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia42c876a2e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:51.964612 containerd[1633]: 2025-10-13 05:44:51.930 [INFO][4265] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-jppm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0" Oct 13 05:44:51.964612 containerd[1633]: 2025-10-13 05:44:51.931 [INFO][4265] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia42c876a2e9 ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-jppm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0" Oct 13 05:44:51.964612 containerd[1633]: 2025-10-13 05:44:51.946 [INFO][4265] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-jppm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0" Oct 13 05:44:51.964679 containerd[1633]: 2025-10-13 05:44:51.946 [INFO][4265] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-jppm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0", GenerateName:"calico-apiserver-5f56546f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"92e7d7ad-68d1-42ab-914c-51196ff43384", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f56546f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17", Pod:"calico-apiserver-5f56546f6c-jppm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia42c876a2e9", MAC:"b2:14:3b:99:cf:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:51.964729 containerd[1633]: 2025-10-13 05:44:51.960 [INFO][4265] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-jppm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--jppm5-eth0" Oct 13 05:44:51.975698 systemd-networkd[1533]: vxlan.calico: Link UP Oct 13 05:44:51.975712 systemd-networkd[1533]: vxlan.calico: Gained carrier Oct 13 05:44:51.976215 systemd[1]: Started cri-containerd-9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1.scope - libcontainer container 9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1. Oct 13 05:44:52.004102 containerd[1633]: time="2025-10-13T05:44:52.004038973Z" level=info msg="connecting to shim 7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17" address="unix:///run/containerd/s/5e101674586c8a17338a03192bad61adb4a492f89d5a5fca39d2f647096b23ee" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:52.014016 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:44:52.048986 systemd-networkd[1533]: cali8a64958b20a: Link UP Oct 13 05:44:52.049794 systemd-networkd[1533]: cali8a64958b20a: Gained carrier Oct 13 05:44:52.060125 systemd[1]: Started cri-containerd-7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17.scope - libcontainer container 7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17. 
Oct 13 05:44:52.068246 containerd[1633]: 2025-10-13 05:44:51.839 [INFO][4264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0 calico-kube-controllers-6dcd7d6f54- calico-system d24849a4-8350-497f-a0b6-cfff5b84fbf1 894 0 2025-10-13 05:44:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dcd7d6f54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6dcd7d6f54-swntb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8a64958b20a [] [] }} ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Namespace="calico-system" Pod="calico-kube-controllers-6dcd7d6f54-swntb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-" Oct 13 05:44:52.068246 containerd[1633]: 2025-10-13 05:44:51.839 [INFO][4264] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Namespace="calico-system" Pod="calico-kube-controllers-6dcd7d6f54-swntb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0" Oct 13 05:44:52.068246 containerd[1633]: 2025-10-13 05:44:51.894 [INFO][4311] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" HandleID="k8s-pod-network.b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Workload="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0" Oct 13 05:44:52.068514 containerd[1633]: 2025-10-13 05:44:51.895 [INFO][4311] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" HandleID="k8s-pod-network.b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Workload="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6dcd7d6f54-swntb", "timestamp":"2025-10-13 05:44:51.894300001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:44:52.068514 containerd[1633]: 2025-10-13 05:44:51.895 [INFO][4311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:44:52.068514 containerd[1633]: 2025-10-13 05:44:51.924 [INFO][4311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:44:52.068514 containerd[1633]: 2025-10-13 05:44:51.924 [INFO][4311] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:44:52.068514 containerd[1633]: 2025-10-13 05:44:51.992 [INFO][4311] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" host="localhost" Oct 13 05:44:52.068514 containerd[1633]: 2025-10-13 05:44:52.011 [INFO][4311] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:44:52.068514 containerd[1633]: 2025-10-13 05:44:52.018 [INFO][4311] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:44:52.068514 containerd[1633]: 2025-10-13 05:44:52.020 [INFO][4311] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:52.068514 containerd[1633]: 2025-10-13 05:44:52.023 [INFO][4311] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:52.068514 containerd[1633]: 2025-10-13 05:44:52.023 [INFO][4311] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" host="localhost" Oct 13 05:44:52.070159 containerd[1633]: 2025-10-13 05:44:52.024 [INFO][4311] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54 Oct 13 05:44:52.070159 containerd[1633]: 2025-10-13 05:44:52.029 [INFO][4311] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" host="localhost" Oct 13 05:44:52.070159 containerd[1633]: 2025-10-13 05:44:52.037 [INFO][4311] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" host="localhost" Oct 13 05:44:52.070159 containerd[1633]: 2025-10-13 05:44:52.037 [INFO][4311] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" host="localhost" Oct 13 05:44:52.070159 containerd[1633]: 2025-10-13 05:44:52.037 [INFO][4311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:44:52.070159 containerd[1633]: 2025-10-13 05:44:52.037 [INFO][4311] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" HandleID="k8s-pod-network.b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Workload="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0" Oct 13 05:44:52.070287 containerd[1633]: 2025-10-13 05:44:52.045 [INFO][4264] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Namespace="calico-system" Pod="calico-kube-controllers-6dcd7d6f54-swntb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0", GenerateName:"calico-kube-controllers-6dcd7d6f54-", Namespace:"calico-system", SelfLink:"", UID:"d24849a4-8350-497f-a0b6-cfff5b84fbf1", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dcd7d6f54", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6dcd7d6f54-swntb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a64958b20a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:52.070350 containerd[1633]: 2025-10-13 05:44:52.045 [INFO][4264] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Namespace="calico-system" Pod="calico-kube-controllers-6dcd7d6f54-swntb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0" Oct 13 05:44:52.070350 containerd[1633]: 2025-10-13 05:44:52.045 [INFO][4264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a64958b20a ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Namespace="calico-system" Pod="calico-kube-controllers-6dcd7d6f54-swntb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0" Oct 13 05:44:52.070350 containerd[1633]: 2025-10-13 05:44:52.050 [INFO][4264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Namespace="calico-system" Pod="calico-kube-controllers-6dcd7d6f54-swntb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0" Oct 13 05:44:52.070410 containerd[1633]: 
2025-10-13 05:44:52.050 [INFO][4264] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Namespace="calico-system" Pod="calico-kube-controllers-6dcd7d6f54-swntb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0", GenerateName:"calico-kube-controllers-6dcd7d6f54-", Namespace:"calico-system", SelfLink:"", UID:"d24849a4-8350-497f-a0b6-cfff5b84fbf1", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dcd7d6f54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54", Pod:"calico-kube-controllers-6dcd7d6f54-swntb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a64958b20a", MAC:"6e:a2:51:20:28:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:52.070458 containerd[1633]: 
2025-10-13 05:44:52.064 [INFO][4264] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" Namespace="calico-system" Pod="calico-kube-controllers-6dcd7d6f54-swntb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dcd7d6f54--swntb-eth0" Oct 13 05:44:52.082968 containerd[1633]: time="2025-10-13T05:44:52.082141992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb575568-6qcvz,Uid:3db5727e-a3a5-4321-bc1a-bb11b0656e6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1\"" Oct 13 05:44:52.087312 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:44:52.089751 containerd[1633]: time="2025-10-13T05:44:52.089723622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:44:52.096939 containerd[1633]: time="2025-10-13T05:44:52.096339148Z" level=info msg="connecting to shim b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54" address="unix:///run/containerd/s/c391bc37966971f679b8d6a2b0ddd4b2da089e47e237561eeb8343b959afbb7e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:52.133114 systemd[1]: Started cri-containerd-b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54.scope - libcontainer container b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54. 
Oct 13 05:44:52.140425 containerd[1633]: time="2025-10-13T05:44:52.140372684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f56546f6c-jppm5,Uid:92e7d7ad-68d1-42ab-914c-51196ff43384,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17\"" Oct 13 05:44:52.152715 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:44:52.159711 systemd-networkd[1533]: calib210666cf4f: Link UP Oct 13 05:44:52.161567 systemd-networkd[1533]: calib210666cf4f: Gained carrier Oct 13 05:44:52.179532 containerd[1633]: 2025-10-13 05:44:51.853 [INFO][4266] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0 calico-apiserver-c6448884- calico-apiserver 602336c3-e32d-4a73-9d7d-d8429c2ec34b 897 0 2025-10-13 05:44:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c6448884 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c6448884-mgzvk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib210666cf4f [] [] }} ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Namespace="calico-apiserver" Pod="calico-apiserver-c6448884-mgzvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6448884--mgzvk-" Oct 13 05:44:52.179532 containerd[1633]: 2025-10-13 05:44:51.853 [INFO][4266] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Namespace="calico-apiserver" Pod="calico-apiserver-c6448884-mgzvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0" Oct 13 
05:44:52.179532 containerd[1633]: 2025-10-13 05:44:51.906 [INFO][4319] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" HandleID="k8s-pod-network.7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Workload="localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0" Oct 13 05:44:52.179790 containerd[1633]: 2025-10-13 05:44:51.907 [INFO][4319] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" HandleID="k8s-pod-network.7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Workload="localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000590a30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c6448884-mgzvk", "timestamp":"2025-10-13 05:44:51.906564319 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:44:52.179790 containerd[1633]: 2025-10-13 05:44:51.907 [INFO][4319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:44:52.179790 containerd[1633]: 2025-10-13 05:44:52.038 [INFO][4319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:44:52.179790 containerd[1633]: 2025-10-13 05:44:52.039 [INFO][4319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:44:52.179790 containerd[1633]: 2025-10-13 05:44:52.092 [INFO][4319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" host="localhost" Oct 13 05:44:52.179790 containerd[1633]: 2025-10-13 05:44:52.107 [INFO][4319] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:44:52.179790 containerd[1633]: 2025-10-13 05:44:52.123 [INFO][4319] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:44:52.179790 containerd[1633]: 2025-10-13 05:44:52.125 [INFO][4319] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:52.179790 containerd[1633]: 2025-10-13 05:44:52.131 [INFO][4319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:52.179790 containerd[1633]: 2025-10-13 05:44:52.131 [INFO][4319] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" host="localhost" Oct 13 05:44:52.180089 containerd[1633]: 2025-10-13 05:44:52.137 [INFO][4319] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d Oct 13 05:44:52.180089 containerd[1633]: 2025-10-13 05:44:52.142 [INFO][4319] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" host="localhost" Oct 13 05:44:52.180089 containerd[1633]: 2025-10-13 05:44:52.149 [INFO][4319] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" host="localhost" Oct 13 05:44:52.180089 containerd[1633]: 2025-10-13 05:44:52.149 [INFO][4319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" host="localhost" Oct 13 05:44:52.180089 containerd[1633]: 2025-10-13 05:44:52.149 [INFO][4319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:44:52.180089 containerd[1633]: 2025-10-13 05:44:52.149 [INFO][4319] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" HandleID="k8s-pod-network.7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Workload="localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0" Oct 13 05:44:52.180204 containerd[1633]: 2025-10-13 05:44:52.155 [INFO][4266] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Namespace="calico-apiserver" Pod="calico-apiserver-c6448884-mgzvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0", GenerateName:"calico-apiserver-c6448884-", Namespace:"calico-apiserver", SelfLink:"", UID:"602336c3-e32d-4a73-9d7d-d8429c2ec34b", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6448884", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c6448884-mgzvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib210666cf4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:52.180261 containerd[1633]: 2025-10-13 05:44:52.155 [INFO][4266] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Namespace="calico-apiserver" Pod="calico-apiserver-c6448884-mgzvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0" Oct 13 05:44:52.180261 containerd[1633]: 2025-10-13 05:44:52.155 [INFO][4266] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib210666cf4f ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Namespace="calico-apiserver" Pod="calico-apiserver-c6448884-mgzvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0" Oct 13 05:44:52.180261 containerd[1633]: 2025-10-13 05:44:52.162 [INFO][4266] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Namespace="calico-apiserver" Pod="calico-apiserver-c6448884-mgzvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0" Oct 13 05:44:52.180331 containerd[1633]: 2025-10-13 05:44:52.163 [INFO][4266] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Namespace="calico-apiserver" Pod="calico-apiserver-c6448884-mgzvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0", GenerateName:"calico-apiserver-c6448884-", Namespace:"calico-apiserver", SelfLink:"", UID:"602336c3-e32d-4a73-9d7d-d8429c2ec34b", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6448884", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d", Pod:"calico-apiserver-c6448884-mgzvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib210666cf4f", MAC:"da:ae:a6:30:1d:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:52.180383 containerd[1633]: 2025-10-13 05:44:52.174 [INFO][4266] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" Namespace="calico-apiserver" Pod="calico-apiserver-c6448884-mgzvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6448884--mgzvk-eth0" Oct 13 05:44:52.194686 containerd[1633]: time="2025-10-13T05:44:52.194644245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dcd7d6f54-swntb,Uid:d24849a4-8350-497f-a0b6-cfff5b84fbf1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54\"" Oct 13 05:44:52.206940 containerd[1633]: time="2025-10-13T05:44:52.206872706Z" level=info msg="connecting to shim 7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d" address="unix:///run/containerd/s/22e69fa9bd9d9188df4e01e26f3e36be286fe7ae92b0daeb2799be351c4dbb5d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:52.236078 systemd[1]: Started cri-containerd-7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d.scope - libcontainer container 7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d. 
Oct 13 05:44:52.251014 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:44:52.284491 containerd[1633]: time="2025-10-13T05:44:52.284443768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6448884-mgzvk,Uid:602336c3-e32d-4a73-9d7d-d8429c2ec34b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d\"" Oct 13 05:44:52.618273 kubelet[2828]: E1013 05:44:52.618231 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:52.618818 containerd[1633]: time="2025-10-13T05:44:52.618685418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-prlrf,Uid:76606c85-282e-4768-97b9-db0bcdb8b7da,Namespace:kube-system,Attempt:0,}" Oct 13 05:44:52.622085 kubelet[2828]: I1013 05:44:52.622057 2828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3" path="/var/lib/kubelet/pods/d8d8e227-5e2a-4e22-a4b4-3c2d6d43d4a3/volumes" Oct 13 05:44:52.715804 systemd-networkd[1533]: califc6abf3937c: Link UP Oct 13 05:44:52.716419 systemd-networkd[1533]: califc6abf3937c: Gained carrier Oct 13 05:44:52.728996 containerd[1633]: 2025-10-13 05:44:52.653 [INFO][4608] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--prlrf-eth0 coredns-668d6bf9bc- kube-system 76606c85-282e-4768-97b9-db0bcdb8b7da 900 0 2025-10-13 05:44:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-prlrf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc6abf3937c [{dns 
UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-prlrf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--prlrf-" Oct 13 05:44:52.728996 containerd[1633]: 2025-10-13 05:44:52.653 [INFO][4608] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-prlrf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--prlrf-eth0" Oct 13 05:44:52.728996 containerd[1633]: 2025-10-13 05:44:52.679 [INFO][4622] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" HandleID="k8s-pod-network.f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Workload="localhost-k8s-coredns--668d6bf9bc--prlrf-eth0" Oct 13 05:44:52.729206 containerd[1633]: 2025-10-13 05:44:52.679 [INFO][4622] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" HandleID="k8s-pod-network.f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Workload="localhost-k8s-coredns--668d6bf9bc--prlrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-prlrf", "timestamp":"2025-10-13 05:44:52.679715774 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:44:52.729206 containerd[1633]: 2025-10-13 05:44:52.679 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:44:52.729206 containerd[1633]: 2025-10-13 05:44:52.680 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:44:52.729206 containerd[1633]: 2025-10-13 05:44:52.680 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:44:52.729206 containerd[1633]: 2025-10-13 05:44:52.686 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" host="localhost" Oct 13 05:44:52.729206 containerd[1633]: 2025-10-13 05:44:52.690 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:44:52.729206 containerd[1633]: 2025-10-13 05:44:52.695 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:44:52.729206 containerd[1633]: 2025-10-13 05:44:52.696 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:52.729206 containerd[1633]: 2025-10-13 05:44:52.698 [INFO][4622] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:52.729206 containerd[1633]: 2025-10-13 05:44:52.698 [INFO][4622] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" host="localhost" Oct 13 05:44:52.729430 containerd[1633]: 2025-10-13 05:44:52.699 [INFO][4622] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9 Oct 13 05:44:52.729430 containerd[1633]: 2025-10-13 05:44:52.702 [INFO][4622] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" host="localhost" Oct 13 05:44:52.729430 containerd[1633]: 2025-10-13 05:44:52.707 [INFO][4622] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" host="localhost" Oct 13 05:44:52.729430 containerd[1633]: 2025-10-13 05:44:52.707 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" host="localhost" Oct 13 05:44:52.729430 containerd[1633]: 2025-10-13 05:44:52.707 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:44:52.729430 containerd[1633]: 2025-10-13 05:44:52.708 [INFO][4622] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" HandleID="k8s-pod-network.f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Workload="localhost-k8s-coredns--668d6bf9bc--prlrf-eth0" Oct 13 05:44:52.729541 containerd[1633]: 2025-10-13 05:44:52.711 [INFO][4608] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-prlrf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--prlrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--prlrf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"76606c85-282e-4768-97b9-db0bcdb8b7da", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-prlrf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc6abf3937c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:52.729615 containerd[1633]: 2025-10-13 05:44:52.711 [INFO][4608] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-prlrf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--prlrf-eth0" Oct 13 05:44:52.729615 containerd[1633]: 2025-10-13 05:44:52.711 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc6abf3937c ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-prlrf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--prlrf-eth0" Oct 13 05:44:52.729615 containerd[1633]: 2025-10-13 05:44:52.716 [INFO][4608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-prlrf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--prlrf-eth0" Oct 13 05:44:52.730094 containerd[1633]: 2025-10-13 05:44:52.717 [INFO][4608] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-prlrf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--prlrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--prlrf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"76606c85-282e-4768-97b9-db0bcdb8b7da", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9", Pod:"coredns-668d6bf9bc-prlrf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc6abf3937c", MAC:"12:0f:df:86:a1:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:52.730094 containerd[1633]: 2025-10-13 05:44:52.724 [INFO][4608] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-prlrf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--prlrf-eth0" Oct 13 05:44:52.753238 containerd[1633]: time="2025-10-13T05:44:52.753179166Z" level=info msg="connecting to shim f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9" address="unix:///run/containerd/s/2761140cd8ce3f39dd2ddfcfc41f92a1967c68a826ab4bfbbfbde133adc4160d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:52.787085 systemd[1]: Started cri-containerd-f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9.scope - libcontainer container f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9. 
Oct 13 05:44:52.812187 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:44:52.842412 containerd[1633]: time="2025-10-13T05:44:52.842373182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-prlrf,Uid:76606c85-282e-4768-97b9-db0bcdb8b7da,Namespace:kube-system,Attempt:0,} returns sandbox id \"f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9\"" Oct 13 05:44:52.843149 kubelet[2828]: E1013 05:44:52.843120 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:52.845556 containerd[1633]: time="2025-10-13T05:44:52.845529707Z" level=info msg="CreateContainer within sandbox \"f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:44:52.856734 containerd[1633]: time="2025-10-13T05:44:52.856677089Z" level=info msg="Container 5385464ca194f2e07880c4ceb8b55f7bca30dcf0de25e0ca22dfebf5514d74d5: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:52.860569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432253506.mount: Deactivated successfully. 
Oct 13 05:44:52.864303 containerd[1633]: time="2025-10-13T05:44:52.864274839Z" level=info msg="CreateContainer within sandbox \"f22008959fe0970e4a2270997c272e93055c2b62cbca469f3d4a3e5e60eb60a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5385464ca194f2e07880c4ceb8b55f7bca30dcf0de25e0ca22dfebf5514d74d5\"" Oct 13 05:44:52.864882 containerd[1633]: time="2025-10-13T05:44:52.864846051Z" level=info msg="StartContainer for \"5385464ca194f2e07880c4ceb8b55f7bca30dcf0de25e0ca22dfebf5514d74d5\"" Oct 13 05:44:52.865668 containerd[1633]: time="2025-10-13T05:44:52.865643748Z" level=info msg="connecting to shim 5385464ca194f2e07880c4ceb8b55f7bca30dcf0de25e0ca22dfebf5514d74d5" address="unix:///run/containerd/s/2761140cd8ce3f39dd2ddfcfc41f92a1967c68a826ab4bfbbfbde133adc4160d" protocol=ttrpc version=3 Oct 13 05:44:52.889077 systemd[1]: Started cri-containerd-5385464ca194f2e07880c4ceb8b55f7bca30dcf0de25e0ca22dfebf5514d74d5.scope - libcontainer container 5385464ca194f2e07880c4ceb8b55f7bca30dcf0de25e0ca22dfebf5514d74d5. 
Oct 13 05:44:52.920364 containerd[1633]: time="2025-10-13T05:44:52.920320109Z" level=info msg="StartContainer for \"5385464ca194f2e07880c4ceb8b55f7bca30dcf0de25e0ca22dfebf5514d74d5\" returns successfully" Oct 13 05:44:53.079132 systemd-networkd[1533]: calia42c876a2e9: Gained IPv6LL Oct 13 05:44:53.079541 systemd-networkd[1533]: cali3c10ff6d6c4: Gained IPv6LL Oct 13 05:44:53.207118 systemd-networkd[1533]: cali8a64958b20a: Gained IPv6LL Oct 13 05:44:53.454633 containerd[1633]: time="2025-10-13T05:44:53.454571109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:53.455630 containerd[1633]: time="2025-10-13T05:44:53.455582438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:44:53.456853 containerd[1633]: time="2025-10-13T05:44:53.456792588Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:53.458767 containerd[1633]: time="2025-10-13T05:44:53.458691522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:53.459440 containerd[1633]: time="2025-10-13T05:44:53.459390374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.369637236s" Oct 13 05:44:53.459440 containerd[1633]: time="2025-10-13T05:44:53.459433024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns 
image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:44:53.460539 containerd[1633]: time="2025-10-13T05:44:53.460489056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:44:53.461760 containerd[1633]: time="2025-10-13T05:44:53.461721038Z" level=info msg="CreateContainer within sandbox \"9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:44:53.463110 systemd-networkd[1533]: vxlan.calico: Gained IPv6LL Oct 13 05:44:53.468569 containerd[1633]: time="2025-10-13T05:44:53.468523685Z" level=info msg="Container 877bd2dcb402817d7cb7fca6770f7ad511b412b329d92c1ae1d427670fbe25ea: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:53.475647 containerd[1633]: time="2025-10-13T05:44:53.475601981Z" level=info msg="CreateContainer within sandbox \"9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"877bd2dcb402817d7cb7fca6770f7ad511b412b329d92c1ae1d427670fbe25ea\"" Oct 13 05:44:53.476087 containerd[1633]: time="2025-10-13T05:44:53.476058207Z" level=info msg="StartContainer for \"877bd2dcb402817d7cb7fca6770f7ad511b412b329d92c1ae1d427670fbe25ea\"" Oct 13 05:44:53.477039 containerd[1633]: time="2025-10-13T05:44:53.477001858Z" level=info msg="connecting to shim 877bd2dcb402817d7cb7fca6770f7ad511b412b329d92c1ae1d427670fbe25ea" address="unix:///run/containerd/s/c896b54e71ba03b37fead506c91cee42d214b8a4e35b02daa0efc9d4cbd5cb3d" protocol=ttrpc version=3 Oct 13 05:44:53.503074 systemd[1]: Started cri-containerd-877bd2dcb402817d7cb7fca6770f7ad511b412b329d92c1ae1d427670fbe25ea.scope - libcontainer container 877bd2dcb402817d7cb7fca6770f7ad511b412b329d92c1ae1d427670fbe25ea. 
Oct 13 05:44:53.551197 containerd[1633]: time="2025-10-13T05:44:53.551153798Z" level=info msg="StartContainer for \"877bd2dcb402817d7cb7fca6770f7ad511b412b329d92c1ae1d427670fbe25ea\" returns successfully" Oct 13 05:44:53.762851 kubelet[2828]: E1013 05:44:53.762757 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:53.772549 kubelet[2828]: I1013 05:44:53.772497 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-prlrf" podStartSLOduration=37.772478386 podStartE2EDuration="37.772478386s" podCreationTimestamp="2025-10-13 05:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:44:53.771914016 +0000 UTC m=+45.238206277" watchObservedRunningTime="2025-10-13 05:44:53.772478386 +0000 UTC m=+45.238770647" Oct 13 05:44:53.911088 systemd-networkd[1533]: calib210666cf4f: Gained IPv6LL Oct 13 05:44:53.976098 systemd-networkd[1533]: califc6abf3937c: Gained IPv6LL Oct 13 05:44:54.618489 containerd[1633]: time="2025-10-13T05:44:54.618437579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-blgvw,Uid:cf14e942-f41d-4f00-b058-07c5501bf435,Namespace:calico-system,Attempt:0,}" Oct 13 05:44:54.743057 systemd[1]: Started sshd@10-10.0.0.150:22-10.0.0.1:37964.service - OpenSSH per-connection server daemon (10.0.0.1:37964). 
Oct 13 05:44:54.764802 kubelet[2828]: E1013 05:44:54.764761 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:54.824848 sshd[4767]: Accepted publickey for core from 10.0.0.1 port 37964 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:44:54.826437 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:44:54.831072 systemd-logind[1609]: New session 11 of user core. Oct 13 05:44:54.847095 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:44:55.114274 sshd[4770]: Connection closed by 10.0.0.1 port 37964 Oct 13 05:44:55.114593 sshd-session[4767]: pam_unix(sshd:session): session closed for user core Oct 13 05:44:55.120370 systemd[1]: sshd@10-10.0.0.150:22-10.0.0.1:37964.service: Deactivated successfully. Oct 13 05:44:55.123433 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:44:55.126575 systemd-logind[1609]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:44:55.127655 systemd-logind[1609]: Removed session 11. 
Oct 13 05:44:55.201705 systemd-networkd[1533]: cali4cd9b6579ce: Link UP Oct 13 05:44:55.202583 systemd-networkd[1533]: cali4cd9b6579ce: Gained carrier Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.133 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--blgvw-eth0 goldmane-54d579b49d- calico-system cf14e942-f41d-4f00-b058-07c5501bf435 904 0 2025-10-13 05:44:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-blgvw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4cd9b6579ce [] [] }} ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Namespace="calico-system" Pod="goldmane-54d579b49d-blgvw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--blgvw-" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.133 [INFO][4782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Namespace="calico-system" Pod="goldmane-54d579b49d-blgvw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--blgvw-eth0" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.165 [INFO][4804] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" HandleID="k8s-pod-network.599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Workload="localhost-k8s-goldmane--54d579b49d--blgvw-eth0" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.165 [INFO][4804] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" 
HandleID="k8s-pod-network.599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Workload="localhost-k8s-goldmane--54d579b49d--blgvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-blgvw", "timestamp":"2025-10-13 05:44:55.16536636 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.165 [INFO][4804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.165 [INFO][4804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.166 [INFO][4804] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.172 [INFO][4804] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" host="localhost" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.177 [INFO][4804] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.180 [INFO][4804] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.182 [INFO][4804] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.184 [INFO][4804] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.184 
[INFO][4804] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" host="localhost" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.185 [INFO][4804] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1 Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.188 [INFO][4804] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" host="localhost" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.195 [INFO][4804] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" host="localhost" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.195 [INFO][4804] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" host="localhost" Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.195 [INFO][4804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:44:55.228283 containerd[1633]: 2025-10-13 05:44:55.195 [INFO][4804] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" HandleID="k8s-pod-network.599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Workload="localhost-k8s-goldmane--54d579b49d--blgvw-eth0" Oct 13 05:44:55.228966 containerd[1633]: 2025-10-13 05:44:55.198 [INFO][4782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Namespace="calico-system" Pod="goldmane-54d579b49d-blgvw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--blgvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--blgvw-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cf14e942-f41d-4f00-b058-07c5501bf435", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-blgvw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4cd9b6579ce", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:55.228966 containerd[1633]: 2025-10-13 05:44:55.199 [INFO][4782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Namespace="calico-system" Pod="goldmane-54d579b49d-blgvw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--blgvw-eth0" Oct 13 05:44:55.228966 containerd[1633]: 2025-10-13 05:44:55.199 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4cd9b6579ce ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Namespace="calico-system" Pod="goldmane-54d579b49d-blgvw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--blgvw-eth0" Oct 13 05:44:55.228966 containerd[1633]: 2025-10-13 05:44:55.202 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Namespace="calico-system" Pod="goldmane-54d579b49d-blgvw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--blgvw-eth0" Oct 13 05:44:55.228966 containerd[1633]: 2025-10-13 05:44:55.202 [INFO][4782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Namespace="calico-system" Pod="goldmane-54d579b49d-blgvw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--blgvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--blgvw-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cf14e942-f41d-4f00-b058-07c5501bf435", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 28, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1", Pod:"goldmane-54d579b49d-blgvw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4cd9b6579ce", MAC:"26:55:a6:7d:ca:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:55.228966 containerd[1633]: 2025-10-13 05:44:55.217 [INFO][4782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" Namespace="calico-system" Pod="goldmane-54d579b49d-blgvw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--blgvw-eth0" Oct 13 05:44:55.310241 containerd[1633]: time="2025-10-13T05:44:55.310196742Z" level=info msg="connecting to shim 599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1" address="unix:///run/containerd/s/21e271a4ecace5c7706ddb2f56765eb12fdd9f5b00393fdf878458edb7bb33b8" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:55.351121 systemd[1]: Started cri-containerd-599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1.scope - libcontainer container 599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1. 
Oct 13 05:44:55.366760 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:44:55.398660 containerd[1633]: time="2025-10-13T05:44:55.398606296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-blgvw,Uid:cf14e942-f41d-4f00-b058-07c5501bf435,Namespace:calico-system,Attempt:0,} returns sandbox id \"599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1\"" Oct 13 05:44:55.618497 kubelet[2828]: E1013 05:44:55.618386 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:55.619174 containerd[1633]: time="2025-10-13T05:44:55.619076389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v8qr9,Uid:72f7196b-78fc-4617-9b3e-c6a93eee680d,Namespace:kube-system,Attempt:0,}" Oct 13 05:44:55.619773 containerd[1633]: time="2025-10-13T05:44:55.619314819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f56546f6c-x5lcg,Uid:3f1b517f-a7d8-41ac-a557-d8b6e064d2dd,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:44:55.619773 containerd[1633]: time="2025-10-13T05:44:55.619282867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj54f,Uid:02edaea7-f337-4fcf-9037-ac41cfab2259,Namespace:calico-system,Attempt:0,}" Oct 13 05:44:55.722895 containerd[1633]: time="2025-10-13T05:44:55.722827035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:55.746547 containerd[1633]: time="2025-10-13T05:44:55.725097565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:44:55.746769 containerd[1633]: time="2025-10-13T05:44:55.730975627Z" level=info msg="ImageCreate event 
name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:55.746899 containerd[1633]: time="2025-10-13T05:44:55.733823722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.273306633s" Oct 13 05:44:55.746899 containerd[1633]: time="2025-10-13T05:44:55.746831125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:44:55.747419 containerd[1633]: time="2025-10-13T05:44:55.747395736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:55.750097 containerd[1633]: time="2025-10-13T05:44:55.749336398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:44:55.750426 containerd[1633]: time="2025-10-13T05:44:55.750399031Z" level=info msg="CreateContainer within sandbox \"7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:44:55.761971 containerd[1633]: time="2025-10-13T05:44:55.761774751Z" level=info msg="Container c453a7b58eee06e59587fe716f4bafeaf50e41da06dc348fea78958c174570ed: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:55.767424 kubelet[2828]: E1013 05:44:55.767049 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 13 05:44:55.775366 containerd[1633]: time="2025-10-13T05:44:55.775225580Z" level=info msg="CreateContainer within sandbox \"7a45f764a041b623c50fc065883667819668c1f6d27ced8ccd06119768f8ff17\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c453a7b58eee06e59587fe716f4bafeaf50e41da06dc348fea78958c174570ed\"" Oct 13 05:44:55.775714 systemd-networkd[1533]: cali362f86ebdf9: Link UP Oct 13 05:44:55.776705 containerd[1633]: time="2025-10-13T05:44:55.776080633Z" level=info msg="StartContainer for \"c453a7b58eee06e59587fe716f4bafeaf50e41da06dc348fea78958c174570ed\"" Oct 13 05:44:55.776557 systemd-networkd[1533]: cali362f86ebdf9: Gained carrier Oct 13 05:44:55.777541 containerd[1633]: time="2025-10-13T05:44:55.777506028Z" level=info msg="connecting to shim c453a7b58eee06e59587fe716f4bafeaf50e41da06dc348fea78958c174570ed" address="unix:///run/containerd/s/5e101674586c8a17338a03192bad61adb4a492f89d5a5fca39d2f647096b23ee" protocol=ttrpc version=3 Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.690 [INFO][4876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vj54f-eth0 csi-node-driver- calico-system 02edaea7-f337-4fcf-9037-ac41cfab2259 769 0 2025-10-13 05:44:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vj54f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali362f86ebdf9 [] [] }} ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" Namespace="calico-system" Pod="csi-node-driver-vj54f" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj54f-" Oct 13 05:44:55.798102 containerd[1633]: 
2025-10-13 05:44:55.690 [INFO][4876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" Namespace="calico-system" Pod="csi-node-driver-vj54f" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj54f-eth0" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.728 [INFO][4914] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" HandleID="k8s-pod-network.150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" Workload="localhost-k8s-csi--node--driver--vj54f-eth0" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.729 [INFO][4914] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" HandleID="k8s-pod-network.150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" Workload="localhost-k8s-csi--node--driver--vj54f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vj54f", "timestamp":"2025-10-13 05:44:55.728865688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.729 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.729 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.729 [INFO][4914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.739 [INFO][4914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" host="localhost" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.743 [INFO][4914] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.747 [INFO][4914] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.749 [INFO][4914] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.753 [INFO][4914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.753 [INFO][4914] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" host="localhost" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.754 [INFO][4914] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.758 [INFO][4914] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" host="localhost" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.763 [INFO][4914] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" host="localhost" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.763 [INFO][4914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" host="localhost" Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.763 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:44:55.798102 containerd[1633]: 2025-10-13 05:44:55.763 [INFO][4914] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" HandleID="k8s-pod-network.150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" Workload="localhost-k8s-csi--node--driver--vj54f-eth0" Oct 13 05:44:55.798645 containerd[1633]: 2025-10-13 05:44:55.771 [INFO][4876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" Namespace="calico-system" Pod="csi-node-driver-vj54f" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj54f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vj54f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"02edaea7-f337-4fcf-9037-ac41cfab2259", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vj54f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali362f86ebdf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:55.798645 containerd[1633]: 2025-10-13 05:44:55.771 [INFO][4876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" Namespace="calico-system" Pod="csi-node-driver-vj54f" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj54f-eth0" Oct 13 05:44:55.798645 containerd[1633]: 2025-10-13 05:44:55.771 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali362f86ebdf9 ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" Namespace="calico-system" Pod="csi-node-driver-vj54f" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj54f-eth0" Oct 13 05:44:55.798645 containerd[1633]: 2025-10-13 05:44:55.777 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" Namespace="calico-system" Pod="csi-node-driver-vj54f" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj54f-eth0" Oct 13 05:44:55.798645 containerd[1633]: 2025-10-13 05:44:55.777 [INFO][4876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" 
Namespace="calico-system" Pod="csi-node-driver-vj54f" WorkloadEndpoint="localhost-k8s-csi--node--driver--vj54f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vj54f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"02edaea7-f337-4fcf-9037-ac41cfab2259", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f", Pod:"csi-node-driver-vj54f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali362f86ebdf9", MAC:"f2:f0:03:94:6c:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:55.798645 containerd[1633]: 2025-10-13 05:44:55.791 [INFO][4876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" Namespace="calico-system" Pod="csi-node-driver-vj54f" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vj54f-eth0" Oct 13 05:44:55.808341 systemd[1]: Started cri-containerd-c453a7b58eee06e59587fe716f4bafeaf50e41da06dc348fea78958c174570ed.scope - libcontainer container c453a7b58eee06e59587fe716f4bafeaf50e41da06dc348fea78958c174570ed. Oct 13 05:44:55.821493 containerd[1633]: time="2025-10-13T05:44:55.821439080Z" level=info msg="connecting to shim 150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f" address="unix:///run/containerd/s/0e7591ebe18c60bb5372d555af166ad941ab3ab14933a5ac3d253df7a8c1d361" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:55.850091 systemd[1]: Started cri-containerd-150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f.scope - libcontainer container 150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f. Oct 13 05:44:55.868337 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:44:55.871382 containerd[1633]: time="2025-10-13T05:44:55.870787098Z" level=info msg="StartContainer for \"c453a7b58eee06e59587fe716f4bafeaf50e41da06dc348fea78958c174570ed\" returns successfully" Oct 13 05:44:55.884305 systemd-networkd[1533]: cali687a3e61784: Link UP Oct 13 05:44:55.886211 systemd-networkd[1533]: cali687a3e61784: Gained carrier Oct 13 05:44:55.894476 containerd[1633]: time="2025-10-13T05:44:55.894422485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj54f,Uid:02edaea7-f337-4fcf-9037-ac41cfab2259,Namespace:calico-system,Attempt:0,} returns sandbox id \"150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f\"" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.689 [INFO][4878] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0 coredns-668d6bf9bc- kube-system 72f7196b-78fc-4617-9b3e-c6a93eee680d 890 0 2025-10-13 05:44:16 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-v8qr9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali687a3e61784 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8qr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v8qr9-" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.690 [INFO][4878] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8qr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.734 [INFO][4916] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" HandleID="k8s-pod-network.390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Workload="localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.734 [INFO][4916] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" HandleID="k8s-pod-network.390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Workload="localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfb40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-v8qr9", "timestamp":"2025-10-13 05:44:55.734508465 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.735 [INFO][4916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.763 [INFO][4916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.763 [INFO][4916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.840 [INFO][4916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" host="localhost" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.845 [INFO][4916] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.851 [INFO][4916] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.852 [INFO][4916] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.854 [INFO][4916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.854 [INFO][4916] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" host="localhost" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.856 [INFO][4916] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.860 [INFO][4916] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" host="localhost" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.867 [INFO][4916] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" host="localhost" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.867 [INFO][4916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" host="localhost" Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.867 [INFO][4916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:44:55.900416 containerd[1633]: 2025-10-13 05:44:55.867 [INFO][4916] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" HandleID="k8s-pod-network.390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Workload="localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0" Oct 13 05:44:55.900901 containerd[1633]: 2025-10-13 05:44:55.874 [INFO][4878] cni-plugin/k8s.go 418: Populated endpoint ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8qr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"72f7196b-78fc-4617-9b3e-c6a93eee680d", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 16, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-v8qr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali687a3e61784", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:55.900901 containerd[1633]: 2025-10-13 05:44:55.875 [INFO][4878] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8qr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0" Oct 13 05:44:55.900901 containerd[1633]: 2025-10-13 05:44:55.875 [INFO][4878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali687a3e61784 ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8qr9" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0" Oct 13 05:44:55.900901 containerd[1633]: 2025-10-13 05:44:55.884 [INFO][4878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8qr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0" Oct 13 05:44:55.900901 containerd[1633]: 2025-10-13 05:44:55.885 [INFO][4878] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8qr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"72f7196b-78fc-4617-9b3e-c6a93eee680d", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e", Pod:"coredns-668d6bf9bc-v8qr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali687a3e61784", MAC:"e6:0c:24:0b:03:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:55.900901 containerd[1633]: 2025-10-13 05:44:55.896 [INFO][4878] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-v8qr9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v8qr9-eth0" Oct 13 05:44:55.922944 containerd[1633]: time="2025-10-13T05:44:55.922894120Z" level=info msg="connecting to shim 390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e" address="unix:///run/containerd/s/ba8f5a7a53661b46b4365882429a4456942b90550dfc9a8f08b0ddc5c279c335" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:55.955076 systemd[1]: Started cri-containerd-390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e.scope - libcontainer container 390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e. 
Oct 13 05:44:55.972177 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:44:56.012423 containerd[1633]: time="2025-10-13T05:44:56.012369084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v8qr9,Uid:72f7196b-78fc-4617-9b3e-c6a93eee680d,Namespace:kube-system,Attempt:0,} returns sandbox id \"390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e\"" Oct 13 05:44:56.013758 kubelet[2828]: E1013 05:44:56.013726 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:56.016628 containerd[1633]: time="2025-10-13T05:44:56.016592231Z" level=info msg="CreateContainer within sandbox \"390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:44:56.026626 containerd[1633]: time="2025-10-13T05:44:56.026589810Z" level=info msg="Container e835e3d5fade9ebd0d035517297ce9a58b21a9b8806438330c8ad2e126ae656d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:56.041805 systemd-networkd[1533]: cali41c23a99d51: Link UP Oct 13 05:44:56.043001 systemd-networkd[1533]: cali41c23a99d51: Gained carrier Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.695 [INFO][4889] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0 calico-apiserver-5f56546f6c- calico-apiserver 3f1b517f-a7d8-41ac-a557-d8b6e064d2dd 903 0 2025-10-13 05:44:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f56546f6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f56546f6c-x5lcg eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali41c23a99d51 [] [] }} ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-x5lcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.695 [INFO][4889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-x5lcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.736 [INFO][4922] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" HandleID="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Workload="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.736 [INFO][4922] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" HandleID="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Workload="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f56546f6c-x5lcg", "timestamp":"2025-10-13 05:44:55.736563798 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.736 [INFO][4922] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.867 [INFO][4922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.868 [INFO][4922] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.953 [INFO][4922] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" host="localhost" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.969 [INFO][4922] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.986 [INFO][4922] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:55.992 [INFO][4922] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:56.005 [INFO][4922] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:56.006 [INFO][4922] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" host="localhost" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:56.009 [INFO][4922] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879 Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:56.018 [INFO][4922] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" host="localhost" Oct 13 05:44:56.075472 
containerd[1633]: 2025-10-13 05:44:56.032 [INFO][4922] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" host="localhost" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:56.032 [INFO][4922] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" host="localhost" Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:56.032 [INFO][4922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:44:56.075472 containerd[1633]: 2025-10-13 05:44:56.032 [INFO][4922] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" HandleID="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Workload="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" Oct 13 05:44:56.078032 containerd[1633]: 2025-10-13 05:44:56.037 [INFO][4889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-x5lcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0", GenerateName:"calico-apiserver-5f56546f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f1b517f-a7d8-41ac-a557-d8b6e064d2dd", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5f56546f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f56546f6c-x5lcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41c23a99d51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:56.078032 containerd[1633]: 2025-10-13 05:44:56.037 [INFO][4889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-x5lcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" Oct 13 05:44:56.078032 containerd[1633]: 2025-10-13 05:44:56.037 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41c23a99d51 ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-x5lcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" Oct 13 05:44:56.078032 containerd[1633]: 2025-10-13 05:44:56.043 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-x5lcg" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" Oct 13 05:44:56.078032 containerd[1633]: 2025-10-13 05:44:56.043 [INFO][4889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-x5lcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0", GenerateName:"calico-apiserver-5f56546f6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f1b517f-a7d8-41ac-a557-d8b6e064d2dd", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f56546f6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879", Pod:"calico-apiserver-5f56546f6c-x5lcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41c23a99d51", MAC:"2e:50:17:8c:80:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:44:56.078032 containerd[1633]: 2025-10-13 05:44:56.071 [INFO][4889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Namespace="calico-apiserver" Pod="calico-apiserver-5f56546f6c-x5lcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" Oct 13 05:44:56.110875 containerd[1633]: time="2025-10-13T05:44:56.110820212Z" level=info msg="CreateContainer within sandbox \"390aa8c7a13a2e047c6143755d183e6672aa6d1370ceb8d40c04b1025e41bb5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e835e3d5fade9ebd0d035517297ce9a58b21a9b8806438330c8ad2e126ae656d\"" Oct 13 05:44:56.111957 containerd[1633]: time="2025-10-13T05:44:56.111648532Z" level=info msg="StartContainer for \"e835e3d5fade9ebd0d035517297ce9a58b21a9b8806438330c8ad2e126ae656d\"" Oct 13 05:44:56.113732 containerd[1633]: time="2025-10-13T05:44:56.113700094Z" level=info msg="connecting to shim e835e3d5fade9ebd0d035517297ce9a58b21a9b8806438330c8ad2e126ae656d" address="unix:///run/containerd/s/ba8f5a7a53661b46b4365882429a4456942b90550dfc9a8f08b0ddc5c279c335" protocol=ttrpc version=3 Oct 13 05:44:56.131063 containerd[1633]: time="2025-10-13T05:44:56.129902578Z" level=info msg="connecting to shim 6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" address="unix:///run/containerd/s/0459a2c29f7bb387aaf9519c937c698b299a987f01452e34e0b769dbc36e76be" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:44:56.137117 systemd[1]: Started cri-containerd-e835e3d5fade9ebd0d035517297ce9a58b21a9b8806438330c8ad2e126ae656d.scope - libcontainer container e835e3d5fade9ebd0d035517297ce9a58b21a9b8806438330c8ad2e126ae656d. Oct 13 05:44:56.166199 systemd[1]: Started cri-containerd-6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879.scope - libcontainer container 6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879. 
Oct 13 05:44:56.185163 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:44:56.187945 containerd[1633]: time="2025-10-13T05:44:56.187467489Z" level=info msg="StartContainer for \"e835e3d5fade9ebd0d035517297ce9a58b21a9b8806438330c8ad2e126ae656d\" returns successfully" Oct 13 05:44:56.230337 containerd[1633]: time="2025-10-13T05:44:56.230288694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f56546f6c-x5lcg,Uid:3f1b517f-a7d8-41ac-a557-d8b6e064d2dd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879\"" Oct 13 05:44:56.234714 containerd[1633]: time="2025-10-13T05:44:56.234612524Z" level=info msg="CreateContainer within sandbox \"6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:44:56.250888 containerd[1633]: time="2025-10-13T05:44:56.250838824Z" level=info msg="Container 2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:56.257084 containerd[1633]: time="2025-10-13T05:44:56.257047075Z" level=info msg="CreateContainer within sandbox \"6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\"" Oct 13 05:44:56.257954 containerd[1633]: time="2025-10-13T05:44:56.257550477Z" level=info msg="StartContainer for \"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\"" Oct 13 05:44:56.258553 containerd[1633]: time="2025-10-13T05:44:56.258515801Z" level=info msg="connecting to shim 2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70" address="unix:///run/containerd/s/0459a2c29f7bb387aaf9519c937c698b299a987f01452e34e0b769dbc36e76be" protocol=ttrpc version=3 
Oct 13 05:44:56.283084 systemd[1]: Started cri-containerd-2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70.scope - libcontainer container 2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70. Oct 13 05:44:56.337966 containerd[1633]: time="2025-10-13T05:44:56.337896717Z" level=info msg="StartContainer for \"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\" returns successfully" Oct 13 05:44:56.773354 kubelet[2828]: E1013 05:44:56.773208 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:56.790891 kubelet[2828]: I1013 05:44:56.789485 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v8qr9" podStartSLOduration=40.789468091 podStartE2EDuration="40.789468091s" podCreationTimestamp="2025-10-13 05:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:44:56.788578043 +0000 UTC m=+48.254870304" watchObservedRunningTime="2025-10-13 05:44:56.789468091 +0000 UTC m=+48.255760352" Oct 13 05:44:56.791227 systemd-networkd[1533]: cali4cd9b6579ce: Gained IPv6LL Oct 13 05:44:56.836819 kubelet[2828]: I1013 05:44:56.836746 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f56546f6c-jppm5" podStartSLOduration=27.230281063 podStartE2EDuration="30.83672842s" podCreationTimestamp="2025-10-13 05:44:26 +0000 UTC" firstStartedPulling="2025-10-13 05:44:52.142287147 +0000 UTC m=+43.608579408" lastFinishedPulling="2025-10-13 05:44:55.748734504 +0000 UTC m=+47.215026765" observedRunningTime="2025-10-13 05:44:56.819060956 +0000 UTC m=+48.285353218" watchObservedRunningTime="2025-10-13 05:44:56.83672842 +0000 UTC m=+48.303020671" Oct 13 05:44:57.111170 systemd-networkd[1533]: cali41c23a99d51: 
Gained IPv6LL Oct 13 05:44:57.239376 systemd-networkd[1533]: cali687a3e61784: Gained IPv6LL Oct 13 05:44:57.751609 systemd-networkd[1533]: cali362f86ebdf9: Gained IPv6LL Oct 13 05:44:57.787988 kubelet[2828]: E1013 05:44:57.787760 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:57.789954 kubelet[2828]: I1013 05:44:57.788639 2828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:44:57.789954 kubelet[2828]: I1013 05:44:57.789105 2828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:44:58.790051 kubelet[2828]: E1013 05:44:58.790011 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:44:59.823582 containerd[1633]: time="2025-10-13T05:44:59.823507849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:59.824744 containerd[1633]: time="2025-10-13T05:44:59.824708773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:44:59.827020 containerd[1633]: time="2025-10-13T05:44:59.826989278Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:59.829210 containerd[1633]: time="2025-10-13T05:44:59.829151996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:59.829677 containerd[1633]: time="2025-10-13T05:44:59.829633844Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.080270564s" Oct 13 05:44:59.829677 containerd[1633]: time="2025-10-13T05:44:59.829663301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:44:59.830629 containerd[1633]: time="2025-10-13T05:44:59.830603022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:44:59.840002 containerd[1633]: time="2025-10-13T05:44:59.839512720Z" level=info msg="CreateContainer within sandbox \"b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:44:59.847028 containerd[1633]: time="2025-10-13T05:44:59.846988736Z" level=info msg="Container f6fac515eff233a1f50cef1cba08af029597245c1525475a5273ed0ec722e786: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:44:59.854914 containerd[1633]: time="2025-10-13T05:44:59.854871385Z" level=info msg="CreateContainer within sandbox \"b74db209f8ac0a529a9fc04eea0cc5d47ade6242c6b9bb4ba498c20071c57a54\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f6fac515eff233a1f50cef1cba08af029597245c1525475a5273ed0ec722e786\"" Oct 13 05:44:59.855368 containerd[1633]: time="2025-10-13T05:44:59.855348736Z" level=info msg="StartContainer for \"f6fac515eff233a1f50cef1cba08af029597245c1525475a5273ed0ec722e786\"" Oct 13 05:44:59.856523 containerd[1633]: time="2025-10-13T05:44:59.856459486Z" level=info msg="connecting to shim f6fac515eff233a1f50cef1cba08af029597245c1525475a5273ed0ec722e786" 
address="unix:///run/containerd/s/c391bc37966971f679b8d6a2b0ddd4b2da089e47e237561eeb8343b959afbb7e" protocol=ttrpc version=3 Oct 13 05:44:59.915080 systemd[1]: Started cri-containerd-f6fac515eff233a1f50cef1cba08af029597245c1525475a5273ed0ec722e786.scope - libcontainer container f6fac515eff233a1f50cef1cba08af029597245c1525475a5273ed0ec722e786. Oct 13 05:44:59.981679 containerd[1633]: time="2025-10-13T05:44:59.981553036Z" level=info msg="StartContainer for \"f6fac515eff233a1f50cef1cba08af029597245c1525475a5273ed0ec722e786\" returns successfully" Oct 13 05:45:00.129961 systemd[1]: Started sshd@11-10.0.0.150:22-10.0.0.1:44776.service - OpenSSH per-connection server daemon (10.0.0.1:44776). Oct 13 05:45:00.213212 sshd[5271]: Accepted publickey for core from 10.0.0.1 port 44776 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:00.214860 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:00.219536 systemd-logind[1609]: New session 12 of user core. Oct 13 05:45:00.224138 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:45:00.382603 sshd[5276]: Connection closed by 10.0.0.1 port 44776 Oct 13 05:45:00.382843 sshd-session[5271]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:00.388220 systemd[1]: sshd@11-10.0.0.150:22-10.0.0.1:44776.service: Deactivated successfully. Oct 13 05:45:00.390672 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:45:00.391478 systemd-logind[1609]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:45:00.392686 systemd-logind[1609]: Removed session 12. 
Oct 13 05:45:00.520229 containerd[1633]: time="2025-10-13T05:45:00.520172626Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:00.529030 containerd[1633]: time="2025-10-13T05:45:00.528866586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:45:00.534520 containerd[1633]: time="2025-10-13T05:45:00.534482951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 703.853448ms" Oct 13 05:45:00.534520 containerd[1633]: time="2025-10-13T05:45:00.534514362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:45:00.542611 containerd[1633]: time="2025-10-13T05:45:00.542550166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:45:00.549943 containerd[1633]: time="2025-10-13T05:45:00.549894487Z" level=info msg="CreateContainer within sandbox \"7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:45:00.558996 containerd[1633]: time="2025-10-13T05:45:00.558943532Z" level=info msg="Container c6db31111a80e681093b01cd2a4051702309b199ac2eb1ee5cb4e18927fae61c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:00.566271 containerd[1633]: time="2025-10-13T05:45:00.566245533Z" level=info msg="CreateContainer within sandbox \"7be128276717eeec8b4fe4d366313a74f8206591a95dcbbf2cf4fed20fe05c4d\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c6db31111a80e681093b01cd2a4051702309b199ac2eb1ee5cb4e18927fae61c\"" Oct 13 05:45:00.568055 containerd[1633]: time="2025-10-13T05:45:00.566847883Z" level=info msg="StartContainer for \"c6db31111a80e681093b01cd2a4051702309b199ac2eb1ee5cb4e18927fae61c\"" Oct 13 05:45:00.569311 containerd[1633]: time="2025-10-13T05:45:00.569273513Z" level=info msg="connecting to shim c6db31111a80e681093b01cd2a4051702309b199ac2eb1ee5cb4e18927fae61c" address="unix:///run/containerd/s/22e69fa9bd9d9188df4e01e26f3e36be286fe7ae92b0daeb2799be351c4dbb5d" protocol=ttrpc version=3 Oct 13 05:45:00.591093 systemd[1]: Started cri-containerd-c6db31111a80e681093b01cd2a4051702309b199ac2eb1ee5cb4e18927fae61c.scope - libcontainer container c6db31111a80e681093b01cd2a4051702309b199ac2eb1ee5cb4e18927fae61c. Oct 13 05:45:00.712773 containerd[1633]: time="2025-10-13T05:45:00.712650496Z" level=info msg="StartContainer for \"c6db31111a80e681093b01cd2a4051702309b199ac2eb1ee5cb4e18927fae61c\" returns successfully" Oct 13 05:45:00.813774 kubelet[2828]: I1013 05:45:00.813693 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f56546f6c-x5lcg" podStartSLOduration=34.813677932 podStartE2EDuration="34.813677932s" podCreationTimestamp="2025-10-13 05:44:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:44:56.834433086 +0000 UTC m=+48.300725347" watchObservedRunningTime="2025-10-13 05:45:00.813677932 +0000 UTC m=+52.279970193" Oct 13 05:45:00.816012 kubelet[2828]: I1013 05:45:00.815960 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c6448884-mgzvk" podStartSLOduration=26.560344287 podStartE2EDuration="34.815952401s" podCreationTimestamp="2025-10-13 05:44:26 +0000 UTC" firstStartedPulling="2025-10-13 05:44:52.286647814 
+0000 UTC m=+43.752940076" lastFinishedPulling="2025-10-13 05:45:00.542255929 +0000 UTC m=+52.008548190" observedRunningTime="2025-10-13 05:45:00.814965932 +0000 UTC m=+52.281258213" watchObservedRunningTime="2025-10-13 05:45:00.815952401 +0000 UTC m=+52.282244682" Oct 13 05:45:00.827792 kubelet[2828]: I1013 05:45:00.827673 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dcd7d6f54-swntb" podStartSLOduration=24.193471919 podStartE2EDuration="31.827660276s" podCreationTimestamp="2025-10-13 05:44:29 +0000 UTC" firstStartedPulling="2025-10-13 05:44:52.196276949 +0000 UTC m=+43.662569200" lastFinishedPulling="2025-10-13 05:44:59.830465296 +0000 UTC m=+51.296757557" observedRunningTime="2025-10-13 05:45:00.827448058 +0000 UTC m=+52.293740319" watchObservedRunningTime="2025-10-13 05:45:00.827660276 +0000 UTC m=+52.293952537" Oct 13 05:45:00.855930 containerd[1633]: time="2025-10-13T05:45:00.855873843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6fac515eff233a1f50cef1cba08af029597245c1525475a5273ed0ec722e786\" id:\"7b264fd11b5de2d8ae974e688dbedb54ca4ad0e2d31bfe3011980b20918d6de0\" pid:5342 exited_at:{seconds:1760334300 nanos:855578866}" Oct 13 05:45:01.803623 kubelet[2828]: I1013 05:45:01.803576 2828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:45:02.974815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4108614929.mount: Deactivated successfully. 
Oct 13 05:45:03.316956 containerd[1633]: time="2025-10-13T05:45:03.316891013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:03.317789 containerd[1633]: time="2025-10-13T05:45:03.317757217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:45:03.318900 containerd[1633]: time="2025-10-13T05:45:03.318868883Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:03.321124 containerd[1633]: time="2025-10-13T05:45:03.321085372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:03.321631 containerd[1633]: time="2025-10-13T05:45:03.321600052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.779002355s" Oct 13 05:45:03.321631 containerd[1633]: time="2025-10-13T05:45:03.321629177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:45:03.322597 containerd[1633]: time="2025-10-13T05:45:03.322561399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:45:03.323694 containerd[1633]: time="2025-10-13T05:45:03.323662074Z" level=info msg="CreateContainer within sandbox 
\"9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:45:03.334756 containerd[1633]: time="2025-10-13T05:45:03.334720424Z" level=info msg="Container 07953fb6b36fc72317b8f8ce3465ebdbd616f9f5e80845d9f6cb52a3085254b2: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:03.341661 containerd[1633]: time="2025-10-13T05:45:03.341620965Z" level=info msg="CreateContainer within sandbox \"9a52079ff0c393d621d78fb1e106f3b37f2b2d883862d1e5fbeb10d8818fa4b1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"07953fb6b36fc72317b8f8ce3465ebdbd616f9f5e80845d9f6cb52a3085254b2\"" Oct 13 05:45:03.343203 containerd[1633]: time="2025-10-13T05:45:03.342108963Z" level=info msg="StartContainer for \"07953fb6b36fc72317b8f8ce3465ebdbd616f9f5e80845d9f6cb52a3085254b2\"" Oct 13 05:45:03.343203 containerd[1633]: time="2025-10-13T05:45:03.343089568Z" level=info msg="connecting to shim 07953fb6b36fc72317b8f8ce3465ebdbd616f9f5e80845d9f6cb52a3085254b2" address="unix:///run/containerd/s/c896b54e71ba03b37fead506c91cee42d214b8a4e35b02daa0efc9d4cbd5cb3d" protocol=ttrpc version=3 Oct 13 05:45:03.370087 systemd[1]: Started cri-containerd-07953fb6b36fc72317b8f8ce3465ebdbd616f9f5e80845d9f6cb52a3085254b2.scope - libcontainer container 07953fb6b36fc72317b8f8ce3465ebdbd616f9f5e80845d9f6cb52a3085254b2. Oct 13 05:45:03.492167 containerd[1633]: time="2025-10-13T05:45:03.492119083Z" level=info msg="StartContainer for \"07953fb6b36fc72317b8f8ce3465ebdbd616f9f5e80845d9f6cb52a3085254b2\" returns successfully" Oct 13 05:45:05.401605 systemd[1]: Started sshd@12-10.0.0.150:22-10.0.0.1:44780.service - OpenSSH per-connection server daemon (10.0.0.1:44780). 
Oct 13 05:45:05.493199 sshd[5408]: Accepted publickey for core from 10.0.0.1 port 44780 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:05.494807 sshd-session[5408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:05.499974 systemd-logind[1609]: New session 13 of user core. Oct 13 05:45:05.508074 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:45:06.045275 sshd[5411]: Connection closed by 10.0.0.1 port 44780 Oct 13 05:45:06.045574 sshd-session[5408]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:06.059738 systemd[1]: sshd@12-10.0.0.150:22-10.0.0.1:44780.service: Deactivated successfully. Oct 13 05:45:06.061958 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:45:06.062771 systemd-logind[1609]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:45:06.066145 systemd[1]: Started sshd@13-10.0.0.150:22-10.0.0.1:44786.service - OpenSSH per-connection server daemon (10.0.0.1:44786). Oct 13 05:45:06.066756 systemd-logind[1609]: Removed session 13. Oct 13 05:45:06.123438 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 44786 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:06.124664 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:06.129473 systemd-logind[1609]: New session 14 of user core. Oct 13 05:45:06.135108 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 05:45:06.337011 sshd[5430]: Connection closed by 10.0.0.1 port 44786 Oct 13 05:45:06.338193 sshd-session[5425]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:06.356543 systemd[1]: sshd@13-10.0.0.150:22-10.0.0.1:44786.service: Deactivated successfully. Oct 13 05:45:06.361438 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:45:06.365174 systemd-logind[1609]: Session 14 logged out. Waiting for processes to exit. 
Oct 13 05:45:06.370499 systemd[1]: Started sshd@14-10.0.0.150:22-10.0.0.1:44802.service - OpenSSH per-connection server daemon (10.0.0.1:44802). Oct 13 05:45:06.373320 systemd-logind[1609]: Removed session 14. Oct 13 05:45:06.431559 sshd[5446]: Accepted publickey for core from 10.0.0.1 port 44802 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:06.433605 sshd-session[5446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:06.439623 systemd-logind[1609]: New session 15 of user core. Oct 13 05:45:06.446178 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:45:06.573960 sshd[5449]: Connection closed by 10.0.0.1 port 44802 Oct 13 05:45:06.574154 sshd-session[5446]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:06.579364 systemd[1]: sshd@14-10.0.0.150:22-10.0.0.1:44802.service: Deactivated successfully. Oct 13 05:45:06.581909 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:45:06.582870 systemd-logind[1609]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:45:06.584707 systemd-logind[1609]: Removed session 15. Oct 13 05:45:06.648055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504614413.mount: Deactivated successfully. 
Oct 13 05:45:07.279385 containerd[1633]: time="2025-10-13T05:45:07.279319561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:07.280093 containerd[1633]: time="2025-10-13T05:45:07.279978354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:45:07.281274 containerd[1633]: time="2025-10-13T05:45:07.281213782Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:07.283494 containerd[1633]: time="2025-10-13T05:45:07.283441112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:07.284051 containerd[1633]: time="2025-10-13T05:45:07.284017848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.961426321s" Oct 13 05:45:07.284051 containerd[1633]: time="2025-10-13T05:45:07.284047054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:45:07.285172 containerd[1633]: time="2025-10-13T05:45:07.284837890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:45:07.286157 containerd[1633]: time="2025-10-13T05:45:07.286098497Z" level=info msg="CreateContainer within sandbox \"599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:45:07.305859 containerd[1633]: time="2025-10-13T05:45:07.305807058Z" level=info msg="Container bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:07.314779 containerd[1633]: time="2025-10-13T05:45:07.314731496Z" level=info msg="CreateContainer within sandbox \"599097417bd9bd9f560f04f17b9424091283b188e1a6fcf6645f0b296deb16d1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972\"" Oct 13 05:45:07.315236 containerd[1633]: time="2025-10-13T05:45:07.315203180Z" level=info msg="StartContainer for \"bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972\"" Oct 13 05:45:07.316291 containerd[1633]: time="2025-10-13T05:45:07.316265698Z" level=info msg="connecting to shim bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972" address="unix:///run/containerd/s/21e271a4ecace5c7706ddb2f56765eb12fdd9f5b00393fdf878458edb7bb33b8" protocol=ttrpc version=3 Oct 13 05:45:07.344065 systemd[1]: Started cri-containerd-bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972.scope - libcontainer container bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972. 
Oct 13 05:45:07.394384 containerd[1633]: time="2025-10-13T05:45:07.394335587Z" level=info msg="StartContainer for \"bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972\" returns successfully" Oct 13 05:45:07.832533 kubelet[2828]: I1013 05:45:07.832449 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bb575568-6qcvz" podStartSLOduration=6.59811322 podStartE2EDuration="17.832430087s" podCreationTimestamp="2025-10-13 05:44:50 +0000 UTC" firstStartedPulling="2025-10-13 05:44:52.088111146 +0000 UTC m=+43.554403397" lastFinishedPulling="2025-10-13 05:45:03.322428003 +0000 UTC m=+54.788720264" observedRunningTime="2025-10-13 05:45:03.829142931 +0000 UTC m=+55.295435192" watchObservedRunningTime="2025-10-13 05:45:07.832430087 +0000 UTC m=+59.298722348" Oct 13 05:45:07.906824 containerd[1633]: time="2025-10-13T05:45:07.906747865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972\" id:\"91be85392aa0dc832603626db039b3f1ec1d73626bee007a7ea520e770764f03\" pid:5518 exit_status:1 exited_at:{seconds:1760334307 nanos:906402012}" Oct 13 05:45:08.895901 containerd[1633]: time="2025-10-13T05:45:08.895819058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972\" id:\"a342c64b90acef51c1ba7fae17d50b666bd08b4add06e106dd834f8fd7fbc7a9\" pid:5546 exit_status:1 exited_at:{seconds:1760334308 nanos:895523461}" Oct 13 05:45:09.729108 containerd[1633]: time="2025-10-13T05:45:09.729048218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:09.729852 containerd[1633]: time="2025-10-13T05:45:09.729811429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:45:09.730832 containerd[1633]: 
time="2025-10-13T05:45:09.730792708Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:09.732863 containerd[1633]: time="2025-10-13T05:45:09.732831342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:09.733407 containerd[1633]: time="2025-10-13T05:45:09.733373811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.448511444s" Oct 13 05:45:09.733407 containerd[1633]: time="2025-10-13T05:45:09.733403067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:45:09.735161 containerd[1633]: time="2025-10-13T05:45:09.735120315Z" level=info msg="CreateContainer within sandbox \"150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:45:09.744570 containerd[1633]: time="2025-10-13T05:45:09.744529313Z" level=info msg="Container 1e69d32af30af81b7d2c85879d813674fc7c4149980acdd199dc2471efb8bc0e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:09.752738 containerd[1633]: time="2025-10-13T05:45:09.752698575Z" level=info msg="CreateContainer within sandbox \"150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1e69d32af30af81b7d2c85879d813674fc7c4149980acdd199dc2471efb8bc0e\"" Oct 13 
05:45:09.753141 containerd[1633]: time="2025-10-13T05:45:09.753117117Z" level=info msg="StartContainer for \"1e69d32af30af81b7d2c85879d813674fc7c4149980acdd199dc2471efb8bc0e\"" Oct 13 05:45:09.754571 containerd[1633]: time="2025-10-13T05:45:09.754531656Z" level=info msg="connecting to shim 1e69d32af30af81b7d2c85879d813674fc7c4149980acdd199dc2471efb8bc0e" address="unix:///run/containerd/s/0e7591ebe18c60bb5372d555af166ad941ab3ab14933a5ac3d253df7a8c1d361" protocol=ttrpc version=3 Oct 13 05:45:09.790061 systemd[1]: Started cri-containerd-1e69d32af30af81b7d2c85879d813674fc7c4149980acdd199dc2471efb8bc0e.scope - libcontainer container 1e69d32af30af81b7d2c85879d813674fc7c4149980acdd199dc2471efb8bc0e. Oct 13 05:45:09.837892 containerd[1633]: time="2025-10-13T05:45:09.837830974Z" level=info msg="StartContainer for \"1e69d32af30af81b7d2c85879d813674fc7c4149980acdd199dc2471efb8bc0e\" returns successfully" Oct 13 05:45:09.839660 containerd[1633]: time="2025-10-13T05:45:09.839618508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:45:11.490776 containerd[1633]: time="2025-10-13T05:45:11.490721581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:11.512185 containerd[1633]: time="2025-10-13T05:45:11.491546800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:45:11.512185 containerd[1633]: time="2025-10-13T05:45:11.501199505Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:11.512356 containerd[1633]: time="2025-10-13T05:45:11.503870073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id 
\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.664212891s" Oct 13 05:45:11.512356 containerd[1633]: time="2025-10-13T05:45:11.512302594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:45:11.512715 containerd[1633]: time="2025-10-13T05:45:11.512676910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:11.514555 containerd[1633]: time="2025-10-13T05:45:11.514524585Z" level=info msg="CreateContainer within sandbox \"150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:45:11.521935 containerd[1633]: time="2025-10-13T05:45:11.521875777Z" level=info msg="Container 2aede14558d450d8a337246b292726444837a7019b415d960bdfa632c70acd18: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:11.538551 containerd[1633]: time="2025-10-13T05:45:11.538496540Z" level=info msg="CreateContainer within sandbox \"150c61d3c439fb588637bfd992ea4362adf441c58c16e8f8391d0c031464d62f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2aede14558d450d8a337246b292726444837a7019b415d960bdfa632c70acd18\"" Oct 13 05:45:11.539027 containerd[1633]: time="2025-10-13T05:45:11.538967551Z" level=info msg="StartContainer for \"2aede14558d450d8a337246b292726444837a7019b415d960bdfa632c70acd18\"" Oct 13 05:45:11.540391 containerd[1633]: time="2025-10-13T05:45:11.540355998Z" level=info 
msg="connecting to shim 2aede14558d450d8a337246b292726444837a7019b415d960bdfa632c70acd18" address="unix:///run/containerd/s/0e7591ebe18c60bb5372d555af166ad941ab3ab14933a5ac3d253df7a8c1d361" protocol=ttrpc version=3 Oct 13 05:45:11.559056 systemd[1]: Started cri-containerd-2aede14558d450d8a337246b292726444837a7019b415d960bdfa632c70acd18.scope - libcontainer container 2aede14558d450d8a337246b292726444837a7019b415d960bdfa632c70acd18. Oct 13 05:45:11.589136 systemd[1]: Started sshd@15-10.0.0.150:22-10.0.0.1:52562.service - OpenSSH per-connection server daemon (10.0.0.1:52562). Oct 13 05:45:11.628214 containerd[1633]: time="2025-10-13T05:45:11.628138046Z" level=info msg="StartContainer for \"2aede14558d450d8a337246b292726444837a7019b415d960bdfa632c70acd18\" returns successfully" Oct 13 05:45:11.662189 containerd[1633]: time="2025-10-13T05:45:11.662132183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6fac515eff233a1f50cef1cba08af029597245c1525475a5273ed0ec722e786\" id:\"47b7c7ddead0cb14fab8d40983ba3277ae3a9f78a50c843da6028a8405564008\" pid:5639 exited_at:{seconds:1760334311 nanos:661886754}" Oct 13 05:45:11.683494 kubelet[2828]: I1013 05:45:11.683458 2828 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:45:11.683494 kubelet[2828]: I1013 05:45:11.683498 2828 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:45:11.696357 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 52562 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:11.698354 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:11.705276 systemd-logind[1609]: New session 16 of user core. 
Oct 13 05:45:11.716076 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:45:11.856698 kubelet[2828]: I1013 05:45:11.856551 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-blgvw" podStartSLOduration=31.972006401 podStartE2EDuration="43.856536108s" podCreationTimestamp="2025-10-13 05:44:28 +0000 UTC" firstStartedPulling="2025-10-13 05:44:55.400187492 +0000 UTC m=+46.866479753" lastFinishedPulling="2025-10-13 05:45:07.284717199 +0000 UTC m=+58.751009460" observedRunningTime="2025-10-13 05:45:07.833392853 +0000 UTC m=+59.299685114" watchObservedRunningTime="2025-10-13 05:45:11.856536108 +0000 UTC m=+63.322828369" Oct 13 05:45:11.919069 sshd[5659]: Connection closed by 10.0.0.1 port 52562 Oct 13 05:45:11.919419 sshd-session[5620]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:11.924575 systemd[1]: sshd@15-10.0.0.150:22-10.0.0.1:52562.service: Deactivated successfully. Oct 13 05:45:11.926703 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:45:11.927579 systemd-logind[1609]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:45:11.928863 systemd-logind[1609]: Removed session 16. 
Oct 13 05:45:16.103039 kubelet[2828]: I1013 05:45:16.102974 2828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:45:16.120303 kubelet[2828]: I1013 05:45:16.119895 2828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vj54f" podStartSLOduration=31.50353416 podStartE2EDuration="47.119863241s" podCreationTimestamp="2025-10-13 05:44:29 +0000 UTC" firstStartedPulling="2025-10-13 05:44:55.896993524 +0000 UTC m=+47.363285785" lastFinishedPulling="2025-10-13 05:45:11.513322605 +0000 UTC m=+62.979614866" observedRunningTime="2025-10-13 05:45:11.856353979 +0000 UTC m=+63.322646241" watchObservedRunningTime="2025-10-13 05:45:16.119863241 +0000 UTC m=+67.586155522" Oct 13 05:45:16.941559 systemd[1]: Started sshd@16-10.0.0.150:22-10.0.0.1:59644.service - OpenSSH per-connection server daemon (10.0.0.1:59644). Oct 13 05:45:17.002058 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 59644 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:17.003822 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:17.008809 systemd-logind[1609]: New session 17 of user core. Oct 13 05:45:17.021054 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 05:45:17.143955 sshd[5686]: Connection closed by 10.0.0.1 port 59644 Oct 13 05:45:17.144363 sshd-session[5683]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:17.148805 systemd[1]: sshd@16-10.0.0.150:22-10.0.0.1:59644.service: Deactivated successfully. Oct 13 05:45:17.150995 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:45:17.152040 systemd-logind[1609]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:45:17.153200 systemd-logind[1609]: Removed session 17. 
Oct 13 05:45:21.821309 containerd[1633]: time="2025-10-13T05:45:21.821250557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628\" id:\"32508cab6010d3b636a19b1c886052534b2790885fc765c4b7472638a74a9bed\" pid:5714 exited_at:{seconds:1760334321 nanos:820963860}" Oct 13 05:45:22.156951 systemd[1]: Started sshd@17-10.0.0.150:22-10.0.0.1:59646.service - OpenSSH per-connection server daemon (10.0.0.1:59646). Oct 13 05:45:22.248468 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 59646 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:22.250122 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:22.254688 systemd-logind[1609]: New session 18 of user core. Oct 13 05:45:22.271130 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:45:22.422785 sshd[5732]: Connection closed by 10.0.0.1 port 59646 Oct 13 05:45:22.423131 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:22.428761 systemd[1]: sshd@17-10.0.0.150:22-10.0.0.1:59646.service: Deactivated successfully. Oct 13 05:45:22.431108 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:45:22.432214 systemd-logind[1609]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:45:22.433835 systemd-logind[1609]: Removed session 18. Oct 13 05:45:27.434905 systemd[1]: Started sshd@18-10.0.0.150:22-10.0.0.1:49352.service - OpenSSH per-connection server daemon (10.0.0.1:49352). Oct 13 05:45:27.494036 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 49352 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:27.495291 sshd-session[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:27.499461 systemd-logind[1609]: New session 19 of user core. 
Oct 13 05:45:27.511096 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 05:45:27.617389 kubelet[2828]: E1013 05:45:27.617347 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:45:27.651676 sshd[5748]: Connection closed by 10.0.0.1 port 49352 Oct 13 05:45:27.652020 sshd-session[5745]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:27.657044 systemd[1]: sshd@18-10.0.0.150:22-10.0.0.1:49352.service: Deactivated successfully. Oct 13 05:45:27.659232 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 05:45:27.660073 systemd-logind[1609]: Session 19 logged out. Waiting for processes to exit. Oct 13 05:45:27.661270 systemd-logind[1609]: Removed session 19. Oct 13 05:45:28.973710 kubelet[2828]: I1013 05:45:28.973666 2828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:45:29.154529 kubelet[2828]: I1013 05:45:29.154472 2828 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:45:29.167388 containerd[1633]: time="2025-10-13T05:45:29.167308646Z" level=info msg="StopContainer for \"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\" with timeout 30 (s)" Oct 13 05:45:29.174400 containerd[1633]: time="2025-10-13T05:45:29.174360903Z" level=info msg="Stop container \"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\" with signal terminated" Oct 13 05:45:29.188169 systemd[1]: cri-containerd-2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70.scope: Deactivated successfully. Oct 13 05:45:29.189611 systemd[1]: cri-containerd-2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70.scope: Consumed 1.321s CPU time, 44M memory peak. 
Oct 13 05:45:29.194876 containerd[1633]: time="2025-10-13T05:45:29.194816082Z" level=info msg="received exit event container_id:\"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\" id:\"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\" pid:5192 exit_status:1 exited_at:{seconds:1760334329 nanos:190813718}" Oct 13 05:45:29.195128 containerd[1633]: time="2025-10-13T05:45:29.194860296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\" id:\"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\" pid:5192 exit_status:1 exited_at:{seconds:1760334329 nanos:190813718}" Oct 13 05:45:29.224485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70-rootfs.mount: Deactivated successfully. Oct 13 05:45:29.814525 containerd[1633]: time="2025-10-13T05:45:29.814459048Z" level=info msg="StopContainer for \"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\" returns successfully" Oct 13 05:45:29.817277 containerd[1633]: time="2025-10-13T05:45:29.817213049Z" level=info msg="StopPodSandbox for \"6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879\"" Oct 13 05:45:29.827686 containerd[1633]: time="2025-10-13T05:45:29.827650066Z" level=info msg="Container to stop \"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:45:29.842837 systemd[1]: cri-containerd-6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879.scope: Deactivated successfully. 
Oct 13 05:45:29.845616 containerd[1633]: time="2025-10-13T05:45:29.845563276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879\" id:\"6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879\" pid:5151 exit_status:137 exited_at:{seconds:1760334329 nanos:845140803}" Oct 13 05:45:29.873040 containerd[1633]: time="2025-10-13T05:45:29.873005248Z" level=info msg="shim disconnected" id=6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879 namespace=k8s.io Oct 13 05:45:29.873040 containerd[1633]: time="2025-10-13T05:45:29.873034192Z" level=warning msg="cleaning up after shim disconnected" id=6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879 namespace=k8s.io Oct 13 05:45:29.873163 containerd[1633]: time="2025-10-13T05:45:29.873041968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:45:29.875011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879-rootfs.mount: Deactivated successfully. Oct 13 05:45:30.090649 containerd[1633]: time="2025-10-13T05:45:30.090335672Z" level=info msg="received exit event sandbox_id:\"6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879\" exit_status:137 exited_at:{seconds:1760334329 nanos:845140803}" Oct 13 05:45:30.093242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879-shm.mount: Deactivated successfully. 
Oct 13 05:45:30.189218 systemd-networkd[1533]: cali41c23a99d51: Link DOWN Oct 13 05:45:30.189231 systemd-networkd[1533]: cali41c23a99d51: Lost carrier Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.186 [INFO][5835] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.186 [INFO][5835] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" iface="eth0" netns="/var/run/netns/cni-d1d10849-37a3-1d78-01bd-b02a9b97828b" Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.187 [INFO][5835] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" iface="eth0" netns="/var/run/netns/cni-d1d10849-37a3-1d78-01bd-b02a9b97828b" Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.196 [INFO][5835] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" after=9.138666ms iface="eth0" netns="/var/run/netns/cni-d1d10849-37a3-1d78-01bd-b02a9b97828b" Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.196 [INFO][5835] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.196 [INFO][5835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.242 [INFO][5845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" HandleID="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Workload="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.242 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.242 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.669 [INFO][5845] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" HandleID="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Workload="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.669 [INFO][5845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" HandleID="k8s-pod-network.6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Workload="localhost-k8s-calico--apiserver--5f56546f6c--x5lcg-eth0" Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.698 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:45:30.707508 containerd[1633]: 2025-10-13 05:45:30.703 [INFO][5835] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879" Oct 13 05:45:30.714665 systemd[1]: run-netns-cni\x2dd1d10849\x2d37a3\x2d1d78\x2d01bd\x2db02a9b97828b.mount: Deactivated successfully. 
Oct 13 05:45:30.717588 containerd[1633]: time="2025-10-13T05:45:30.717274752Z" level=info msg="TearDown network for sandbox \"6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879\" successfully" Oct 13 05:45:30.717588 containerd[1633]: time="2025-10-13T05:45:30.717324105Z" level=info msg="StopPodSandbox for \"6aa718b525e38821efdda6c1b7659b89a07ee81370714a39879f622d2bef8879\" returns successfully" Oct 13 05:45:30.848177 kubelet[2828]: I1013 05:45:30.848090 2828 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnfqm\" (UniqueName: \"kubernetes.io/projected/3f1b517f-a7d8-41ac-a557-d8b6e064d2dd-kube-api-access-tnfqm\") pod \"3f1b517f-a7d8-41ac-a557-d8b6e064d2dd\" (UID: \"3f1b517f-a7d8-41ac-a557-d8b6e064d2dd\") " Oct 13 05:45:30.848818 kubelet[2828]: I1013 05:45:30.848213 2828 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3f1b517f-a7d8-41ac-a557-d8b6e064d2dd-calico-apiserver-certs\") pod \"3f1b517f-a7d8-41ac-a557-d8b6e064d2dd\" (UID: \"3f1b517f-a7d8-41ac-a557-d8b6e064d2dd\") " Oct 13 05:45:30.851570 containerd[1633]: time="2025-10-13T05:45:30.851523893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6fac515eff233a1f50cef1cba08af029597245c1525475a5273ed0ec722e786\" id:\"e8adf6290dd61b65de7719c2f6db933da19bbe07aed88100dda4a55898834ade\" pid:5876 exited_at:{seconds:1760334330 nanos:851242269}" Oct 13 05:45:30.854475 kubelet[2828]: I1013 05:45:30.854387 2828 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1b517f-a7d8-41ac-a557-d8b6e064d2dd-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "3f1b517f-a7d8-41ac-a557-d8b6e064d2dd" (UID: "3f1b517f-a7d8-41ac-a557-d8b6e064d2dd"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:45:30.854736 kubelet[2828]: I1013 05:45:30.854691 2828 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1b517f-a7d8-41ac-a557-d8b6e064d2dd-kube-api-access-tnfqm" (OuterVolumeSpecName: "kube-api-access-tnfqm") pod "3f1b517f-a7d8-41ac-a557-d8b6e064d2dd" (UID: "3f1b517f-a7d8-41ac-a557-d8b6e064d2dd"). InnerVolumeSpecName "kube-api-access-tnfqm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:45:30.858275 systemd[1]: var-lib-kubelet-pods-3f1b517f\x2da7d8\x2d41ac\x2da557\x2dd8b6e064d2dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtnfqm.mount: Deactivated successfully. Oct 13 05:45:30.858393 systemd[1]: var-lib-kubelet-pods-3f1b517f\x2da7d8\x2d41ac\x2da557\x2dd8b6e064d2dd-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Oct 13 05:45:30.893626 kubelet[2828]: I1013 05:45:30.893598 2828 scope.go:117] "RemoveContainer" containerID="2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70" Oct 13 05:45:30.895693 containerd[1633]: time="2025-10-13T05:45:30.895659053Z" level=info msg="RemoveContainer for \"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\"" Oct 13 05:45:30.899381 systemd[1]: Removed slice kubepods-besteffort-pod3f1b517f_a7d8_41ac_a557_d8b6e064d2dd.slice - libcontainer container kubepods-besteffort-pod3f1b517f_a7d8_41ac_a557_d8b6e064d2dd.slice. Oct 13 05:45:30.899511 systemd[1]: kubepods-besteffort-pod3f1b517f_a7d8_41ac_a557_d8b6e064d2dd.slice: Consumed 1.351s CPU time, 44.2M memory peak. 
Oct 13 05:45:30.949404 kubelet[2828]: I1013 05:45:30.949362 2828 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3f1b517f-a7d8-41ac-a557-d8b6e064d2dd-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Oct 13 05:45:30.949404 kubelet[2828]: I1013 05:45:30.949387 2828 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tnfqm\" (UniqueName: \"kubernetes.io/projected/3f1b517f-a7d8-41ac-a557-d8b6e064d2dd-kube-api-access-tnfqm\") on node \"localhost\" DevicePath \"\"" Oct 13 05:45:31.295017 containerd[1633]: time="2025-10-13T05:45:31.294970298Z" level=info msg="RemoveContainer for \"2d84c5051793d42b93ef6e30a42632da21dddea88e6ca2d3d36d12bdad5d4a70\" returns successfully" Oct 13 05:45:32.621415 kubelet[2828]: I1013 05:45:32.621280 2828 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1b517f-a7d8-41ac-a557-d8b6e064d2dd" path="/var/lib/kubelet/pods/3f1b517f-a7d8-41ac-a557-d8b6e064d2dd/volumes" Oct 13 05:45:32.665127 systemd[1]: Started sshd@19-10.0.0.150:22-10.0.0.1:49358.service - OpenSSH per-connection server daemon (10.0.0.1:49358). Oct 13 05:45:32.731070 sshd[5895]: Accepted publickey for core from 10.0.0.1 port 49358 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:32.733127 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:32.737630 systemd-logind[1609]: New session 20 of user core. Oct 13 05:45:32.745065 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 13 05:45:32.918072 sshd[5898]: Connection closed by 10.0.0.1 port 49358 Oct 13 05:45:32.918351 sshd-session[5895]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:32.927753 systemd[1]: sshd@19-10.0.0.150:22-10.0.0.1:49358.service: Deactivated successfully. Oct 13 05:45:32.930115 systemd[1]: session-20.scope: Deactivated successfully. 
Oct 13 05:45:32.931502 systemd-logind[1609]: Session 20 logged out. Waiting for processes to exit. Oct 13 05:45:32.934352 systemd[1]: Started sshd@20-10.0.0.150:22-10.0.0.1:49362.service - OpenSSH per-connection server daemon (10.0.0.1:49362). Oct 13 05:45:32.935508 systemd-logind[1609]: Removed session 20. Oct 13 05:45:33.025270 sshd[5911]: Accepted publickey for core from 10.0.0.1 port 49362 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:33.027025 sshd-session[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:33.031856 systemd-logind[1609]: New session 21 of user core. Oct 13 05:45:33.039146 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 05:45:33.237994 sshd[5914]: Connection closed by 10.0.0.1 port 49362 Oct 13 05:45:33.236851 sshd-session[5911]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:33.247995 systemd[1]: sshd@20-10.0.0.150:22-10.0.0.1:49362.service: Deactivated successfully. Oct 13 05:45:33.250003 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 05:45:33.251075 systemd-logind[1609]: Session 21 logged out. Waiting for processes to exit. Oct 13 05:45:33.254182 systemd[1]: Started sshd@21-10.0.0.150:22-10.0.0.1:49366.service - OpenSSH per-connection server daemon (10.0.0.1:49366). Oct 13 05:45:33.255055 systemd-logind[1609]: Removed session 21. Oct 13 05:45:33.321259 sshd[5925]: Accepted publickey for core from 10.0.0.1 port 49366 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:45:33.323146 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:45:33.328123 systemd-logind[1609]: New session 22 of user core. Oct 13 05:45:33.335112 systemd[1]: Started session-22.scope - Session 22 of User core. 
Oct 13 05:45:34.097759 sshd[5928]: Connection closed by 10.0.0.1 port 49366
Oct 13 05:45:34.098195 sshd-session[5925]: pam_unix(sshd:session): session closed for user core
Oct 13 05:45:34.112370 systemd[1]: sshd@21-10.0.0.150:22-10.0.0.1:49366.service: Deactivated successfully.
Oct 13 05:45:34.115783 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:45:34.118722 systemd-logind[1609]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:45:34.121312 systemd[1]: Started sshd@22-10.0.0.150:22-10.0.0.1:49370.service - OpenSSH per-connection server daemon (10.0.0.1:49370).
Oct 13 05:45:34.123762 systemd-logind[1609]: Removed session 22.
Oct 13 05:45:34.210074 sshd[5947]: Accepted publickey for core from 10.0.0.1 port 49370 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w
Oct 13 05:45:34.211445 sshd-session[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:45:34.216145 systemd-logind[1609]: New session 23 of user core.
Oct 13 05:45:34.224071 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 05:45:34.636264 sshd[5950]: Connection closed by 10.0.0.1 port 49370
Oct 13 05:45:34.637190 sshd-session[5947]: pam_unix(sshd:session): session closed for user core
Oct 13 05:45:34.648173 systemd[1]: sshd@22-10.0.0.150:22-10.0.0.1:49370.service: Deactivated successfully.
Oct 13 05:45:34.650764 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 05:45:34.651665 systemd-logind[1609]: Session 23 logged out. Waiting for processes to exit.
Oct 13 05:45:34.655974 systemd[1]: Started sshd@23-10.0.0.150:22-10.0.0.1:49380.service - OpenSSH per-connection server daemon (10.0.0.1:49380).
Oct 13 05:45:34.656636 systemd-logind[1609]: Removed session 23.
Oct 13 05:45:34.715434 sshd[5962]: Accepted publickey for core from 10.0.0.1 port 49380 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w
Oct 13 05:45:34.716734 sshd-session[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:45:34.721369 systemd-logind[1609]: New session 24 of user core.
Oct 13 05:45:34.736068 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 13 05:45:34.857832 sshd[5965]: Connection closed by 10.0.0.1 port 49380
Oct 13 05:45:34.858198 sshd-session[5962]: pam_unix(sshd:session): session closed for user core
Oct 13 05:45:34.863501 systemd[1]: sshd@23-10.0.0.150:22-10.0.0.1:49380.service: Deactivated successfully.
Oct 13 05:45:34.866039 systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 05:45:34.866916 systemd-logind[1609]: Session 24 logged out. Waiting for processes to exit.
Oct 13 05:45:34.868830 systemd-logind[1609]: Removed session 24.
Oct 13 05:45:35.618072 kubelet[2828]: E1013 05:45:35.618006 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:45:38.903126 containerd[1633]: time="2025-10-13T05:45:38.903077054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972\" id:\"e292f0795d0df4907ee6adc283b793c01b2fc75fd674f4db54f77d98fd12ab5b\" pid:5993 exited_at:{seconds:1760334338 nanos:902595170}"
Oct 13 05:45:39.882005 systemd[1]: Started sshd@24-10.0.0.150:22-10.0.0.1:49566.service - OpenSSH per-connection server daemon (10.0.0.1:49566).
Oct 13 05:45:39.943002 sshd[6007]: Accepted publickey for core from 10.0.0.1 port 49566 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w
Oct 13 05:45:39.944274 sshd-session[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:45:39.948499 systemd-logind[1609]: New session 25 of user core.
Oct 13 05:45:39.957058 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 13 05:45:40.069150 sshd[6010]: Connection closed by 10.0.0.1 port 49566
Oct 13 05:45:40.069462 sshd-session[6007]: pam_unix(sshd:session): session closed for user core
Oct 13 05:45:40.074392 systemd[1]: sshd@24-10.0.0.150:22-10.0.0.1:49566.service: Deactivated successfully.
Oct 13 05:45:40.076485 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 05:45:40.077399 systemd-logind[1609]: Session 25 logged out. Waiting for processes to exit.
Oct 13 05:45:40.078718 systemd-logind[1609]: Removed session 25.
Oct 13 05:45:41.617611 kubelet[2828]: E1013 05:45:41.617555 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:45:43.617506 kubelet[2828]: E1013 05:45:43.617453 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:45:44.620553 kubelet[2828]: E1013 05:45:44.620516 2828 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:45:45.084312 systemd[1]: Started sshd@25-10.0.0.150:22-10.0.0.1:49580.service - OpenSSH per-connection server daemon (10.0.0.1:49580).
Oct 13 05:45:45.169650 sshd[6029]: Accepted publickey for core from 10.0.0.1 port 49580 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w
Oct 13 05:45:45.171611 sshd-session[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:45:45.176324 systemd-logind[1609]: New session 26 of user core.
Oct 13 05:45:45.182075 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 13 05:45:45.365843 sshd[6032]: Connection closed by 10.0.0.1 port 49580
Oct 13 05:45:45.366077 sshd-session[6029]: pam_unix(sshd:session): session closed for user core
Oct 13 05:45:45.371113 systemd[1]: sshd@25-10.0.0.150:22-10.0.0.1:49580.service: Deactivated successfully.
Oct 13 05:45:45.373381 systemd[1]: session-26.scope: Deactivated successfully.
Oct 13 05:45:45.374182 systemd-logind[1609]: Session 26 logged out. Waiting for processes to exit.
Oct 13 05:45:45.375704 systemd-logind[1609]: Removed session 26.
Oct 13 05:45:45.725835 containerd[1633]: time="2025-10-13T05:45:45.725641549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb30d3d808a6be7f8c40ca2bf5a373698beaa3a041937ed1f2c1f86897a03972\" id:\"17816b1acd82251430d87ac413b5dba0bb3535ffeb2043b2bddbde19489e773a\" pid:6056 exited_at:{seconds:1760334345 nanos:725336742}"
Oct 13 05:45:50.387144 systemd[1]: Started sshd@26-10.0.0.150:22-10.0.0.1:56372.service - OpenSSH per-connection server daemon (10.0.0.1:56372).
Oct 13 05:45:50.466069 sshd[6073]: Accepted publickey for core from 10.0.0.1 port 56372 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w
Oct 13 05:45:50.467847 sshd-session[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:45:50.472681 systemd-logind[1609]: New session 27 of user core.
Oct 13 05:45:50.480065 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 13 05:45:50.643383 sshd[6076]: Connection closed by 10.0.0.1 port 56372
Oct 13 05:45:50.645164 sshd-session[6073]: pam_unix(sshd:session): session closed for user core
Oct 13 05:45:50.650494 systemd[1]: sshd@26-10.0.0.150:22-10.0.0.1:56372.service: Deactivated successfully.
Oct 13 05:45:50.652796 systemd[1]: session-27.scope: Deactivated successfully.
Oct 13 05:45:50.653707 systemd-logind[1609]: Session 27 logged out. Waiting for processes to exit.
Oct 13 05:45:50.655137 systemd-logind[1609]: Removed session 27.
Oct 13 05:45:51.821725 containerd[1633]: time="2025-10-13T05:45:51.821674803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9391ef3ffb51ee8a01b3275b965e0aad001f0cf5aef02fe07c52ed6de3b7a628\" id:\"fbfc53cb43fcb490e16d1a750a692c7cc4812804aa5e190c9d391c0f08ac84d2\" pid:6100 exited_at:{seconds:1760334351 nanos:821371128}"
Oct 13 05:45:55.660069 systemd[1]: Started sshd@27-10.0.0.150:22-10.0.0.1:56386.service - OpenSSH per-connection server daemon (10.0.0.1:56386).
Oct 13 05:45:55.730537 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 56386 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w
Oct 13 05:45:55.734499 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:45:55.740785 systemd-logind[1609]: New session 28 of user core.
Oct 13 05:45:55.750096 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 13 05:45:55.927586 sshd[6118]: Connection closed by 10.0.0.1 port 56386
Oct 13 05:45:55.928571 sshd-session[6113]: pam_unix(sshd:session): session closed for user core
Oct 13 05:45:55.935398 systemd-logind[1609]: Session 28 logged out. Waiting for processes to exit.
Oct 13 05:45:55.935754 systemd[1]: sshd@27-10.0.0.150:22-10.0.0.1:56386.service: Deactivated successfully.
Oct 13 05:45:55.937959 systemd[1]: session-28.scope: Deactivated successfully.
Oct 13 05:45:55.941426 systemd-logind[1609]: Removed session 28.