Sep 9 05:41:06.905335 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025
Sep 9 05:41:06.905364 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:41:06.905378 kernel: BIOS-provided physical RAM map:
Sep 9 05:41:06.905387 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 9 05:41:06.905395 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 9 05:41:06.905404 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 9 05:41:06.905414 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 9 05:41:06.905422 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 9 05:41:06.905430 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 9 05:41:06.905439 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 9 05:41:06.905448 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 9 05:41:06.905459 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 9 05:41:06.905468 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 9 05:41:06.905477 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 9 05:41:06.905488 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 9 05:41:06.905497 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 9 05:41:06.905509 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 05:41:06.905519 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 05:41:06.905528 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 05:41:06.905537 kernel: NX (Execute Disable) protection: active
Sep 9 05:41:06.905546 kernel: APIC: Static calls initialized
Sep 9 05:41:06.905555 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Sep 9 05:41:06.905565 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Sep 9 05:41:06.905574 kernel: extended physical RAM map:
Sep 9 05:41:06.905584 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 9 05:41:06.905593 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 9 05:41:06.905602 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 9 05:41:06.905615 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 9 05:41:06.905643 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Sep 9 05:41:06.905653 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Sep 9 05:41:06.905662 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Sep 9 05:41:06.905671 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Sep 9 05:41:06.905680 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Sep 9 05:41:06.905699 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 9 05:41:06.905709 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 9 05:41:06.905718 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 9 05:41:06.905727 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 9 05:41:06.905736 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 9 05:41:06.905749 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 9 05:41:06.905759 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 9 05:41:06.905773 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 9 05:41:06.905783 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 05:41:06.905793 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 05:41:06.905802 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 05:41:06.905815 kernel: efi: EFI v2.7 by EDK II
Sep 9 05:41:06.905825 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 9 05:41:06.905835 kernel: random: crng init done
Sep 9 05:41:06.905845 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 9 05:41:06.905855 kernel: secureboot: Secure boot enabled
Sep 9 05:41:06.905865 kernel: SMBIOS 2.8 present.
Sep 9 05:41:06.905876 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 9 05:41:06.905886 kernel: DMI: Memory slots populated: 1/1
Sep 9 05:41:06.905897 kernel: Hypervisor detected: KVM
Sep 9 05:41:06.905907 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 05:41:06.905917 kernel: kvm-clock: using sched offset of 5144815002 cycles
Sep 9 05:41:06.905931 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 05:41:06.905942 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 05:41:06.905953 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 05:41:06.905964 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 05:41:06.905975 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 9 05:41:06.905986 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 9 05:41:06.905997 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 05:41:06.906008 kernel: Using GB pages for direct mapping
Sep 9 05:41:06.906019 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:41:06.906033 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 9 05:41:06.906043 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 05:41:06.906054 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:41:06.906065 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:41:06.906076 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 9 05:41:06.906087 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:41:06.906098 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:41:06.906109 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:41:06.906119 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:41:06.906133 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 05:41:06.906143 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 9 05:41:06.906154 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 9 05:41:06.906165 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 9 05:41:06.906176 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 9 05:41:06.906187 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 9 05:41:06.906197 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 9 05:41:06.906208 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 9 05:41:06.906219 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 9 05:41:06.906233 kernel: No NUMA configuration found
Sep 9 05:41:06.906243 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 9 05:41:06.906254 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 9 05:41:06.906265 kernel: Zone ranges:
Sep 9 05:41:06.906276 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 05:41:06.906287 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 9 05:41:06.906297 kernel: Normal empty
Sep 9 05:41:06.906307 kernel: Device empty
Sep 9 05:41:06.906317 kernel: Movable zone start for each node
Sep 9 05:41:06.906329 kernel: Early memory node ranges
Sep 9 05:41:06.906340 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 9 05:41:06.906351 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 9 05:41:06.906362 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 9 05:41:06.906373 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 9 05:41:06.906383 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 9 05:41:06.906394 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 9 05:41:06.906405 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 05:41:06.906415 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 9 05:41:06.906424 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 05:41:06.906437 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 05:41:06.906447 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 9 05:41:06.906457 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 9 05:41:06.906467 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 05:41:06.906477 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 05:41:06.906487 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 05:41:06.906497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 05:41:06.906507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 05:41:06.906516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 05:41:06.906529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 05:41:06.906539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 05:41:06.906549 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 05:41:06.906559 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 05:41:06.906569 kernel: TSC deadline timer available
Sep 9 05:41:06.906578 kernel: CPU topo: Max. logical packages: 1
Sep 9 05:41:06.906588 kernel: CPU topo: Max. logical dies: 1
Sep 9 05:41:06.906598 kernel: CPU topo: Max. dies per package: 1
Sep 9 05:41:06.906618 kernel: CPU topo: Max. threads per core: 1
Sep 9 05:41:06.906644 kernel: CPU topo: Num. cores per package: 4
Sep 9 05:41:06.906654 kernel: CPU topo: Num. threads per package: 4
Sep 9 05:41:06.906664 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 05:41:06.906678 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 05:41:06.906699 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 05:41:06.906709 kernel: kvm-guest: setup PV sched yield
Sep 9 05:41:06.906720 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 9 05:41:06.906731 kernel: Booting paravirtualized kernel on KVM
Sep 9 05:41:06.906745 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 05:41:06.906756 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 05:41:06.906766 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 05:41:06.906777 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 05:41:06.906786 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 05:41:06.906797 kernel: kvm-guest: PV spinlocks enabled
Sep 9 05:41:06.906807 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 05:41:06.906818 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:41:06.906832 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:41:06.906843 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 05:41:06.906853 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 05:41:06.906863 kernel: Fallback order for Node 0: 0
Sep 9 05:41:06.906874 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 9 05:41:06.906884 kernel: Policy zone: DMA32
Sep 9 05:41:06.906895 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:41:06.906905 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 05:41:06.906915 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 05:41:06.906928 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 05:41:06.906938 kernel: Dynamic Preempt: voluntary
Sep 9 05:41:06.906949 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:41:06.906960 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:41:06.906971 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 05:41:06.906981 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:41:06.906992 kernel: Rude variant of Tasks RCU enabled.
Sep 9 05:41:06.907003 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:41:06.907013 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:41:06.907027 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 05:41:06.907037 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:41:06.907048 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:41:06.907059 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:41:06.907069 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 05:41:06.907079 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:41:06.907090 kernel: Console: colour dummy device 80x25
Sep 9 05:41:06.907100 kernel: printk: legacy console [ttyS0] enabled
Sep 9 05:41:06.907111 kernel: ACPI: Core revision 20240827
Sep 9 05:41:06.907125 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 05:41:06.907136 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 05:41:06.907147 kernel: x2apic enabled
Sep 9 05:41:06.907158 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 05:41:06.907169 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 05:41:06.907180 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 05:41:06.907192 kernel: kvm-guest: setup PV IPIs
Sep 9 05:41:06.907203 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 05:41:06.907214 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 05:41:06.907228 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 05:41:06.907238 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 05:41:06.907249 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 05:41:06.907260 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 05:41:06.907271 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 05:41:06.907282 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 05:41:06.907293 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 05:41:06.907304 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 05:41:06.907315 kernel: active return thunk: retbleed_return_thunk
Sep 9 05:41:06.907329 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 05:41:06.907340 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 05:41:06.907351 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 05:41:06.907362 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 05:41:06.907374 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 05:41:06.907384 kernel: active return thunk: srso_return_thunk
Sep 9 05:41:06.907394 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 05:41:06.907405 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 05:41:06.907418 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 05:41:06.907428 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 05:41:06.907438 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 05:41:06.907449 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 05:41:06.907460 kernel: Freeing SMP alternatives memory: 32K
Sep 9 05:41:06.907470 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:41:06.907480 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:41:06.907490 kernel: landlock: Up and running.
Sep 9 05:41:06.907500 kernel: SELinux: Initializing.
Sep 9 05:41:06.907514 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:41:06.907524 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:41:06.907535 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 05:41:06.907545 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 05:41:06.907555 kernel: ... version: 0
Sep 9 05:41:06.907565 kernel: ... bit width: 48
Sep 9 05:41:06.907575 kernel: ... generic registers: 6
Sep 9 05:41:06.907586 kernel: ... value mask: 0000ffffffffffff
Sep 9 05:41:06.907596 kernel: ... max period: 00007fffffffffff
Sep 9 05:41:06.907609 kernel: ... fixed-purpose events: 0
Sep 9 05:41:06.907634 kernel: ... event mask: 000000000000003f
Sep 9 05:41:06.907646 kernel: signal: max sigframe size: 1776
Sep 9 05:41:06.907656 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:41:06.907667 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:41:06.907678 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:41:06.907699 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:41:06.907710 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 05:41:06.907720 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 05:41:06.907731 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 05:41:06.907745 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 05:41:06.907756 kernel: Memory: 2409224K/2552216K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 137064K reserved, 0K cma-reserved)
Sep 9 05:41:06.907766 kernel: devtmpfs: initialized
Sep 9 05:41:06.907777 kernel: x86/mm: Memory block size: 128MB
Sep 9 05:41:06.907788 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 9 05:41:06.907798 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 9 05:41:06.907809 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:41:06.907820 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 05:41:06.907834 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:41:06.907844 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:41:06.907855 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:41:06.907866 kernel: audit: type=2000 audit(1757396465.210:1): state=initialized audit_enabled=0 res=1
Sep 9 05:41:06.907877 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:41:06.907888 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 05:41:06.907898 kernel: cpuidle: using governor menu
Sep 9 05:41:06.907909 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:41:06.907919 kernel: dca service started, version 1.12.1
Sep 9 05:41:06.907933 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 9 05:41:06.907943 kernel: PCI: Using configuration type 1 for base access
Sep 9 05:41:06.907954 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 05:41:06.907965 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 05:41:06.907975 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 05:41:06.907986 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:41:06.907996 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:41:06.908007 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:41:06.908017 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:41:06.908030 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:41:06.908040 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 05:41:06.908051 kernel: ACPI: Interpreter enabled
Sep 9 05:41:06.908062 kernel: ACPI: PM: (supports S0 S5)
Sep 9 05:41:06.908072 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 05:41:06.908083 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 05:41:06.908094 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 05:41:06.908104 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 05:41:06.908114 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 05:41:06.908333 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:41:06.908482 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 05:41:06.908639 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 05:41:06.908654 kernel: PCI host bridge to bus 0000:00
Sep 9 05:41:06.908823 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 05:41:06.908968 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 05:41:06.909114 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 05:41:06.909256 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 9 05:41:06.909398 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 9 05:41:06.909536 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 9 05:41:06.909709 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 05:41:06.909959 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:41:06.910126 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 05:41:06.910283 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 9 05:41:06.910438 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 9 05:41:06.910592 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 9 05:41:06.910774 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 05:41:06.910969 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 05:41:06.911185 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 9 05:41:06.911342 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 9 05:41:06.911501 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 9 05:41:06.911758 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 05:41:06.911885 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 9 05:41:06.912003 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 9 05:41:06.912120 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 9 05:41:06.912246 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 05:41:06.912409 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 9 05:41:06.912571 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 9 05:41:06.912817 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 9 05:41:06.912998 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 9 05:41:06.913227 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 05:41:06.913389 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 05:41:06.913536 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 05:41:06.913729 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 9 05:41:06.913875 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 9 05:41:06.914029 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 05:41:06.914156 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 9 05:41:06.914167 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 05:41:06.914175 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 05:41:06.914184 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 05:41:06.914196 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 05:41:06.914204 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 05:41:06.914212 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 05:41:06.914220 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 05:41:06.914228 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 05:41:06.914236 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 05:41:06.914244 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 05:41:06.914252 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 05:41:06.914260 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 05:41:06.914271 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 05:41:06.914279 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 05:41:06.914287 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 05:41:06.914295 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 05:41:06.914303 kernel: iommu: Default domain type: Translated
Sep 9 05:41:06.914311 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 05:41:06.914318 kernel: efivars: Registered efivars operations
Sep 9 05:41:06.914326 kernel: PCI: Using ACPI for IRQ routing
Sep 9 05:41:06.914335 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 05:41:06.914343 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 9 05:41:06.914353 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff]
Sep 9 05:41:06.914361 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff]
Sep 9 05:41:06.914369 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 9 05:41:06.914377 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 9 05:41:06.914494 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 05:41:06.914636 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 05:41:06.914766 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 05:41:06.914777 kernel: vgaarb: loaded
Sep 9 05:41:06.914788 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 05:41:06.914797 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 05:41:06.914805 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 05:41:06.914813 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:41:06.914821 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:41:06.914829 kernel: pnp: PnP ACPI init
Sep 9 05:41:06.915004 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 9 05:41:06.915027 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 05:41:06.915045 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 05:41:06.915056 kernel: NET: Registered PF_INET protocol family
Sep 9 05:41:06.915066 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 05:41:06.915083 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 05:41:06.915094 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:41:06.915105 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 05:41:06.915115 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 05:41:06.915125 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 05:41:06.915136 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:41:06.915149 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:41:06.915160 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:41:06.915170 kernel: NET: Registered PF_XDP protocol family
Sep 9 05:41:06.915336 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 9 05:41:06.915490 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 9 05:41:06.915643 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 05:41:06.915770 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 05:41:06.915904 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 05:41:06.916065 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 9 05:41:06.916213 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 9 05:41:06.916351 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 9 05:41:06.916364 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:41:06.916382 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 05:41:06.916391 kernel: Initialise system trusted keyrings
Sep 9 05:41:06.916402 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 05:41:06.916414 kernel: Key type asymmetric registered
Sep 9 05:41:06.916426 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:41:06.916458 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 05:41:06.916472 kernel: io scheduler mq-deadline registered
Sep 9 05:41:06.916484 kernel: io scheduler kyber registered
Sep 9 05:41:06.916498 kernel: io scheduler bfq registered
Sep 9 05:41:06.916509 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 05:41:06.916527 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 05:41:06.916538 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 05:41:06.916549 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 05:41:06.916563 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:41:06.916576 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 05:41:06.916588 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 05:41:06.916599 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 05:41:06.916609 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 05:41:06.916768 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 05:41:06.916784 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 9 05:41:06.916918 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 05:41:06.917066 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T05:41:06 UTC (1757396466)
Sep 9 05:41:06.917210 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 9 05:41:06.917225 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 05:41:06.917237 kernel: efifb: probing for efifb
Sep 9 05:41:06.917249 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 9 05:41:06.917260 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 9 05:41:06.917271 kernel: efifb: scrolling: redraw
Sep 9 05:41:06.917282 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 9 05:41:06.917293 kernel: Console: switching to colour frame buffer device 160x50
Sep 9 05:41:06.917307 kernel: fb0: EFI VGA frame buffer device
Sep 9 05:41:06.917318 kernel: pstore: Using crash dump compression: deflate
Sep 9 05:41:06.917326 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 9 05:41:06.917334 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:41:06.917343 kernel: Segment Routing with IPv6
Sep 9 05:41:06.917351 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:41:06.917361 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:41:06.917369 kernel: Key type dns_resolver registered
Sep 9 05:41:06.917378 kernel: IPI shorthand broadcast: enabled
Sep 9 05:41:06.917386 kernel: sched_clock: Marking stable (3134002674, 137545661)->(3292395934, -20847599)
Sep 9 05:41:06.917394 kernel: registered taskstats version 1
Sep 9 05:41:06.917402 kernel: Loading compiled-in X.509 certificates
Sep 9 05:41:06.917411 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 884b9ad6a330f59ae6e6488b20a5491e41ff24a3'
Sep 9 05:41:06.917421 kernel: Demotion targets for Node 0: null
Sep 9 05:41:06.917432 kernel: Key type .fscrypt registered
Sep 9 05:41:06.917446 kernel: Key type fscrypt-provisioning registered
Sep 9 05:41:06.917457 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 05:41:06.917467 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:41:06.917478 kernel: ima: No architecture policies found
Sep 9 05:41:06.917489 kernel: clk: Disabling unused clocks
Sep 9 05:41:06.917500 kernel: Warning: unable to open an initial console.
Sep 9 05:41:06.917512 kernel: Freeing unused kernel image (initmem) memory: 54076K Sep 9 05:41:06.917523 kernel: Write protecting the kernel read-only data: 24576k Sep 9 05:41:06.917535 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 9 05:41:06.917549 kernel: Run /init as init process Sep 9 05:41:06.917560 kernel: with arguments: Sep 9 05:41:06.917570 kernel: /init Sep 9 05:41:06.917579 kernel: with environment: Sep 9 05:41:06.917587 kernel: HOME=/ Sep 9 05:41:06.917595 kernel: TERM=linux Sep 9 05:41:06.917603 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 05:41:06.917613 systemd[1]: Successfully made /usr/ read-only. Sep 9 05:41:06.917643 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:41:06.917653 systemd[1]: Detected virtualization kvm. Sep 9 05:41:06.917661 systemd[1]: Detected architecture x86-64. Sep 9 05:41:06.917670 systemd[1]: Running in initrd. Sep 9 05:41:06.917678 systemd[1]: No hostname configured, using default hostname. Sep 9 05:41:06.917696 systemd[1]: Hostname set to . Sep 9 05:41:06.917704 systemd[1]: Initializing machine ID from VM UUID. Sep 9 05:41:06.917713 systemd[1]: Queued start job for default target initrd.target. Sep 9 05:41:06.917725 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:41:06.917734 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:41:06.917744 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 05:41:06.917753 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 9 05:41:06.917762 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 05:41:06.917773 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 05:41:06.917790 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 05:41:06.917802 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 05:41:06.917814 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:41:06.917826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:41:06.917835 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:41:06.917844 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:41:06.917853 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:41:06.917861 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:41:06.917872 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:41:06.917887 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:41:06.917899 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 05:41:06.917911 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 05:41:06.917921 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:41:06.917930 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:41:06.917939 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:41:06.917950 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:41:06.917962 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 05:41:06.917977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 9 05:41:06.917989 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 05:41:06.918001 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 05:41:06.918013 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 05:41:06.918024 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:41:06.918036 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:41:06.918045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:41:06.918054 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 05:41:06.918065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:41:06.918074 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 05:41:06.918108 systemd-journald[220]: Collecting audit messages is disabled. Sep 9 05:41:06.918132 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:41:06.918142 systemd-journald[220]: Journal started Sep 9 05:41:06.918161 systemd-journald[220]: Runtime Journal (/run/log/journal/3fccd32e33fd4164ae8acd9ef173ae14) is 6M, max 48.2M, 42.2M free. Sep 9 05:41:06.920283 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:41:06.913202 systemd-modules-load[222]: Inserted module 'overlay' Sep 9 05:41:06.922757 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:41:06.924400 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:41:06.934876 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:41:06.938569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 05:41:06.939793 systemd-tmpfiles[232]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 05:41:06.946055 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 05:41:06.948792 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:41:06.951053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:41:06.957661 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 05:41:06.960881 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 9 05:41:06.961977 kernel: Bridge firewalling registered Sep 9 05:41:06.963931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:41:06.965565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:41:06.980553 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:41:06.984562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:41:06.987290 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:41:07.005580 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 05:41:07.042419 systemd-resolved[260]: Positive Trust Anchors: Sep 9 05:41:07.042435 systemd-resolved[260]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:41:07.042465 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:41:07.045047 systemd-resolved[260]: Defaulting to hostname 'linux'. Sep 9 05:41:07.053609 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 05:41:07.051795 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:41:07.053336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:41:07.175695 kernel: SCSI subsystem initialized Sep 9 05:41:07.184658 kernel: Loading iSCSI transport class v2.0-870. Sep 9 05:41:07.196690 kernel: iscsi: registered transport (tcp) Sep 9 05:41:07.219713 kernel: iscsi: registered transport (qla4xxx) Sep 9 05:41:07.219781 kernel: QLogic iSCSI HBA Driver Sep 9 05:41:07.243789 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:41:07.264553 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 9 05:41:07.269087 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:41:07.379511 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 05:41:07.381983 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 05:41:07.444678 kernel: raid6: avx2x4 gen() 22122 MB/s Sep 9 05:41:07.461658 kernel: raid6: avx2x2 gen() 24017 MB/s Sep 9 05:41:07.478954 kernel: raid6: avx2x1 gen() 18793 MB/s Sep 9 05:41:07.478979 kernel: raid6: using algorithm avx2x2 gen() 24017 MB/s Sep 9 05:41:07.496976 kernel: raid6: .... xor() 14824 MB/s, rmw enabled Sep 9 05:41:07.497050 kernel: raid6: using avx2x2 recovery algorithm Sep 9 05:41:07.521663 kernel: xor: automatically using best checksumming function avx Sep 9 05:41:07.720688 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 05:41:07.730275 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:41:07.733311 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:41:07.779945 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 9 05:41:07.789381 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:41:07.790858 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 05:41:07.820786 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Sep 9 05:41:07.849654 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:41:07.852236 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:41:07.951781 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:41:07.955023 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 9 05:41:07.988664 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 05:41:07.993553 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 05:41:07.997645 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 9 05:41:07.997680 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 05:41:08.005662 kernel: libata version 3.00 loaded. Sep 9 05:41:08.026407 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:41:08.053761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:41:08.086435 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:41:08.089936 kernel: AES CTR mode by8 optimization enabled Sep 9 05:41:08.089975 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 05:41:08.091854 kernel: GPT:9289727 != 19775487 Sep 9 05:41:08.091881 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 05:41:08.091897 kernel: GPT:9289727 != 19775487 Sep 9 05:41:08.091944 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:41:08.094207 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 05:41:08.094234 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:41:08.096725 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 9 05:41:08.102820 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 05:41:08.103013 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 05:41:08.105083 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 05:41:08.105244 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 05:41:08.105386 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 05:41:08.110731 kernel: scsi host0: ahci Sep 9 05:41:08.116673 kernel: scsi host1: ahci Sep 9 05:41:08.122923 kernel: scsi host2: ahci Sep 9 05:41:08.124224 kernel: scsi host3: ahci Sep 9 05:41:08.124406 kernel: scsi host4: ahci Sep 9 05:41:08.131335 kernel: scsi host5: ahci Sep 9 05:41:08.131790 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 9 05:41:08.131808 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 9 05:41:08.131822 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 9 05:41:08.131836 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 9 05:41:08.131849 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 9 05:41:08.131862 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 9 05:41:08.144462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:41:08.167369 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 05:41:08.177245 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 05:41:08.184692 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 05:41:08.184978 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Sep 9 05:41:08.197599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 05:41:08.199147 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 05:41:08.226595 disk-uuid[633]: Primary Header is updated. Sep 9 05:41:08.226595 disk-uuid[633]: Secondary Entries is updated. Sep 9 05:41:08.226595 disk-uuid[633]: Secondary Header is updated. Sep 9 05:41:08.231667 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:41:08.235656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:41:08.442727 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 05:41:08.445288 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 05:41:08.445379 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 05:41:08.445398 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 05:41:08.445411 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 05:41:08.446673 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 05:41:08.447668 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 05:41:08.448991 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 05:41:08.449014 kernel: ata3.00: applying bridge limits Sep 9 05:41:08.449903 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 05:41:08.449924 kernel: ata3.00: configured for UDMA/100 Sep 9 05:41:08.452658 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 05:41:08.508681 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 05:41:08.509085 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 05:41:08.522806 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 05:41:08.863891 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 05:41:08.865844 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 9 05:41:08.867463 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:41:08.868763 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:41:08.872237 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 05:41:08.909247 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:41:09.280678 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:41:09.280959 disk-uuid[634]: The operation has completed successfully. Sep 9 05:41:09.315298 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 05:41:09.315422 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 05:41:09.344844 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 05:41:09.374596 sh[663]: Success Sep 9 05:41:09.395184 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 05:41:09.395232 kernel: device-mapper: uevent: version 1.0.3 Sep 9 05:41:09.396305 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 05:41:09.405671 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 05:41:09.434810 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 05:41:09.439144 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 05:41:09.459089 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 9 05:41:09.464675 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (675) Sep 9 05:41:09.464707 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56 Sep 9 05:41:09.466637 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:41:09.470924 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 05:41:09.470942 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 05:41:09.472183 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 05:41:09.474458 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:41:09.476678 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 05:41:09.479359 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 05:41:09.482097 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 05:41:09.510220 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (707) Sep 9 05:41:09.510266 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:41:09.510277 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:41:09.514055 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:41:09.514111 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:41:09.519648 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:41:09.520724 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 05:41:09.522487 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 9 05:41:09.704726 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:41:09.710971 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:41:09.716186 ignition[750]: Ignition 2.22.0 Sep 9 05:41:09.716203 ignition[750]: Stage: fetch-offline Sep 9 05:41:09.716256 ignition[750]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:41:09.716266 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:41:09.716382 ignition[750]: parsed url from cmdline: "" Sep 9 05:41:09.716386 ignition[750]: no config URL provided Sep 9 05:41:09.716391 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:41:09.716399 ignition[750]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:41:09.716424 ignition[750]: op(1): [started] loading QEMU firmware config module Sep 9 05:41:09.716429 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 05:41:09.726278 ignition[750]: op(1): [finished] loading QEMU firmware config module Sep 9 05:41:09.763899 ignition[750]: parsing config with SHA512: b554bb1d7291b91c2445f5894b2b21e4861b05ecb072e0850c49e5715d3c8ea928ee361c53390f98fcf5c273b21b815dca6b028517b68efe3572e8bf58319459 Sep 9 05:41:09.768000 systemd-networkd[851]: lo: Link UP Sep 9 05:41:09.768013 systemd-networkd[851]: lo: Gained carrier Sep 9 05:41:09.769833 systemd-networkd[851]: Enumeration completed Sep 9 05:41:09.770202 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:41:09.770206 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:41:09.770738 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:41:09.771632 systemd[1]: Reached target network.target - Network. 
Sep 9 05:41:09.772736 systemd-networkd[851]: eth0: Link UP Sep 9 05:41:09.772894 systemd-networkd[851]: eth0: Gained carrier Sep 9 05:41:09.772903 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:41:09.782360 unknown[750]: fetched base config from "system" Sep 9 05:41:09.782370 unknown[750]: fetched user config from "qemu" Sep 9 05:41:09.784146 ignition[750]: fetch-offline: fetch-offline passed Sep 9 05:41:09.784943 ignition[750]: Ignition finished successfully Sep 9 05:41:09.786670 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 05:41:09.789513 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:41:09.790117 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 05:41:09.791143 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 05:41:09.898872 ignition[858]: Ignition 2.22.0 Sep 9 05:41:09.898885 ignition[858]: Stage: kargs Sep 9 05:41:09.899023 ignition[858]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:41:09.899034 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:41:09.899774 ignition[858]: kargs: kargs passed Sep 9 05:41:09.899816 ignition[858]: Ignition finished successfully Sep 9 05:41:09.907169 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 05:41:09.909112 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 05:41:09.938719 ignition[865]: Ignition 2.22.0 Sep 9 05:41:09.938733 ignition[865]: Stage: disks Sep 9 05:41:09.938864 ignition[865]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:41:09.938874 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:41:09.939559 ignition[865]: disks: disks passed Sep 9 05:41:09.939600 ignition[865]: Ignition finished successfully Sep 9 05:41:09.946085 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 05:41:09.948254 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 05:41:09.948912 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 05:41:09.950848 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:41:09.951195 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:41:09.954837 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:41:09.957683 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 05:41:09.989797 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 05:41:09.997340 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 05:41:09.999170 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 05:41:10.150665 kernel: EXT4-fs (vda9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none. Sep 9 05:41:10.151848 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 05:41:10.154496 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 05:41:10.158733 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:41:10.162117 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 05:41:10.164472 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 9 05:41:10.164540 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 05:41:10.166668 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:41:10.177442 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 05:41:10.181444 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 05:41:10.187010 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883) Sep 9 05:41:10.187046 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:41:10.187061 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:41:10.191905 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:41:10.191990 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:41:10.194897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:41:10.230146 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 05:41:10.235960 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory Sep 9 05:41:10.241227 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 05:41:10.246170 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 05:41:10.353760 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 05:41:10.355383 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 05:41:10.357876 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 05:41:10.385649 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:41:10.399607 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 9 05:41:10.422036 ignition[997]: INFO : Ignition 2.22.0 Sep 9 05:41:10.422036 ignition[997]: INFO : Stage: mount Sep 9 05:41:10.424043 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:41:10.424043 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:41:10.424043 ignition[997]: INFO : mount: mount passed Sep 9 05:41:10.424043 ignition[997]: INFO : Ignition finished successfully Sep 9 05:41:10.430340 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 05:41:10.432417 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 05:41:10.464961 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 05:41:10.467079 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:41:10.500120 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009) Sep 9 05:41:10.500152 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:41:10.500163 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:41:10.504640 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:41:10.504657 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:41:10.506889 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 05:41:10.541248 ignition[1026]: INFO : Ignition 2.22.0
Sep 9 05:41:10.541248 ignition[1026]: INFO : Stage: files
Sep 9 05:41:10.543014 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:41:10.543014 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:41:10.543014 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 05:41:10.546888 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 05:41:10.546888 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 05:41:10.550008 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 05:41:10.551469 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 05:41:10.553269 unknown[1026]: wrote ssh authorized keys file for user: core
Sep 9 05:41:10.554413 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 05:41:10.555728 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 05:41:10.555728 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 9 05:41:10.810864 systemd-networkd[851]: eth0: Gained IPv6LL
Sep 9 05:41:10.906886 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 05:41:11.041714 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 9 05:41:11.043729 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 05:41:11.045467 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 05:41:11.045467 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 05:41:11.048750 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 05:41:11.048750 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 05:41:11.052104 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 05:41:11.053768 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 05:41:11.055556 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 05:41:11.289539 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 05:41:11.291457 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 05:41:11.293180 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 05:41:11.451938 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 05:41:11.454401 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 05:41:11.456384 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 9 05:41:11.925331 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 05:41:12.608393 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 05:41:12.608393 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 05:41:12.619515 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 05:41:12.913864 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 05:41:12.913864 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 05:41:12.913864 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 9 05:41:12.918956 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 05:41:12.918956 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 05:41:12.918956 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 9 05:41:12.918956 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 05:41:12.945465 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 05:41:12.952861 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 05:41:12.954555 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 05:41:12.954555 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 05:41:12.954555 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 05:41:12.954555 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 05:41:12.954555 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 05:41:12.954555 ignition[1026]: INFO : files: files passed
Sep 9 05:41:12.954555 ignition[1026]: INFO : Ignition finished successfully
Sep 9 05:41:12.967319 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 05:41:12.970091 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 05:41:12.972578 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 05:41:12.993684 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 05:41:12.993854 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 05:41:12.998067 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 05:41:13.002492 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:41:13.002492 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:41:13.006070 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:41:13.005450 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:41:13.007181 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 05:41:13.015408 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 05:41:13.085961 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 05:41:13.086157 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 05:41:13.087103 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 05:41:13.091642 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 05:41:13.092221 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 05:41:13.096097 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 05:41:13.139412 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:41:13.142050 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 05:41:13.167788 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:41:13.168403 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:41:13.168989 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 05:41:13.173206 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 05:41:13.173372 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:41:13.176544 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 05:41:13.177100 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 05:41:13.179996 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 05:41:13.180302 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 05:41:13.180644 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 05:41:13.180951 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:41:13.181276 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 05:41:13.181595 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:41:13.181955 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 05:41:13.182274 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 05:41:13.182595 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 05:41:13.182876 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 05:41:13.183031 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:41:13.200526 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:41:13.201378 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:41:13.203175 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 05:41:13.205397 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:41:13.207603 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 05:41:13.207762 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:41:13.208534 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 05:41:13.208676 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:41:13.209174 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 05:41:13.209441 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 05:41:13.219720 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:41:13.220066 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 05:41:13.225050 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 05:41:13.225436 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 05:41:13.225575 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:41:13.227237 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 05:41:13.227346 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:41:13.229127 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 05:41:13.229258 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:41:13.231076 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 05:41:13.231189 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 05:41:13.235553 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 05:41:13.238507 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 05:41:13.240098 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 05:41:13.240274 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:41:13.240809 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 05:41:13.240971 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:41:13.250011 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 05:41:13.250159 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 05:41:13.274738 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 05:41:13.326416 ignition[1081]: INFO : Ignition 2.22.0
Sep 9 05:41:13.326416 ignition[1081]: INFO : Stage: umount
Sep 9 05:41:13.328450 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:41:13.328450 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:41:13.328450 ignition[1081]: INFO : umount: umount passed
Sep 9 05:41:13.328450 ignition[1081]: INFO : Ignition finished successfully
Sep 9 05:41:13.333521 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 05:41:13.333711 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 05:41:13.334434 systemd[1]: Stopped target network.target - Network.
Sep 9 05:41:13.337305 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 05:41:13.337379 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 05:41:13.339311 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 05:41:13.339375 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 05:41:13.341322 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 05:41:13.341394 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 05:41:13.343582 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 05:41:13.343650 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 05:41:13.346730 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 05:41:13.347294 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 05:41:13.355887 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 05:41:13.356074 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 05:41:13.360683 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 05:41:13.360982 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 05:41:13.361107 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 05:41:13.365177 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 05:41:13.366184 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 05:41:13.374415 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 05:41:13.374471 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:41:13.375714 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 05:41:13.377083 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 05:41:13.377135 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:41:13.377479 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 05:41:13.377534 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:41:13.382400 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 05:41:13.382462 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:41:13.383145 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 05:41:13.383198 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:41:13.387550 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:41:13.390102 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 05:41:13.390174 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:41:13.411608 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 05:41:13.411839 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:41:13.413585 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 05:41:13.413645 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:41:13.416246 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 05:41:13.416292 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:41:13.416569 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 05:41:13.416646 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:41:13.422011 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 05:41:13.422065 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:41:13.422814 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 05:41:13.422862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:41:13.428636 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 05:41:13.429127 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 05:41:13.429189 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:41:13.435138 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 05:41:13.435216 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:41:13.444579 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 05:41:13.444669 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:41:13.448680 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 05:41:13.448739 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:41:13.451222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:41:13.451269 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:41:13.458780 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 05:41:13.458856 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 9 05:41:13.458900 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 05:41:13.458948 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:41:13.459419 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 05:41:13.459546 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 05:41:13.461272 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 05:41:13.461382 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 05:41:13.463550 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 05:41:13.463675 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 05:41:13.468907 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 05:41:13.469434 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 05:41:13.469503 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 05:41:13.471140 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 05:41:13.500444 systemd[1]: Switching root.
Sep 9 05:41:13.537986 systemd-journald[220]: Journal stopped
Sep 9 05:41:15.037261 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 9 05:41:15.037327 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 05:41:15.037341 kernel: SELinux: policy capability open_perms=1
Sep 9 05:41:15.037452 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 05:41:15.037476 kernel: SELinux: policy capability always_check_network=0
Sep 9 05:41:15.037502 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 05:41:15.037517 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 05:41:15.037534 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 05:41:15.037548 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 05:41:15.037567 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 05:41:15.037582 kernel: audit: type=1403 audit(1757396474.138:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 05:41:15.037602 systemd[1]: Successfully loaded SELinux policy in 67.818ms.
Sep 9 05:41:15.037645 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.209ms.
Sep 9 05:41:15.037664 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:41:15.037680 systemd[1]: Detected virtualization kvm.
Sep 9 05:41:15.037695 systemd[1]: Detected architecture x86-64.
Sep 9 05:41:15.037713 systemd[1]: Detected first boot.
Sep 9 05:41:15.037732 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:41:15.037754 zram_generator::config[1126]: No configuration found.
Sep 9 05:41:15.037771 kernel: Guest personality initialized and is inactive
Sep 9 05:41:15.037786 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 05:41:15.037800 kernel: Initialized host personality
Sep 9 05:41:15.037814 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 05:41:15.037826 systemd[1]: Populated /etc with preset unit settings.
Sep 9 05:41:15.037850 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 05:41:15.037865 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 05:41:15.037877 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 05:41:15.037890 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:41:15.037906 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 05:41:15.037922 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 05:41:15.037937 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 05:41:15.037952 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 05:41:15.037968 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 05:41:15.037987 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 05:41:15.038003 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 05:41:15.038018 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 05:41:15.038034 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:41:15.038050 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:41:15.038065 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 05:41:15.038087 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 05:41:15.038103 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 05:41:15.038122 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:41:15.038137 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 05:41:15.038152 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:41:15.038168 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:41:15.038194 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 05:41:15.038210 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 05:41:15.038225 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:41:15.038241 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 05:41:15.038260 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:41:15.038281 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:41:15.038296 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:41:15.038312 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:41:15.038328 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 05:41:15.038343 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 05:41:15.038359 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 05:41:15.038375 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:41:15.038391 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:41:15.038407 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:41:15.038426 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 05:41:15.038442 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 05:41:15.038459 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 05:41:15.038486 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 05:41:15.038502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:15.038518 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 05:41:15.038533 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 05:41:15.038559 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 05:41:15.038578 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 05:41:15.038594 systemd[1]: Reached target machines.target - Containers.
Sep 9 05:41:15.038609 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 05:41:15.038643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:41:15.038660 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:41:15.038675 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 05:41:15.038690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:41:15.038705 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:41:15.038721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:41:15.038739 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 05:41:15.038755 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:41:15.038771 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 05:41:15.038787 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 05:41:15.038802 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 05:41:15.038817 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 05:41:15.038833 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 05:41:15.038849 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:41:15.038867 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:41:15.038882 kernel: ACPI: bus type drm_connector registered
Sep 9 05:41:15.038897 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:41:15.038921 kernel: loop: module loaded
Sep 9 05:41:15.038941 kernel: fuse: init (API version 7.41)
Sep 9 05:41:15.038956 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:41:15.038996 systemd-journald[1197]: Collecting audit messages is disabled.
Sep 9 05:41:15.039025 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 05:41:15.039043 systemd-journald[1197]: Journal started
Sep 9 05:41:15.039072 systemd-journald[1197]: Runtime Journal (/run/log/journal/3fccd32e33fd4164ae8acd9ef173ae14) is 6M, max 48.2M, 42.2M free.
Sep 9 05:41:14.716113 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 05:41:14.736750 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 05:41:14.737352 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 05:41:15.043103 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 05:41:15.046754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:41:15.048836 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 05:41:15.048863 systemd[1]: Stopped verity-setup.service.
Sep 9 05:41:15.051638 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:15.056298 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:41:15.056967 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 05:41:15.058357 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 05:41:15.059989 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 05:41:15.062491 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 05:41:15.063913 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 05:41:15.065415 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 05:41:15.066771 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:41:15.068687 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 05:41:15.068925 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 05:41:15.070571 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:41:15.070813 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:41:15.072433 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:41:15.072674 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:41:15.074267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:41:15.074529 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:41:15.076253 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 05:41:15.076542 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 05:41:15.077988 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:41:15.078213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:41:15.079807 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:41:15.081377 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:41:15.083192 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 05:41:15.085337 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 05:41:15.100983 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:41:15.103929 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 05:41:15.106218 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 05:41:15.107515 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 05:41:15.107610 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:41:15.118893 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 05:41:15.130775 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 05:41:15.141398 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:41:15.143017 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 05:41:15.146984 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 05:41:15.148698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:41:15.154804 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 05:41:15.155974 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:41:15.157182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:41:15.159578 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 05:41:15.162740 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 05:41:15.165760 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:41:15.167164 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 05:41:15.168520 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 05:41:15.177884 systemd-journald[1197]: Time spent on flushing to /var/log/journal/3fccd32e33fd4164ae8acd9ef173ae14 is 20.041ms for 1045 entries.
Sep 9 05:41:15.177884 systemd-journald[1197]: System Journal (/var/log/journal/3fccd32e33fd4164ae8acd9ef173ae14) is 8M, max 195.6M, 187.6M free.
Sep 9 05:41:15.658142 systemd-journald[1197]: Received client request to flush runtime journal.
Sep 9 05:41:15.658184 kernel: loop0: detected capacity change from 0 to 128016
Sep 9 05:41:15.658198 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 05:41:15.658211 kernel: loop1: detected capacity change from 0 to 110984
Sep 9 05:41:15.658223 kernel: loop2: detected capacity change from 0 to 229808
Sep 9 05:41:15.658241 kernel: loop3: detected capacity change from 0 to 128016
Sep 9 05:41:15.299028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:41:15.303540 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Sep 9 05:41:15.303553 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Sep 9 05:41:15.307730 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:41:15.640297 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 05:41:15.644734 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 05:41:15.659519 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 05:41:15.663658 kernel: loop4: detected capacity change from 0 to 110984
Sep 9 05:41:15.687647 kernel: loop5: detected capacity change from 0 to 229808
Sep 9 05:41:15.697044 (sd-merge)[1260]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 05:41:15.697609 (sd-merge)[1260]: Merged extensions into '/usr'.
Sep 9 05:41:15.702114 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 05:41:15.702139 systemd[1]: Reloading...
Sep 9 05:41:15.775648 zram_generator::config[1291]: No configuration found.
Sep 9 05:41:15.893635 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 05:41:15.973614 systemd[1]: Reloading finished in 270 ms.
Sep 9 05:41:16.007243 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 05:41:16.129961 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 05:41:16.131843 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 05:41:16.133499 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 05:41:16.139733 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 05:41:16.150911 systemd[1]: Starting ensure-sysext.service...
Sep 9 05:41:16.152824 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 05:41:16.155361 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:41:16.159726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:41:16.180876 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 05:41:16.181317 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 05:41:16.181658 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 05:41:16.181912 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 05:41:16.181922 systemd[1]: Reload requested from client PID 1330 ('systemctl') (unit ensure-sysext.service)...
Sep 9 05:41:16.181933 systemd[1]: Reloading...
Sep 9 05:41:16.183188 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 05:41:16.183533 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Sep 9 05:41:16.183702 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Sep 9 05:41:16.185387 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Sep 9 05:41:16.185770 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Sep 9 05:41:16.187916 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:41:16.187931 systemd-tmpfiles[1333]: Skipping /boot
Sep 9 05:41:16.198397 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:41:16.198514 systemd-tmpfiles[1333]: Skipping /boot
Sep 9 05:41:16.245726 zram_generator::config[1369]: No configuration found.
Sep 9 05:41:16.537505 systemd[1]: Reloading finished in 355 ms.
Sep 9 05:41:16.553900 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:41:16.579939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:41:16.589609 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:41:16.602646 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 05:41:16.606798 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 05:41:16.609922 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:41:16.612104 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 05:41:16.615518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:16.615710 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:41:16.616896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:41:16.620909 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:41:16.623942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:41:16.625124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:41:16.625233 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:41:16.625334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:16.631605 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:16.631863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:41:16.632094 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:41:16.632249 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:41:16.635435 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 05:41:16.637004 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:16.638864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:41:16.639162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:41:16.641086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:41:16.641357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:41:16.643320 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:41:16.643613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:41:16.668220 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 05:41:16.676235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:16.676486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:41:16.677865 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:41:16.680406 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:41:16.683814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:41:16.699347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:41:16.700904 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:41:16.701019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:41:16.701162 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:41:16.705458 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:41:16.705718 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:41:16.709223 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 05:41:16.711045 systemd[1]: Finished ensure-sysext.service.
Sep 9 05:41:16.712309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:41:16.717009 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:41:16.720594 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 05:41:16.722445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:41:16.722855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:41:16.724753 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:41:16.725015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:41:16.754573 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:41:16.754691 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:41:16.757170 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 05:41:16.880548 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 05:41:16.881945 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 05:41:16.895092 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 05:41:16.896636 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 05:41:17.108932 systemd-resolved[1406]: Positive Trust Anchors:
Sep 9 05:41:17.108957 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:41:17.108987 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:41:17.122006 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 05:41:17.154482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:41:17.157576 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 05:41:17.204389 systemd-udevd[1451]: Using default interface naming scheme 'v255'.
Sep 9 05:41:17.231926 systemd-resolved[1406]: Defaulting to hostname 'linux'.
Sep 9 05:41:17.233272 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:41:17.235059 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:41:17.236746 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 05:41:17.238286 augenrules[1455]: No rules
Sep 9 05:41:17.239953 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:41:17.240244 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:41:17.288877 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:41:17.290789 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:41:17.291990 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 05:41:17.293373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 05:41:17.295055 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 05:41:17.298069 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 05:41:17.299240 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 05:41:17.300498 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 05:41:17.301941 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 05:41:17.301980 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:41:17.303104 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:41:17.305364 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 05:41:17.308249 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 05:41:17.313498 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 05:41:17.315945 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 05:41:17.317377 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 05:41:17.327495 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 05:41:17.328979 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 05:41:17.332587 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:41:17.334243 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 05:41:17.340664 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:41:17.341649 systemd[1]: Reached target basic.target - Basic System.
Sep 9 05:41:17.342609 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:41:17.342656 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:41:17.345762 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 05:41:17.351954 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 05:41:17.357773 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 05:41:17.364702 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 05:41:17.365934 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 05:41:17.368905 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 05:41:17.370978 jq[1494]: false
Sep 9 05:41:17.372957 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 05:41:17.382183 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 05:41:17.386526 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Refreshing passwd entry cache
Sep 9 05:41:17.386902 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 05:41:17.387117 oslogin_cache_refresh[1496]: Refreshing passwd entry cache
Sep 9 05:41:17.389822 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Failure getting users, quitting
Sep 9 05:41:17.389869 oslogin_cache_refresh[1496]: Failure getting users, quitting
Sep 9 05:41:17.389945 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 05:41:17.389983 oslogin_cache_refresh[1496]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 05:41:17.390066 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Refreshing group entry cache
Sep 9 05:41:17.390100 oslogin_cache_refresh[1496]: Refreshing group entry cache
Sep 9 05:41:17.390574 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Failure getting groups, quitting
Sep 9 05:41:17.390614 oslogin_cache_refresh[1496]: Failure getting groups, quitting
Sep 9 05:41:17.390691 google_oslogin_nss_cache[1496]: oslogin_cache_refresh[1496]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 05:41:17.390790 oslogin_cache_refresh[1496]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 05:41:17.391874 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 05:41:17.402060 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 05:41:17.404356 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 05:41:17.405133 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 05:41:17.406244 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 05:41:17.409848 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 05:41:17.412888 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 05:41:17.414918 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 05:41:17.416914 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 05:41:17.418558 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 05:41:17.418913 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 05:41:17.419313 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 05:41:17.426078 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 05:41:17.428433 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 05:41:17.428863 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 05:41:17.429413 jq[1512]: true
Sep 9 05:41:17.433984 extend-filesystems[1495]: Found /dev/vda6
Sep 9 05:41:17.435838 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 05:41:17.436140 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 05:41:17.455319 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 05:41:17.463342 jq[1522]: true
Sep 9 05:41:17.469699 extend-filesystems[1495]: Found /dev/vda9
Sep 9 05:41:17.471909 extend-filesystems[1495]: Checking size of /dev/vda9
Sep 9 05:41:17.486183 tar[1519]: linux-amd64/LICENSE
Sep 9 05:41:17.486183 tar[1519]: linux-amd64/helm
Sep 9 05:41:17.486541 update_engine[1511]: I20250909 05:41:17.485085 1511 main.cc:92] Flatcar Update Engine starting
Sep 9 05:41:17.510324 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 05:41:17.521549 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 05:41:17.579177 systemd-logind[1510]: New seat seat0.
Sep 9 05:41:17.582655 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 9 05:41:17.589205 dbus-daemon[1492]: [system] SELinux support is enabled
Sep 9 05:41:17.589420 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 05:41:17.594643 kernel: ACPI: button: Power Button [PWRF]
Sep 9 05:41:17.595639 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 05:41:17.601322 update_engine[1511]: I20250909 05:41:17.601269 1511 update_check_scheduler.cc:74] Next update check in 3m37s
Sep 9 05:41:17.603143 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 05:41:17.604444 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 05:41:17.606663 dbus-daemon[1492]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 9 05:41:17.604468 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 05:41:17.604525 systemd-networkd[1491]: lo: Link UP
Sep 9 05:41:17.604529 systemd-networkd[1491]: lo: Gained carrier
Sep 9 05:41:17.605710 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 05:41:17.605727 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 05:41:17.606938 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 05:41:17.613140 systemd-networkd[1491]: Enumeration completed
Sep 9 05:41:17.614429 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:41:17.614439 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:41:17.615919 systemd-networkd[1491]: eth0: Link UP
Sep 9 05:41:17.616217 systemd-networkd[1491]: eth0: Gained carrier
Sep 9 05:41:17.616239 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:41:17.619033 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 05:41:17.620452 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:41:17.622494 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 05:41:17.624920 systemd[1]: Reached target network.target - Network.
Sep 9 05:41:17.628505 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 05:41:17.633077 systemd-networkd[1491]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 05:41:17.633878 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection.
Sep 9 05:41:17.633967 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 05:41:19.495020 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 05:41:19.495071 systemd-timesyncd[1446]: Initial clock synchronization to Tue 2025-09-09 05:41:19.494926 UTC.
Sep 9 05:41:19.495266 systemd-resolved[1406]: Clock change detected. Flushing caches.
Sep 9 05:41:19.496998 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 05:41:19.528011 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 05:41:19.545101 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 05:41:19.550118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:41:19.563379 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 9 05:41:19.563799 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 9 05:41:19.563967 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 05:41:19.606830 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 05:41:19.610101 systemd-logind[1510]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 9 05:41:19.610826 extend-filesystems[1495]: Resized partition /dev/vda9
Sep 9 05:41:19.620083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:41:19.621104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:41:19.622881 extend-filesystems[1591]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 05:41:19.625989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:41:19.643741 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 05:41:19.650578 bash[1565]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 05:41:19.649178 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 05:41:19.654631 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 05:41:19.674052 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 05:41:19.724269 extend-filesystems[1591]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 05:41:19.724269 extend-filesystems[1591]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 05:41:19.724269 extend-filesystems[1591]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 05:41:19.728227 extend-filesystems[1495]: Resized filesystem in /dev/vda9
Sep 9 05:41:19.731470 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 05:41:19.733929 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 05:41:19.792403 kernel: kvm_amd: TSC scaling supported
Sep 9 05:41:19.792500 kernel: kvm_amd: Nested Virtualization enabled
Sep 9 05:41:19.792515 kernel: kvm_amd: Nested Paging enabled
Sep 9 05:41:19.792528 kernel: kvm_amd: LBR virtualization supported
Sep 9 05:41:19.792540 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 9 05:41:19.795388 kernel: kvm_amd: Virtual GIF supported
Sep 9 05:41:19.793986 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 05:41:19.834917 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:41:19.845914 kernel: EDAC MC: Ver: 3.0.0
Sep 9 05:41:19.852129 tar[1519]: linux-amd64/README.md
Sep 9 05:41:19.880757 containerd[1583]: time="2025-09-09T05:41:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 05:41:19.881442 containerd[1583]: time="2025-09-09T05:41:19.881407124Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 05:41:19.883188 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 05:41:19.893114 containerd[1583]: time="2025-09-09T05:41:19.893055383Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.692µs"
Sep 9 05:41:19.893114 containerd[1583]: time="2025-09-09T05:41:19.893097292Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 05:41:19.893114 containerd[1583]: time="2025-09-09T05:41:19.893116237Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 05:41:19.893350 containerd[1583]: time="2025-09-09T05:41:19.893330068Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 05:41:19.893350 containerd[1583]: time="2025-09-09T05:41:19.893349906Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 05:41:19.893399 containerd[1583]: time="2025-09-09T05:41:19.893374662Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:41:19.893455 containerd[1583]: time="2025-09-09T05:41:19.893435827Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:41:19.893455 containerd[1583]: time="2025-09-09T05:41:19.893449673Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:41:19.893827 containerd[1583]: time="2025-09-09T05:41:19.893801673Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:41:19.893827 containerd[1583]: time="2025-09-09T05:41:19.893822262Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:41:19.893893 containerd[1583]: time="2025-09-09T05:41:19.893833483Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:41:19.893893 containerd[1583]: time="2025-09-09T05:41:19.893842289Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 05:41:19.893962 containerd[1583]: time="2025-09-09T05:41:19.893934923Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 05:41:19.894189 containerd[1583]: time="2025-09-09T05:41:19.894159655Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:41:19.894227 containerd[1583]: time="2025-09-09T05:41:19.894197706Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:41:19.894227 containerd[1583]: time="2025-09-09T05:41:19.894208025Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 05:41:19.894287 containerd[1583]: time="2025-09-09T05:41:19.894256837Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 05:41:19.894551 containerd[1583]: time="2025-09-09T05:41:19.894527304Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 05:41:19.894612 containerd[1583]: time="2025-09-09T05:41:19.894593428Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 05:41:19.900835 containerd[1583]: time="2025-09-09T05:41:19.900774081Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 05:41:19.900835 containerd[1583]: time="2025-09-09T05:41:19.900838222Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 05:41:19.900933 containerd[1583]: time="2025-09-09T05:41:19.900854262Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 05:41:19.900933 containerd[1583]: time="2025-09-09T05:41:19.900865643Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 05:41:19.900933 containerd[1583]: time="2025-09-09T05:41:19.900877275Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 05:41:19.900933 containerd[1583]: time="2025-09-09T05:41:19.900887264Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 05:41:19.900933 containerd[1583]: time="2025-09-09T05:41:19.900904386Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 05:41:19.900933 containerd[1583]: time="2025-09-09T05:41:19.900920376Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 05:41:19.900933 containerd[1583]: time="2025-09-09T05:41:19.900930034Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 05:41:19.900933 containerd[1583]: time="2025-09-09T05:41:19.900939391Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 05:41:19.901212 containerd[1583]: time="2025-09-09T05:41:19.900949300Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 05:41:19.901212 containerd[1583]: time="2025-09-09T05:41:19.900961643Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 05:41:19.901212 containerd[1583]: time="2025-09-09T05:41:19.901133756Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 05:41:19.901212 containerd[1583]: time="2025-09-09T05:41:19.901152932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 05:41:19.901212 containerd[1583]: time="2025-09-09T05:41:19.901167048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 05:41:19.901212 containerd[1583]: time="2025-09-09T05:41:19.901180283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 05:41:19.901212 containerd[1583]: time="2025-09-09T05:41:19.901190643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 05:41:19.901212 containerd[1583]: time="2025-09-09T05:41:19.901200361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 05:41:19.901212 containerd[1583]: time="2025-09-09T05:41:19.901211051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 05:41:19.901212 containerd[1583]: time="2025-09-09T05:41:19.901220869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 05:41:19.901452 containerd[1583]: time="2025-09-09T05:41:19.901230958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 05:41:19.901452 containerd[1583]: time="2025-09-09T05:41:19.901240807Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 05:41:19.901452 containerd[1583]: time="2025-09-09T05:41:19.901253741Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 05:41:19.901452 containerd[1583]: time="2025-09-09T05:41:19.901321709Z" level=info msg="Get image
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 05:41:19.901452 containerd[1583]: time="2025-09-09T05:41:19.901332970Z" level=info msg="Start snapshots syncer" Sep 9 05:41:19.901452 containerd[1583]: time="2025-09-09T05:41:19.901356694Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 05:41:19.901610 containerd[1583]: time="2025-09-09T05:41:19.901579793Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\
"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 05:41:19.901762 containerd[1583]: time="2025-09-09T05:41:19.901625118Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 05:41:19.901762 containerd[1583]: time="2025-09-09T05:41:19.901683116Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 05:41:19.901831 containerd[1583]: time="2025-09-09T05:41:19.901800967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 05:41:19.901831 containerd[1583]: time="2025-09-09T05:41:19.901818591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 05:41:19.901831 containerd[1583]: time="2025-09-09T05:41:19.901828068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 05:41:19.901904 containerd[1583]: time="2025-09-09T05:41:19.901839179Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 05:41:19.901904 containerd[1583]: time="2025-09-09T05:41:19.901851152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 05:41:19.901904 containerd[1583]: time="2025-09-09T05:41:19.901860409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 05:41:19.901904 containerd[1583]: time="2025-09-09T05:41:19.901870308Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 05:41:19.901904 containerd[1583]: time="2025-09-09T05:41:19.901890385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 
Sep 9 05:41:19.901904 containerd[1583]: time="2025-09-09T05:41:19.901904782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.901917977Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.901947192Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.901960206Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.901968011Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.901977108Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.901985534Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.901997907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.902007325Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.902024757Z" level=info msg="runtime interface created" Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.902029987Z" level=info msg="created NRI interface" Sep 9 05:41:19.902049 
containerd[1583]: time="2025-09-09T05:41:19.902036960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 05:41:19.902049 containerd[1583]: time="2025-09-09T05:41:19.902046388Z" level=info msg="Connect containerd service" Sep 9 05:41:19.902368 containerd[1583]: time="2025-09-09T05:41:19.902066656Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 05:41:19.903166 containerd[1583]: time="2025-09-09T05:41:19.903113960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:41:19.926632 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 05:41:19.951791 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 05:41:19.955991 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 05:41:19.984311 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 05:41:19.984631 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 05:41:19.987838 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 05:41:19.992221 containerd[1583]: time="2025-09-09T05:41:19.992157661Z" level=info msg="Start subscribing containerd event" Sep 9 05:41:19.992321 containerd[1583]: time="2025-09-09T05:41:19.992225628Z" level=info msg="Start recovering state" Sep 9 05:41:19.992407 containerd[1583]: time="2025-09-09T05:41:19.992379537Z" level=info msg="Start event monitor" Sep 9 05:41:19.992482 containerd[1583]: time="2025-09-09T05:41:19.992382092Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 9 05:41:19.992482 containerd[1583]: time="2025-09-09T05:41:19.992410004Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:41:19.992600 containerd[1583]: time="2025-09-09T05:41:19.992484564Z" level=info msg="Start streaming server" Sep 9 05:41:19.992600 containerd[1583]: time="2025-09-09T05:41:19.992504992Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 05:41:19.992600 containerd[1583]: time="2025-09-09T05:41:19.992510102Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:41:19.992600 containerd[1583]: time="2025-09-09T05:41:19.992581215Z" level=info msg="runtime interface starting up..." Sep 9 05:41:19.992600 containerd[1583]: time="2025-09-09T05:41:19.992589300Z" level=info msg="starting plugins..." Sep 9 05:41:19.992731 containerd[1583]: time="2025-09-09T05:41:19.992614207Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:41:19.993259 containerd[1583]: time="2025-09-09T05:41:19.992806438Z" level=info msg="containerd successfully booted in 0.112737s" Sep 9 05:41:19.992889 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 05:41:20.004997 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 05:41:20.008883 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 05:41:20.011672 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 05:41:20.013440 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 05:41:21.119001 systemd-networkd[1491]: eth0: Gained IPv6LL Sep 9 05:41:21.122087 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 05:41:21.124318 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 05:41:21.127564 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Sep 9 05:41:21.130924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:41:21.133822 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 05:41:21.163590 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 05:41:21.165567 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 05:41:21.165919 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 05:41:21.169455 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 05:41:21.883314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:41:21.885364 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 05:41:21.886829 systemd[1]: Startup finished in 3.198s (kernel) + 7.485s (initrd) + 5.953s (userspace) = 16.637s. Sep 9 05:41:21.888261 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:41:22.192162 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 05:41:22.193625 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:48316.service - OpenSSH per-connection server daemon (10.0.0.1:48316). Sep 9 05:41:22.260116 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 48316 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:41:22.262518 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:22.269582 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:41:22.270937 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:41:22.278478 systemd-logind[1510]: New session 1 of user core. Sep 9 05:41:22.293496 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Sep 9 05:41:22.297171 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 05:41:22.315996 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:41:22.318741 systemd-logind[1510]: New session c1 of user core. Sep 9 05:41:22.339485 kubelet[1672]: E0909 05:41:22.339441 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:41:22.343504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:41:22.343723 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:41:22.344139 systemd[1]: kubelet.service: Consumed 1.022s CPU time, 266.7M memory peak. Sep 9 05:41:22.463288 systemd[1689]: Queued start job for default target default.target. Sep 9 05:41:22.486023 systemd[1689]: Created slice app.slice - User Application Slice. Sep 9 05:41:22.486048 systemd[1689]: Reached target paths.target - Paths. Sep 9 05:41:22.486084 systemd[1689]: Reached target timers.target - Timers. Sep 9 05:41:22.487592 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:41:22.498396 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:41:22.498535 systemd[1689]: Reached target sockets.target - Sockets. Sep 9 05:41:22.498576 systemd[1689]: Reached target basic.target - Basic System. Sep 9 05:41:22.498616 systemd[1689]: Reached target default.target - Main User Target. Sep 9 05:41:22.498645 systemd[1689]: Startup finished in 173ms. Sep 9 05:41:22.498926 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:41:22.500462 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 9 05:41:22.565859 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:48322.service - OpenSSH per-connection server daemon (10.0.0.1:48322). Sep 9 05:41:22.621571 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 48322 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:41:22.622948 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:22.627219 systemd-logind[1510]: New session 2 of user core. Sep 9 05:41:22.640868 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 05:41:22.693995 sshd[1704]: Connection closed by 10.0.0.1 port 48322 Sep 9 05:41:22.694416 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:22.716592 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:48322.service: Deactivated successfully. Sep 9 05:41:22.718500 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 05:41:22.719318 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit. Sep 9 05:41:22.721894 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:48326.service - OpenSSH per-connection server daemon (10.0.0.1:48326). Sep 9 05:41:22.722657 systemd-logind[1510]: Removed session 2. Sep 9 05:41:22.781172 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 48326 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:41:22.782385 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:22.786434 systemd-logind[1510]: New session 3 of user core. Sep 9 05:41:22.797836 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 05:41:22.846631 sshd[1713]: Connection closed by 10.0.0.1 port 48326 Sep 9 05:41:22.846943 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:22.859208 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:48326.service: Deactivated successfully. Sep 9 05:41:22.860885 systemd[1]: session-3.scope: Deactivated successfully. 
Sep 9 05:41:22.861663 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit. Sep 9 05:41:22.864244 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:48334.service - OpenSSH per-connection server daemon (10.0.0.1:48334). Sep 9 05:41:22.865209 systemd-logind[1510]: Removed session 3. Sep 9 05:41:22.927044 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 48334 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:41:22.928569 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:22.932961 systemd-logind[1510]: New session 4 of user core. Sep 9 05:41:22.947880 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 05:41:23.001546 sshd[1722]: Connection closed by 10.0.0.1 port 48334 Sep 9 05:41:23.001828 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:23.013207 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:48334.service: Deactivated successfully. Sep 9 05:41:23.015069 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 05:41:23.015858 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Sep 9 05:41:23.018400 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:48350.service - OpenSSH per-connection server daemon (10.0.0.1:48350). Sep 9 05:41:23.018985 systemd-logind[1510]: Removed session 4. Sep 9 05:41:23.076669 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 48350 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:41:23.078256 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:23.082970 systemd-logind[1510]: New session 5 of user core. Sep 9 05:41:23.092891 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 05:41:23.151000 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 05:41:23.151400 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:41:23.176997 sudo[1732]: pam_unix(sudo:session): session closed for user root Sep 9 05:41:23.178443 sshd[1731]: Connection closed by 10.0.0.1 port 48350 Sep 9 05:41:23.178858 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:23.194423 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:48350.service: Deactivated successfully. Sep 9 05:41:23.196132 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 05:41:23.196883 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Sep 9 05:41:23.199564 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:48360.service - OpenSSH per-connection server daemon (10.0.0.1:48360). Sep 9 05:41:23.200116 systemd-logind[1510]: Removed session 5. Sep 9 05:41:23.250095 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 48360 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:41:23.251372 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:23.255637 systemd-logind[1510]: New session 6 of user core. Sep 9 05:41:23.269890 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 9 05:41:23.323942 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 05:41:23.324247 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:41:23.330998 sudo[1743]: pam_unix(sudo:session): session closed for user root Sep 9 05:41:23.337232 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 05:41:23.337607 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:41:23.347515 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:41:23.395917 augenrules[1765]: No rules Sep 9 05:41:23.397446 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:41:23.397768 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:41:23.398844 sudo[1742]: pam_unix(sudo:session): session closed for user root Sep 9 05:41:23.400194 sshd[1741]: Connection closed by 10.0.0.1 port 48360 Sep 9 05:41:23.400520 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Sep 9 05:41:23.410419 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:48360.service: Deactivated successfully. Sep 9 05:41:23.412263 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 05:41:23.413115 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. Sep 9 05:41:23.415641 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:48366.service - OpenSSH per-connection server daemon (10.0.0.1:48366). Sep 9 05:41:23.416468 systemd-logind[1510]: Removed session 6. Sep 9 05:41:23.466088 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 48366 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:41:23.467419 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:41:23.472135 systemd-logind[1510]: New session 7 of user core. 
Sep 9 05:41:23.485852 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 05:41:23.540800 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 05:41:23.541170 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:41:23.851307 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 05:41:23.873288 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 05:41:24.108445 dockerd[1799]: time="2025-09-09T05:41:24.108304205Z" level=info msg="Starting up" Sep 9 05:41:24.109237 dockerd[1799]: time="2025-09-09T05:41:24.109209082Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 05:41:24.120534 dockerd[1799]: time="2025-09-09T05:41:24.120481226Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 05:41:24.694904 dockerd[1799]: time="2025-09-09T05:41:24.694846852Z" level=info msg="Loading containers: start." Sep 9 05:41:24.705735 kernel: Initializing XFRM netlink socket Sep 9 05:41:24.957631 systemd-networkd[1491]: docker0: Link UP Sep 9 05:41:24.963956 dockerd[1799]: time="2025-09-09T05:41:24.963799079Z" level=info msg="Loading containers: done." Sep 9 05:41:24.978937 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck139897808-merged.mount: Deactivated successfully. 
Sep 9 05:41:24.980149 dockerd[1799]: time="2025-09-09T05:41:24.980098522Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 05:41:24.980214 dockerd[1799]: time="2025-09-09T05:41:24.980195083Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 05:41:24.980305 dockerd[1799]: time="2025-09-09T05:41:24.980281936Z" level=info msg="Initializing buildkit" Sep 9 05:41:25.010460 dockerd[1799]: time="2025-09-09T05:41:25.010393547Z" level=info msg="Completed buildkit initialization" Sep 9 05:41:25.016606 dockerd[1799]: time="2025-09-09T05:41:25.016564933Z" level=info msg="Daemon has completed initialization" Sep 9 05:41:25.016728 dockerd[1799]: time="2025-09-09T05:41:25.016643670Z" level=info msg="API listen on /run/docker.sock" Sep 9 05:41:25.016898 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 05:41:25.768635 containerd[1583]: time="2025-09-09T05:41:25.768576522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 05:41:26.453179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619866710.mount: Deactivated successfully. 
Sep 9 05:41:27.691297 containerd[1583]: time="2025-09-09T05:41:27.691225501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:27.692201 containerd[1583]: time="2025-09-09T05:41:27.692169392Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 9 05:41:27.693894 containerd[1583]: time="2025-09-09T05:41:27.693826871Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:27.698595 containerd[1583]: time="2025-09-09T05:41:27.698523540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:27.699746 containerd[1583]: time="2025-09-09T05:41:27.699687935Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 1.931063813s" Sep 9 05:41:27.699798 containerd[1583]: time="2025-09-09T05:41:27.699753167Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 9 05:41:27.700684 containerd[1583]: time="2025-09-09T05:41:27.700642936Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 05:41:28.807735 containerd[1583]: time="2025-09-09T05:41:28.807662909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:28.808775 containerd[1583]: time="2025-09-09T05:41:28.808741101Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 9 05:41:28.810324 containerd[1583]: time="2025-09-09T05:41:28.810286570Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:28.813261 containerd[1583]: time="2025-09-09T05:41:28.813185367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:28.814329 containerd[1583]: time="2025-09-09T05:41:28.814269581Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.113589485s" Sep 9 05:41:28.814329 containerd[1583]: time="2025-09-09T05:41:28.814318914Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 9 05:41:28.814835 containerd[1583]: time="2025-09-09T05:41:28.814776532Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 05:41:30.196219 containerd[1583]: time="2025-09-09T05:41:30.196156837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:30.197007 containerd[1583]: time="2025-09-09T05:41:30.196973769Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 9 05:41:30.197967 containerd[1583]: time="2025-09-09T05:41:30.197941514Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:30.200124 containerd[1583]: time="2025-09-09T05:41:30.200087930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:30.200845 containerd[1583]: time="2025-09-09T05:41:30.200809514Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.385980533s" Sep 9 05:41:30.200845 containerd[1583]: time="2025-09-09T05:41:30.200834491Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 9 05:41:30.201579 containerd[1583]: time="2025-09-09T05:41:30.201549742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 05:41:31.203929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766735746.mount: Deactivated successfully. 
Sep 9 05:41:32.192227 containerd[1583]: time="2025-09-09T05:41:32.192112960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:32.204173 containerd[1583]: time="2025-09-09T05:41:32.204085186Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 9 05:41:32.212073 containerd[1583]: time="2025-09-09T05:41:32.212024248Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:32.228571 containerd[1583]: time="2025-09-09T05:41:32.228511343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:32.229278 containerd[1583]: time="2025-09-09T05:41:32.229236152Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.027656053s" Sep 9 05:41:32.229322 containerd[1583]: time="2025-09-09T05:41:32.229275536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 9 05:41:32.229916 containerd[1583]: time="2025-09-09T05:41:32.229872656Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 05:41:32.422314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 05:41:32.424223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 9 05:41:32.637672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:41:32.642286 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:41:32.824962 kubelet[2098]: E0909 05:41:32.824881 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:41:32.831829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:41:32.832093 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:41:32.832588 systemd[1]: kubelet.service: Consumed 377ms CPU time, 111.1M memory peak. Sep 9 05:41:33.146831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789302507.mount: Deactivated successfully. 
Sep 9 05:41:33.908150 containerd[1583]: time="2025-09-09T05:41:33.908045769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:33.908974 containerd[1583]: time="2025-09-09T05:41:33.908748958Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 9 05:41:33.909984 containerd[1583]: time="2025-09-09T05:41:33.909912701Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:33.912785 containerd[1583]: time="2025-09-09T05:41:33.912741307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:33.913566 containerd[1583]: time="2025-09-09T05:41:33.913503977Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.683572641s" Sep 9 05:41:33.913566 containerd[1583]: time="2025-09-09T05:41:33.913550945Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 9 05:41:33.914130 containerd[1583]: time="2025-09-09T05:41:33.914079437Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 05:41:34.367230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046353770.mount: Deactivated successfully. 
Sep 9 05:41:34.373614 containerd[1583]: time="2025-09-09T05:41:34.373512462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:41:34.374215 containerd[1583]: time="2025-09-09T05:41:34.374181396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 05:41:34.375339 containerd[1583]: time="2025-09-09T05:41:34.375306066Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:41:34.377429 containerd[1583]: time="2025-09-09T05:41:34.377390326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:41:34.378025 containerd[1583]: time="2025-09-09T05:41:34.377977948Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 463.860269ms" Sep 9 05:41:34.378025 containerd[1583]: time="2025-09-09T05:41:34.378020718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 05:41:34.378472 containerd[1583]: time="2025-09-09T05:41:34.378440896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 05:41:34.873732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3369139428.mount: Deactivated 
successfully. Sep 9 05:41:36.763090 containerd[1583]: time="2025-09-09T05:41:36.762980972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:36.763740 containerd[1583]: time="2025-09-09T05:41:36.763688409Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 9 05:41:36.766725 containerd[1583]: time="2025-09-09T05:41:36.766272486Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:36.769142 containerd[1583]: time="2025-09-09T05:41:36.769081244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:36.770115 containerd[1583]: time="2025-09-09T05:41:36.770085317Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.39161695s" Sep 9 05:41:36.770174 containerd[1583]: time="2025-09-09T05:41:36.770115795Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 9 05:41:40.879312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:41:40.879490 systemd[1]: kubelet.service: Consumed 377ms CPU time, 111.1M memory peak. Sep 9 05:41:40.882176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:41:40.907568 systemd[1]: Reload requested from client PID 2251 ('systemctl') (unit session-7.scope)... 
Sep 9 05:41:40.907586 systemd[1]: Reloading... Sep 9 05:41:40.994742 zram_generator::config[2296]: No configuration found. Sep 9 05:41:41.244148 systemd[1]: Reloading finished in 336 ms. Sep 9 05:41:41.319408 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 05:41:41.319502 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 05:41:41.319797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:41:41.319838 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.3M memory peak. Sep 9 05:41:41.321273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:41:41.531749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:41:41.547004 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:41:41.588587 kubelet[2341]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:41:41.588587 kubelet[2341]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:41:41.588587 kubelet[2341]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 05:41:41.589101 kubelet[2341]: I0909 05:41:41.588605 2341 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:41:43.395026 kubelet[2341]: I0909 05:41:43.394959 2341 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 05:41:43.395026 kubelet[2341]: I0909 05:41:43.394993 2341 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:41:43.395601 kubelet[2341]: I0909 05:41:43.395211 2341 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 05:41:43.415367 kubelet[2341]: E0909 05:41:43.415315 2341 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 05:41:43.417782 kubelet[2341]: I0909 05:41:43.417735 2341 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:41:43.428759 kubelet[2341]: I0909 05:41:43.425810 2341 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:41:43.433288 kubelet[2341]: I0909 05:41:43.433242 2341 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:41:43.433670 kubelet[2341]: I0909 05:41:43.433621 2341 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:41:43.433890 kubelet[2341]: I0909 05:41:43.433655 2341 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:41:43.433890 kubelet[2341]: I0909 05:41:43.433893 2341 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:41:43.434107 
kubelet[2341]: I0909 05:41:43.433904 2341 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 05:41:43.434107 kubelet[2341]: I0909 05:41:43.434071 2341 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:41:43.436426 kubelet[2341]: I0909 05:41:43.436371 2341 kubelet.go:480] "Attempting to sync node with API server" Sep 9 05:41:43.436426 kubelet[2341]: I0909 05:41:43.436408 2341 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:41:43.436608 kubelet[2341]: I0909 05:41:43.436450 2341 kubelet.go:386] "Adding apiserver pod source" Sep 9 05:41:43.436608 kubelet[2341]: I0909 05:41:43.436468 2341 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:41:43.441996 kubelet[2341]: I0909 05:41:43.441951 2341 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:41:43.442482 kubelet[2341]: I0909 05:41:43.442447 2341 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 05:41:43.444287 kubelet[2341]: W0909 05:41:43.444246 2341 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 05:41:43.446648 kubelet[2341]: E0909 05:41:43.446388 2341 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 05:41:43.446648 kubelet[2341]: E0909 05:41:43.446576 2341 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 05:41:43.447657 kubelet[2341]: I0909 05:41:43.447637 2341 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:41:43.447750 kubelet[2341]: I0909 05:41:43.447686 2341 server.go:1289] "Started kubelet" Sep 9 05:41:43.449289 kubelet[2341]: I0909 05:41:43.449261 2341 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:41:43.449365 kubelet[2341]: I0909 05:41:43.449303 2341 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:41:43.450069 kubelet[2341]: I0909 05:41:43.450013 2341 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:41:43.450334 kubelet[2341]: I0909 05:41:43.450313 2341 server.go:317] "Adding debug handlers to kubelet server" Sep 9 05:41:43.450390 kubelet[2341]: I0909 05:41:43.450340 2341 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:41:43.452844 kubelet[2341]: I0909 05:41:43.452809 2341 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:41:43.455738 kubelet[2341]: E0909 
05:41:43.454758 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:41:43.455738 kubelet[2341]: I0909 05:41:43.454851 2341 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:41:43.455738 kubelet[2341]: I0909 05:41:43.455378 2341 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:41:43.455738 kubelet[2341]: I0909 05:41:43.455483 2341 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:41:43.456245 kubelet[2341]: E0909 05:41:43.456223 2341 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 05:41:43.456727 kubelet[2341]: I0909 05:41:43.456671 2341 factory.go:223] Registration of the systemd container factory successfully Sep 9 05:41:43.456855 kubelet[2341]: I0909 05:41:43.456820 2341 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:41:43.457008 kubelet[2341]: E0909 05:41:43.456961 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" Sep 9 05:41:43.457247 kubelet[2341]: E0909 05:41:43.452047 2341 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186386cf0d82cfea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 05:41:43.447654378 +0000 UTC m=+1.896724373,LastTimestamp:2025-09-09 05:41:43.447654378 +0000 UTC m=+1.896724373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 05:41:43.457680 kubelet[2341]: E0909 05:41:43.457652 2341 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:41:43.458871 kubelet[2341]: I0909 05:41:43.458848 2341 factory.go:223] Registration of the containerd container factory successfully Sep 9 05:41:43.474977 kubelet[2341]: I0909 05:41:43.474179 2341 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:41:43.474977 kubelet[2341]: I0909 05:41:43.474201 2341 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:41:43.474977 kubelet[2341]: I0909 05:41:43.474217 2341 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:41:43.477129 kubelet[2341]: I0909 05:41:43.477108 2341 policy_none.go:49] "None policy: Start" Sep 9 05:41:43.477186 kubelet[2341]: I0909 05:41:43.477135 2341 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:41:43.477186 kubelet[2341]: I0909 05:41:43.477149 2341 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:41:43.478182 kubelet[2341]: I0909 05:41:43.478145 2341 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 05:41:43.479638 kubelet[2341]: I0909 05:41:43.479621 2341 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 9 05:41:43.479696 kubelet[2341]: I0909 05:41:43.479651 2341 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 05:41:43.479696 kubelet[2341]: I0909 05:41:43.479670 2341 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 05:41:43.479696 kubelet[2341]: I0909 05:41:43.479678 2341 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 05:41:43.479967 kubelet[2341]: E0909 05:41:43.479729 2341 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:41:43.480304 kubelet[2341]: E0909 05:41:43.480276 2341 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 05:41:43.486162 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 05:41:43.507190 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 05:41:43.530388 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 9 05:41:43.532020 kubelet[2341]: E0909 05:41:43.531984 2341 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 05:41:43.532263 kubelet[2341]: I0909 05:41:43.532233 2341 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:41:43.532316 kubelet[2341]: I0909 05:41:43.532248 2341 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:41:43.532563 kubelet[2341]: I0909 05:41:43.532444 2341 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:41:43.533347 kubelet[2341]: E0909 05:41:43.533327 2341 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 05:41:43.533395 kubelet[2341]: E0909 05:41:43.533367 2341 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 05:41:43.594756 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 05:41:43.617261 kubelet[2341]: E0909 05:41:43.617209 2341 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:41:43.618107 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 9 05:41:43.634440 kubelet[2341]: I0909 05:41:43.634398 2341 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:41:43.634935 kubelet[2341]: E0909 05:41:43.634887 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 9 05:41:43.635861 kubelet[2341]: E0909 05:41:43.635823 2341 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:41:43.639574 systemd[1]: Created slice kubepods-burstable-pod3135ca76ca8e399343e240caa46a1438.slice - libcontainer container kubepods-burstable-pod3135ca76ca8e399343e240caa46a1438.slice. Sep 9 05:41:43.641795 kubelet[2341]: E0909 05:41:43.641758 2341 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:41:43.657466 kubelet[2341]: E0909 05:41:43.657325 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" Sep 9 05:41:43.756854 kubelet[2341]: I0909 05:41:43.756692 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:43.756854 kubelet[2341]: I0909 05:41:43.756774 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:43.756854 kubelet[2341]: I0909 05:41:43.756811 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:43.757113 kubelet[2341]: I0909 05:41:43.756878 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:43.757113 kubelet[2341]: I0909 05:41:43.756903 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:43.757113 kubelet[2341]: I0909 05:41:43.756928 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:43.757113 kubelet[2341]: I0909 05:41:43.756967 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 05:41:43.757113 kubelet[2341]: I0909 05:41:43.756991 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:43.757281 kubelet[2341]: I0909 05:41:43.757008 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:43.837376 kubelet[2341]: I0909 05:41:43.837327 2341 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:41:43.837841 kubelet[2341]: E0909 05:41:43.837775 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 9 05:41:43.917977 kubelet[2341]: E0909 05:41:43.917784 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:43.918686 containerd[1583]: time="2025-09-09T05:41:43.918638199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 05:41:43.937738 kubelet[2341]: E0909 05:41:43.936927 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:43.937900 containerd[1583]: time="2025-09-09T05:41:43.937456647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 05:41:43.942246 kubelet[2341]: E0909 05:41:43.942189 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:43.943269 containerd[1583]: time="2025-09-09T05:41:43.943224526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3135ca76ca8e399343e240caa46a1438,Namespace:kube-system,Attempt:0,}" Sep 9 05:41:43.946236 containerd[1583]: time="2025-09-09T05:41:43.946157087Z" level=info msg="connecting to shim 8074f8a8ca88481d74d3b4ea3f65d6dc479366beab64ef3e2024c5574af1c97c" address="unix:///run/containerd/s/e4023f770c53de1fc9b3dcecafe548be45fd43ed5e1fda20c5a0396283c553c4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:41:43.973738 containerd[1583]: time="2025-09-09T05:41:43.973650547Z" level=info msg="connecting to shim f0077db0cd4e2ce19d1c0b2ad67a62d38099fe8cf99dcb78396bda182504bd2a" address="unix:///run/containerd/s/427ecff1719936138044d264c0d59f49b7d75e2a7718a21abe33e1e09512071f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:41:43.980059 systemd[1]: Started cri-containerd-8074f8a8ca88481d74d3b4ea3f65d6dc479366beab64ef3e2024c5574af1c97c.scope - libcontainer container 8074f8a8ca88481d74d3b4ea3f65d6dc479366beab64ef3e2024c5574af1c97c. 
Sep 9 05:41:43.980324 containerd[1583]: time="2025-09-09T05:41:43.980063185Z" level=info msg="connecting to shim e9d63133006e43a059c5452f2a479da99e9afee35d8e44bc9c331702005968e1" address="unix:///run/containerd/s/ad7d151278cdc2345ad8b5426b9951a2fe8ee1c355ca5c02fa2fcec32d032a5b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:41:44.010950 systemd[1]: Started cri-containerd-f0077db0cd4e2ce19d1c0b2ad67a62d38099fe8cf99dcb78396bda182504bd2a.scope - libcontainer container f0077db0cd4e2ce19d1c0b2ad67a62d38099fe8cf99dcb78396bda182504bd2a. Sep 9 05:41:44.016130 systemd[1]: Started cri-containerd-e9d63133006e43a059c5452f2a479da99e9afee35d8e44bc9c331702005968e1.scope - libcontainer container e9d63133006e43a059c5452f2a479da99e9afee35d8e44bc9c331702005968e1. Sep 9 05:41:44.049363 containerd[1583]: time="2025-09-09T05:41:44.049305173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"8074f8a8ca88481d74d3b4ea3f65d6dc479366beab64ef3e2024c5574af1c97c\"" Sep 9 05:41:44.050528 kubelet[2341]: E0909 05:41:44.050493 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:44.056668 containerd[1583]: time="2025-09-09T05:41:44.056598974Z" level=info msg="CreateContainer within sandbox \"8074f8a8ca88481d74d3b4ea3f65d6dc479366beab64ef3e2024c5574af1c97c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 05:41:44.058483 kubelet[2341]: E0909 05:41:44.058136 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Sep 9 05:41:44.072188 containerd[1583]: time="2025-09-09T05:41:44.072149943Z" 
level=info msg="Container 967a8469be9897f8ed9486cd26a148514338ccd8f02d332b3a8d32e6a1e90964: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:41:44.074189 containerd[1583]: time="2025-09-09T05:41:44.074144064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3135ca76ca8e399343e240caa46a1438,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9d63133006e43a059c5452f2a479da99e9afee35d8e44bc9c331702005968e1\"" Sep 9 05:41:44.075013 kubelet[2341]: E0909 05:41:44.074993 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:44.075471 containerd[1583]: time="2025-09-09T05:41:44.075430387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0077db0cd4e2ce19d1c0b2ad67a62d38099fe8cf99dcb78396bda182504bd2a\"" Sep 9 05:41:44.076732 kubelet[2341]: E0909 05:41:44.076431 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:44.080940 containerd[1583]: time="2025-09-09T05:41:44.080883876Z" level=info msg="CreateContainer within sandbox \"e9d63133006e43a059c5452f2a479da99e9afee35d8e44bc9c331702005968e1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 05:41:44.083576 containerd[1583]: time="2025-09-09T05:41:44.083534688Z" level=info msg="CreateContainer within sandbox \"8074f8a8ca88481d74d3b4ea3f65d6dc479366beab64ef3e2024c5574af1c97c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"967a8469be9897f8ed9486cd26a148514338ccd8f02d332b3a8d32e6a1e90964\"" Sep 9 05:41:44.084049 containerd[1583]: time="2025-09-09T05:41:44.084026861Z" level=info msg="StartContainer for 
\"967a8469be9897f8ed9486cd26a148514338ccd8f02d332b3a8d32e6a1e90964\"" Sep 9 05:41:44.085187 containerd[1583]: time="2025-09-09T05:41:44.085152483Z" level=info msg="connecting to shim 967a8469be9897f8ed9486cd26a148514338ccd8f02d332b3a8d32e6a1e90964" address="unix:///run/containerd/s/e4023f770c53de1fc9b3dcecafe548be45fd43ed5e1fda20c5a0396283c553c4" protocol=ttrpc version=3 Sep 9 05:41:44.085909 containerd[1583]: time="2025-09-09T05:41:44.085875378Z" level=info msg="CreateContainer within sandbox \"f0077db0cd4e2ce19d1c0b2ad67a62d38099fe8cf99dcb78396bda182504bd2a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 05:41:44.092769 containerd[1583]: time="2025-09-09T05:41:44.092049038Z" level=info msg="Container 8a1b0f26283629d3b7a31908f707fe0e09e3b624f2cda22d9ca19b5160a4b956: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:41:44.099753 containerd[1583]: time="2025-09-09T05:41:44.098780304Z" level=info msg="Container 51e9cc52e7523d05210754716896321fc582eef23e3612afd83f49248ddf5aa9: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:41:44.106729 containerd[1583]: time="2025-09-09T05:41:44.106666447Z" level=info msg="CreateContainer within sandbox \"e9d63133006e43a059c5452f2a479da99e9afee35d8e44bc9c331702005968e1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8a1b0f26283629d3b7a31908f707fe0e09e3b624f2cda22d9ca19b5160a4b956\"" Sep 9 05:41:44.107487 containerd[1583]: time="2025-09-09T05:41:44.107433024Z" level=info msg="StartContainer for \"8a1b0f26283629d3b7a31908f707fe0e09e3b624f2cda22d9ca19b5160a4b956\"" Sep 9 05:41:44.108480 containerd[1583]: time="2025-09-09T05:41:44.108456123Z" level=info msg="connecting to shim 8a1b0f26283629d3b7a31908f707fe0e09e3b624f2cda22d9ca19b5160a4b956" address="unix:///run/containerd/s/ad7d151278cdc2345ad8b5426b9951a2fe8ee1c355ca5c02fa2fcec32d032a5b" protocol=ttrpc version=3 Sep 9 05:41:44.109965 systemd[1]: Started 
cri-containerd-967a8469be9897f8ed9486cd26a148514338ccd8f02d332b3a8d32e6a1e90964.scope - libcontainer container 967a8469be9897f8ed9486cd26a148514338ccd8f02d332b3a8d32e6a1e90964. Sep 9 05:41:44.110191 containerd[1583]: time="2025-09-09T05:41:44.110154048Z" level=info msg="CreateContainer within sandbox \"f0077db0cd4e2ce19d1c0b2ad67a62d38099fe8cf99dcb78396bda182504bd2a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"51e9cc52e7523d05210754716896321fc582eef23e3612afd83f49248ddf5aa9\"" Sep 9 05:41:44.111172 containerd[1583]: time="2025-09-09T05:41:44.111146350Z" level=info msg="StartContainer for \"51e9cc52e7523d05210754716896321fc582eef23e3612afd83f49248ddf5aa9\"" Sep 9 05:41:44.112263 containerd[1583]: time="2025-09-09T05:41:44.112224322Z" level=info msg="connecting to shim 51e9cc52e7523d05210754716896321fc582eef23e3612afd83f49248ddf5aa9" address="unix:///run/containerd/s/427ecff1719936138044d264c0d59f49b7d75e2a7718a21abe33e1e09512071f" protocol=ttrpc version=3 Sep 9 05:41:44.142927 systemd[1]: Started cri-containerd-8a1b0f26283629d3b7a31908f707fe0e09e3b624f2cda22d9ca19b5160a4b956.scope - libcontainer container 8a1b0f26283629d3b7a31908f707fe0e09e3b624f2cda22d9ca19b5160a4b956. Sep 9 05:41:44.157890 systemd[1]: Started cri-containerd-51e9cc52e7523d05210754716896321fc582eef23e3612afd83f49248ddf5aa9.scope - libcontainer container 51e9cc52e7523d05210754716896321fc582eef23e3612afd83f49248ddf5aa9. 
Sep 9 05:41:44.216445 containerd[1583]: time="2025-09-09T05:41:44.216395858Z" level=info msg="StartContainer for \"967a8469be9897f8ed9486cd26a148514338ccd8f02d332b3a8d32e6a1e90964\" returns successfully" Sep 9 05:41:44.220736 containerd[1583]: time="2025-09-09T05:41:44.219840338Z" level=info msg="StartContainer for \"51e9cc52e7523d05210754716896321fc582eef23e3612afd83f49248ddf5aa9\" returns successfully" Sep 9 05:41:44.220736 containerd[1583]: time="2025-09-09T05:41:44.220472865Z" level=info msg="StartContainer for \"8a1b0f26283629d3b7a31908f707fe0e09e3b624f2cda22d9ca19b5160a4b956\" returns successfully" Sep 9 05:41:44.240940 kubelet[2341]: I0909 05:41:44.240876 2341 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:41:44.241216 kubelet[2341]: E0909 05:41:44.241183 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Sep 9 05:41:44.487534 kubelet[2341]: E0909 05:41:44.487364 2341 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:41:44.487534 kubelet[2341]: E0909 05:41:44.487506 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:44.490170 kubelet[2341]: E0909 05:41:44.490136 2341 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:41:44.491724 kubelet[2341]: E0909 05:41:44.490257 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:44.493126 kubelet[2341]: E0909 05:41:44.493095 2341 kubelet.go:3305] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:41:44.493237 kubelet[2341]: E0909 05:41:44.493208 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:45.044420 kubelet[2341]: I0909 05:41:45.044356 2341 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:41:45.497746 kubelet[2341]: E0909 05:41:45.497696 2341 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:41:45.498737 kubelet[2341]: E0909 05:41:45.498312 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:45.499111 kubelet[2341]: E0909 05:41:45.499005 2341 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:41:45.499211 kubelet[2341]: E0909 05:41:45.499197 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:45.707613 kubelet[2341]: E0909 05:41:45.707557 2341 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 05:41:45.918419 kubelet[2341]: I0909 05:41:45.918002 2341 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 05:41:45.918419 kubelet[2341]: E0909 05:41:45.918045 2341 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 05:41:45.957119 kubelet[2341]: I0909 05:41:45.957074 
2341 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:45.977491 kubelet[2341]: E0909 05:41:45.977437 2341 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:45.977491 kubelet[2341]: I0909 05:41:45.977474 2341 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:45.979634 kubelet[2341]: E0909 05:41:45.979613 2341 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:45.979959 kubelet[2341]: I0909 05:41:45.979634 2341 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:41:45.981526 kubelet[2341]: E0909 05:41:45.981477 2341 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 05:41:46.439196 kubelet[2341]: I0909 05:41:46.439097 2341 apiserver.go:52] "Watching apiserver" Sep 9 05:41:46.456417 kubelet[2341]: I0909 05:41:46.456323 2341 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:41:51.193598 kubelet[2341]: I0909 05:41:51.193528 2341 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:51.304432 kubelet[2341]: E0909 05:41:51.304387 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:51.421773 systemd[1]: Reload requested from client PID 2625 
('systemctl') (unit session-7.scope)... Sep 9 05:41:51.421794 systemd[1]: Reloading... Sep 9 05:41:51.504803 kubelet[2341]: E0909 05:41:51.504595 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:51.505731 zram_generator::config[2667]: No configuration found. Sep 9 05:41:51.695480 kubelet[2341]: I0909 05:41:51.695432 2341 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:51.704222 kubelet[2341]: E0909 05:41:51.704159 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:51.756208 systemd[1]: Reloading finished in 333 ms. Sep 9 05:41:51.795860 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:41:51.815089 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 05:41:51.815403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:41:51.815465 systemd[1]: kubelet.service: Consumed 1.826s CPU time, 132.6M memory peak. Sep 9 05:41:51.817361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:41:52.041686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:41:52.050127 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:41:52.092831 kubelet[2713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:41:52.092831 kubelet[2713]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 9 05:41:52.092831 kubelet[2713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:41:52.093250 kubelet[2713]: I0909 05:41:52.092875 2713 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:41:52.101194 kubelet[2713]: I0909 05:41:52.101132 2713 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 05:41:52.101194 kubelet[2713]: I0909 05:41:52.101168 2713 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:41:52.101467 kubelet[2713]: I0909 05:41:52.101441 2713 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 05:41:52.102755 kubelet[2713]: I0909 05:41:52.102704 2713 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 05:41:52.104799 kubelet[2713]: I0909 05:41:52.104759 2713 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:41:52.108420 kubelet[2713]: I0909 05:41:52.108395 2713 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:41:52.113621 kubelet[2713]: I0909 05:41:52.113583 2713 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:41:52.113884 kubelet[2713]: I0909 05:41:52.113852 2713 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:41:52.114026 kubelet[2713]: I0909 05:41:52.113883 2713 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:41:52.114105 kubelet[2713]: I0909 05:41:52.114031 2713 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:41:52.114105 
kubelet[2713]: I0909 05:41:52.114040 2713 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 05:41:52.114179 kubelet[2713]: I0909 05:41:52.114164 2713 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:41:52.114365 kubelet[2713]: I0909 05:41:52.114351 2713 kubelet.go:480] "Attempting to sync node with API server" Sep 9 05:41:52.114406 kubelet[2713]: I0909 05:41:52.114371 2713 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:41:52.114502 kubelet[2713]: I0909 05:41:52.114489 2713 kubelet.go:386] "Adding apiserver pod source" Sep 9 05:41:52.114539 kubelet[2713]: I0909 05:41:52.114511 2713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:41:52.120726 kubelet[2713]: I0909 05:41:52.118758 2713 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:41:52.120726 kubelet[2713]: I0909 05:41:52.119383 2713 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 05:41:52.123348 kubelet[2713]: I0909 05:41:52.123331 2713 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:41:52.123452 kubelet[2713]: I0909 05:41:52.123442 2713 server.go:1289] "Started kubelet" Sep 9 05:41:52.123898 kubelet[2713]: I0909 05:41:52.123841 2713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:41:52.124250 kubelet[2713]: I0909 05:41:52.124226 2713 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:41:52.124321 kubelet[2713]: I0909 05:41:52.124283 2713 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:41:52.125008 kubelet[2713]: I0909 05:41:52.124994 2713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:41:52.125327 kubelet[2713]: I0909 
05:41:52.125137 2713 server.go:317] "Adding debug handlers to kubelet server" Sep 9 05:41:52.126306 kubelet[2713]: I0909 05:41:52.126278 2713 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:41:52.126699 kubelet[2713]: I0909 05:41:52.126663 2713 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:41:52.126874 kubelet[2713]: I0909 05:41:52.126853 2713 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:41:52.127022 kubelet[2713]: I0909 05:41:52.127004 2713 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:41:52.128183 kubelet[2713]: I0909 05:41:52.128168 2713 factory.go:223] Registration of the systemd container factory successfully Sep 9 05:41:52.128346 kubelet[2713]: I0909 05:41:52.128329 2713 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:41:52.129568 kubelet[2713]: E0909 05:41:52.129539 2713 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:41:52.131204 kubelet[2713]: I0909 05:41:52.131175 2713 factory.go:223] Registration of the containerd container factory successfully Sep 9 05:41:52.142467 kubelet[2713]: I0909 05:41:52.142412 2713 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 05:41:52.143584 kubelet[2713]: I0909 05:41:52.143559 2713 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 9 05:41:52.143584 kubelet[2713]: I0909 05:41:52.143579 2713 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 05:41:52.143654 kubelet[2713]: I0909 05:41:52.143609 2713 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 05:41:52.143654 kubelet[2713]: I0909 05:41:52.143623 2713 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 05:41:52.143697 kubelet[2713]: E0909 05:41:52.143673 2713 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:41:52.190759 kubelet[2713]: I0909 05:41:52.190706 2713 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:41:52.190759 kubelet[2713]: I0909 05:41:52.190740 2713 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:41:52.190759 kubelet[2713]: I0909 05:41:52.190759 2713 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:41:52.190954 kubelet[2713]: I0909 05:41:52.190884 2713 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 05:41:52.190954 kubelet[2713]: I0909 05:41:52.190901 2713 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 05:41:52.190954 kubelet[2713]: I0909 05:41:52.190917 2713 policy_none.go:49] "None policy: Start" Sep 9 05:41:52.190954 kubelet[2713]: I0909 05:41:52.190925 2713 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:41:52.190954 kubelet[2713]: I0909 05:41:52.190934 2713 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:41:52.191057 kubelet[2713]: I0909 05:41:52.191013 2713 state_mem.go:75] "Updated machine memory state" Sep 9 05:41:52.195290 kubelet[2713]: E0909 05:41:52.195106 2713 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 05:41:52.196019 kubelet[2713]: I0909 05:41:52.195967 
2713 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:41:52.196019 kubelet[2713]: I0909 05:41:52.195985 2713 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:41:52.196276 kubelet[2713]: I0909 05:41:52.196252 2713 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:41:52.197949 kubelet[2713]: E0909 05:41:52.197921 2713 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 05:41:52.245386 kubelet[2713]: I0909 05:41:52.245090 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:41:52.245386 kubelet[2713]: I0909 05:41:52.245204 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:52.245386 kubelet[2713]: I0909 05:41:52.245233 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:52.305334 kubelet[2713]: I0909 05:41:52.305208 2713 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:41:52.307895 kubelet[2713]: E0909 05:41:52.307846 2713 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:52.308142 kubelet[2713]: E0909 05:41:52.307985 2713 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:52.428283 kubelet[2713]: I0909 05:41:52.428232 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:52.428465 kubelet[2713]: I0909 05:41:52.428311 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:52.428465 kubelet[2713]: I0909 05:41:52.428335 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:52.428465 kubelet[2713]: I0909 05:41:52.428384 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:52.428541 kubelet[2713]: I0909 05:41:52.428482 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:52.428584 kubelet[2713]: I0909 05:41:52.428557 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:41:52.428622 kubelet[2713]: I0909 05:41:52.428585 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 05:41:52.428622 kubelet[2713]: I0909 05:41:52.428603 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:52.428670 kubelet[2713]: I0909 05:41:52.428660 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3135ca76ca8e399343e240caa46a1438-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3135ca76ca8e399343e240caa46a1438\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:52.560777 kubelet[2713]: I0909 05:41:52.560015 2713 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 05:41:52.560777 kubelet[2713]: I0909 05:41:52.560257 2713 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 05:41:52.600934 kubelet[2713]: E0909 05:41:52.600893 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:52.610087 kubelet[2713]: E0909 05:41:52.609274 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:52.610087 kubelet[2713]: E0909 05:41:52.609177 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:53.116828 kubelet[2713]: I0909 05:41:53.116774 2713 apiserver.go:52] "Watching apiserver" Sep 9 05:41:53.127911 kubelet[2713]: I0909 05:41:53.127859 2713 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:41:53.180376 kubelet[2713]: I0909 05:41:53.180139 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:41:53.180376 kubelet[2713]: E0909 05:41:53.180317 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:53.180702 kubelet[2713]: I0909 05:41:53.180685 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:53.208126 kubelet[2713]: E0909 05:41:53.208088 2713 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 05:41:53.208304 kubelet[2713]: E0909 05:41:53.208229 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:53.234172 kubelet[2713]: E0909 05:41:53.234051 2713 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 05:41:53.234727 kubelet[2713]: E0909 05:41:53.234505 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 
05:41:53.234819 kubelet[2713]: I0909 05:41:53.234779 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.234768457 podStartE2EDuration="2.234768457s" podCreationTimestamp="2025-09-09 05:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:41:53.234654679 +0000 UTC m=+1.180300644" watchObservedRunningTime="2025-09-09 05:41:53.234768457 +0000 UTC m=+1.180414422" Sep 9 05:41:53.300393 kubelet[2713]: I0909 05:41:53.300336 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.300319147 podStartE2EDuration="1.300319147s" podCreationTimestamp="2025-09-09 05:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:41:53.300271866 +0000 UTC m=+1.245917831" watchObservedRunningTime="2025-09-09 05:41:53.300319147 +0000 UTC m=+1.245965112" Sep 9 05:41:53.369950 kubelet[2713]: I0909 05:41:53.369806 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.369786817 podStartE2EDuration="2.369786817s" podCreationTimestamp="2025-09-09 05:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:41:53.369757671 +0000 UTC m=+1.315403636" watchObservedRunningTime="2025-09-09 05:41:53.369786817 +0000 UTC m=+1.315432782" Sep 9 05:41:54.182140 kubelet[2713]: E0909 05:41:54.182098 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:54.182621 kubelet[2713]: E0909 05:41:54.182598 2713 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:55.974793 kubelet[2713]: I0909 05:41:55.974750 2713 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 05:41:55.975296 containerd[1583]: time="2025-09-09T05:41:55.975150806Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 05:41:55.975570 kubelet[2713]: I0909 05:41:55.975332 2713 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 05:41:57.091504 systemd[1]: Created slice kubepods-besteffort-pod02280aab_0f58_46c5_923f_471def115309.slice - libcontainer container kubepods-besteffort-pod02280aab_0f58_46c5_923f_471def115309.slice. Sep 9 05:41:57.160237 kubelet[2713]: I0909 05:41:57.160178 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02280aab-0f58-46c5-923f-471def115309-kube-proxy\") pod \"kube-proxy-9gtrd\" (UID: \"02280aab-0f58-46c5-923f-471def115309\") " pod="kube-system/kube-proxy-9gtrd" Sep 9 05:41:57.160237 kubelet[2713]: I0909 05:41:57.160230 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02280aab-0f58-46c5-923f-471def115309-xtables-lock\") pod \"kube-proxy-9gtrd\" (UID: \"02280aab-0f58-46c5-923f-471def115309\") " pod="kube-system/kube-proxy-9gtrd" Sep 9 05:41:57.160830 kubelet[2713]: I0909 05:41:57.160252 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02280aab-0f58-46c5-923f-471def115309-lib-modules\") pod \"kube-proxy-9gtrd\" (UID: \"02280aab-0f58-46c5-923f-471def115309\") " pod="kube-system/kube-proxy-9gtrd" 
Sep 9 05:41:57.160830 kubelet[2713]: I0909 05:41:57.160272 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rm8r\" (UniqueName: \"kubernetes.io/projected/02280aab-0f58-46c5-923f-471def115309-kube-api-access-2rm8r\") pod \"kube-proxy-9gtrd\" (UID: \"02280aab-0f58-46c5-923f-471def115309\") " pod="kube-system/kube-proxy-9gtrd" Sep 9 05:41:57.200952 systemd[1]: Created slice kubepods-besteffort-pod40f56dbe_d9ce_4e07_8ab6_67c46e4f41c9.slice - libcontainer container kubepods-besteffort-pod40f56dbe_d9ce_4e07_8ab6_67c46e4f41c9.slice. Sep 9 05:41:57.261287 kubelet[2713]: I0909 05:41:57.261193 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40f56dbe-d9ce-4e07-8ab6-67c46e4f41c9-var-lib-calico\") pod \"tigera-operator-755d956888-rlqh4\" (UID: \"40f56dbe-d9ce-4e07-8ab6-67c46e4f41c9\") " pod="tigera-operator/tigera-operator-755d956888-rlqh4" Sep 9 05:41:57.261287 kubelet[2713]: I0909 05:41:57.261229 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8v85\" (UniqueName: \"kubernetes.io/projected/40f56dbe-d9ce-4e07-8ab6-67c46e4f41c9-kube-api-access-j8v85\") pod \"tigera-operator-755d956888-rlqh4\" (UID: \"40f56dbe-d9ce-4e07-8ab6-67c46e4f41c9\") " pod="tigera-operator/tigera-operator-755d956888-rlqh4" Sep 9 05:41:57.409400 kubelet[2713]: E0909 05:41:57.409221 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:57.410010 containerd[1583]: time="2025-09-09T05:41:57.409958196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9gtrd,Uid:02280aab-0f58-46c5-923f-471def115309,Namespace:kube-system,Attempt:0,}" Sep 9 05:41:57.431531 containerd[1583]: time="2025-09-09T05:41:57.431481723Z" 
level=info msg="connecting to shim 3c3eb433f389addcd994c95ebc99871dc365b16d2513d19313c16f580cde4389" address="unix:///run/containerd/s/3224b23276bcdd7817b886fc36327ee1ab74d427f168735250e7585fa5ea89eb" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:41:57.474014 systemd[1]: Started cri-containerd-3c3eb433f389addcd994c95ebc99871dc365b16d2513d19313c16f580cde4389.scope - libcontainer container 3c3eb433f389addcd994c95ebc99871dc365b16d2513d19313c16f580cde4389. Sep 9 05:41:57.502581 containerd[1583]: time="2025-09-09T05:41:57.502529483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9gtrd,Uid:02280aab-0f58-46c5-923f-471def115309,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c3eb433f389addcd994c95ebc99871dc365b16d2513d19313c16f580cde4389\"" Sep 9 05:41:57.503406 kubelet[2713]: E0909 05:41:57.503371 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:57.504781 containerd[1583]: time="2025-09-09T05:41:57.504737330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-rlqh4,Uid:40f56dbe-d9ce-4e07-8ab6-67c46e4f41c9,Namespace:tigera-operator,Attempt:0,}" Sep 9 05:41:57.509311 containerd[1583]: time="2025-09-09T05:41:57.509275016Z" level=info msg="CreateContainer within sandbox \"3c3eb433f389addcd994c95ebc99871dc365b16d2513d19313c16f580cde4389\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 05:41:57.527455 containerd[1583]: time="2025-09-09T05:41:57.527343704Z" level=info msg="Container 2cac51f9127072639b87bdde726f0ecc00000541bbd31224a2c7ffb3b42fa69a: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:41:57.530416 containerd[1583]: time="2025-09-09T05:41:57.530365556Z" level=info msg="connecting to shim 64f69c534dc2bcc5a36ff09a92bdc307552afab4bf3f51ad02e9c3fbaab10162" 
address="unix:///run/containerd/s/af254da9e58d95217d40401456339de3d3439603a7b096d708a23f3e10d6882c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:41:57.536233 containerd[1583]: time="2025-09-09T05:41:57.536135074Z" level=info msg="CreateContainer within sandbox \"3c3eb433f389addcd994c95ebc99871dc365b16d2513d19313c16f580cde4389\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2cac51f9127072639b87bdde726f0ecc00000541bbd31224a2c7ffb3b42fa69a\"" Sep 9 05:41:57.536897 containerd[1583]: time="2025-09-09T05:41:57.536882381Z" level=info msg="StartContainer for \"2cac51f9127072639b87bdde726f0ecc00000541bbd31224a2c7ffb3b42fa69a\"" Sep 9 05:41:57.538190 containerd[1583]: time="2025-09-09T05:41:57.538169770Z" level=info msg="connecting to shim 2cac51f9127072639b87bdde726f0ecc00000541bbd31224a2c7ffb3b42fa69a" address="unix:///run/containerd/s/3224b23276bcdd7817b886fc36327ee1ab74d427f168735250e7585fa5ea89eb" protocol=ttrpc version=3 Sep 9 05:41:57.627819 kubelet[2713]: E0909 05:41:57.627778 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:57.629346 kubelet[2713]: E0909 05:41:57.629195 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:57.635900 systemd[1]: Started cri-containerd-2cac51f9127072639b87bdde726f0ecc00000541bbd31224a2c7ffb3b42fa69a.scope - libcontainer container 2cac51f9127072639b87bdde726f0ecc00000541bbd31224a2c7ffb3b42fa69a. Sep 9 05:41:57.653976 systemd[1]: Started cri-containerd-64f69c534dc2bcc5a36ff09a92bdc307552afab4bf3f51ad02e9c3fbaab10162.scope - libcontainer container 64f69c534dc2bcc5a36ff09a92bdc307552afab4bf3f51ad02e9c3fbaab10162. 
Sep 9 05:41:57.702631 containerd[1583]: time="2025-09-09T05:41:57.702578554Z" level=info msg="StartContainer for \"2cac51f9127072639b87bdde726f0ecc00000541bbd31224a2c7ffb3b42fa69a\" returns successfully" Sep 9 05:41:57.716306 containerd[1583]: time="2025-09-09T05:41:57.716246207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-rlqh4,Uid:40f56dbe-d9ce-4e07-8ab6-67c46e4f41c9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"64f69c534dc2bcc5a36ff09a92bdc307552afab4bf3f51ad02e9c3fbaab10162\"" Sep 9 05:41:57.718429 containerd[1583]: time="2025-09-09T05:41:57.718089467Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 05:41:58.189282 kubelet[2713]: E0909 05:41:58.189051 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:58.189282 kubelet[2713]: E0909 05:41:58.189284 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:58.189663 kubelet[2713]: E0909 05:41:58.189636 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:41:59.196499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143878290.mount: Deactivated successfully. 
Sep 9 05:41:59.888468 containerd[1583]: time="2025-09-09T05:41:59.888380967Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:59.889387 containerd[1583]: time="2025-09-09T05:41:59.889359783Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 05:41:59.890946 containerd[1583]: time="2025-09-09T05:41:59.890897854Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:59.893208 containerd[1583]: time="2025-09-09T05:41:59.893142491Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:41:59.893903 containerd[1583]: time="2025-09-09T05:41:59.893825804Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.175583595s" Sep 9 05:41:59.893903 containerd[1583]: time="2025-09-09T05:41:59.893873755Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 05:41:59.898576 containerd[1583]: time="2025-09-09T05:41:59.898514760Z" level=info msg="CreateContainer within sandbox \"64f69c534dc2bcc5a36ff09a92bdc307552afab4bf3f51ad02e9c3fbaab10162\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 05:41:59.915772 containerd[1583]: time="2025-09-09T05:41:59.915139822Z" level=info msg="Container 
29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:41:59.917947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876042571.mount: Deactivated successfully. Sep 9 05:41:59.924493 containerd[1583]: time="2025-09-09T05:41:59.924441769Z" level=info msg="CreateContainer within sandbox \"64f69c534dc2bcc5a36ff09a92bdc307552afab4bf3f51ad02e9c3fbaab10162\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc\"" Sep 9 05:41:59.924974 containerd[1583]: time="2025-09-09T05:41:59.924947002Z" level=info msg="StartContainer for \"29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc\"" Sep 9 05:41:59.925886 containerd[1583]: time="2025-09-09T05:41:59.925859580Z" level=info msg="connecting to shim 29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc" address="unix:///run/containerd/s/af254da9e58d95217d40401456339de3d3439603a7b096d708a23f3e10d6882c" protocol=ttrpc version=3 Sep 9 05:41:59.988905 systemd[1]: Started cri-containerd-29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc.scope - libcontainer container 29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc. 
Sep 9 05:42:00.025812 containerd[1583]: time="2025-09-09T05:42:00.025762950Z" level=info msg="StartContainer for \"29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc\" returns successfully" Sep 9 05:42:00.201697 kubelet[2713]: I0909 05:42:00.201611 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9gtrd" podStartSLOduration=3.201590386 podStartE2EDuration="3.201590386s" podCreationTimestamp="2025-09-09 05:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:41:58.212165681 +0000 UTC m=+6.157811636" watchObservedRunningTime="2025-09-09 05:42:00.201590386 +0000 UTC m=+8.147236351" Sep 9 05:42:00.202140 kubelet[2713]: I0909 05:42:00.201782 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-rlqh4" podStartSLOduration=1.0246168390000001 podStartE2EDuration="3.201774967s" podCreationTimestamp="2025-09-09 05:41:57 +0000 UTC" firstStartedPulling="2025-09-09 05:41:57.717466017 +0000 UTC m=+5.663111982" lastFinishedPulling="2025-09-09 05:41:59.894624145 +0000 UTC m=+7.840270110" observedRunningTime="2025-09-09 05:42:00.201769436 +0000 UTC m=+8.147415391" watchObservedRunningTime="2025-09-09 05:42:00.201774967 +0000 UTC m=+8.147420952" Sep 9 05:42:01.152907 kubelet[2713]: E0909 05:42:01.152846 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:01.196108 kubelet[2713]: E0909 05:42:01.196071 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:01.976261 systemd[1]: cri-containerd-29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc.scope: Deactivated successfully. 
Sep 9 05:42:01.977671 containerd[1583]: time="2025-09-09T05:42:01.977636939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc\" id:\"29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc\" pid:3050 exit_status:1 exited_at:{seconds:1757396521 nanos:977099977}" Sep 9 05:42:01.977984 containerd[1583]: time="2025-09-09T05:42:01.977960314Z" level=info msg="received exit event container_id:\"29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc\" id:\"29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc\" pid:3050 exit_status:1 exited_at:{seconds:1757396521 nanos:977099977}" Sep 9 05:42:01.999634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc-rootfs.mount: Deactivated successfully. Sep 9 05:42:03.201255 kubelet[2713]: I0909 05:42:03.201052 2713 scope.go:117] "RemoveContainer" containerID="29e9a44f298482cc54fa0b791b2380a06740c3221c1ab0e4246d64302dfbb9bc" Sep 9 05:42:03.204531 containerd[1583]: time="2025-09-09T05:42:03.203503822Z" level=info msg="CreateContainer within sandbox \"64f69c534dc2bcc5a36ff09a92bdc307552afab4bf3f51ad02e9c3fbaab10162\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 9 05:42:03.226306 containerd[1583]: time="2025-09-09T05:42:03.226232772Z" level=info msg="Container 167c361f76ae207fd786311be266e75c587889da524e9309eef59b9850fa7925: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:03.233298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699918741.mount: Deactivated successfully. 
Sep 9 05:42:03.242279 containerd[1583]: time="2025-09-09T05:42:03.242211653Z" level=info msg="CreateContainer within sandbox \"64f69c534dc2bcc5a36ff09a92bdc307552afab4bf3f51ad02e9c3fbaab10162\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"167c361f76ae207fd786311be266e75c587889da524e9309eef59b9850fa7925\"" Sep 9 05:42:03.245130 containerd[1583]: time="2025-09-09T05:42:03.245092784Z" level=info msg="StartContainer for \"167c361f76ae207fd786311be266e75c587889da524e9309eef59b9850fa7925\"" Sep 9 05:42:03.247161 containerd[1583]: time="2025-09-09T05:42:03.246808272Z" level=info msg="connecting to shim 167c361f76ae207fd786311be266e75c587889da524e9309eef59b9850fa7925" address="unix:///run/containerd/s/af254da9e58d95217d40401456339de3d3439603a7b096d708a23f3e10d6882c" protocol=ttrpc version=3 Sep 9 05:42:03.298249 systemd[1]: Started cri-containerd-167c361f76ae207fd786311be266e75c587889da524e9309eef59b9850fa7925.scope - libcontainer container 167c361f76ae207fd786311be266e75c587889da524e9309eef59b9850fa7925. Sep 9 05:42:03.337450 containerd[1583]: time="2025-09-09T05:42:03.337411387Z" level=info msg="StartContainer for \"167c361f76ae207fd786311be266e75c587889da524e9309eef59b9850fa7925\" returns successfully" Sep 9 05:42:04.250871 update_engine[1511]: I20250909 05:42:04.250742 1511 update_attempter.cc:509] Updating boot flags... Sep 9 05:42:05.603160 sudo[1778]: pam_unix(sudo:session): session closed for user root Sep 9 05:42:05.604751 sshd[1777]: Connection closed by 10.0.0.1 port 48366 Sep 9 05:42:05.605395 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Sep 9 05:42:05.609828 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:48366.service: Deactivated successfully. Sep 9 05:42:05.612458 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 05:42:05.612700 systemd[1]: session-7.scope: Consumed 6.137s CPU time, 225.5M memory peak. Sep 9 05:42:05.616700 systemd-logind[1510]: Session 7 logged out. 
Waiting for processes to exit. Sep 9 05:42:05.618130 systemd-logind[1510]: Removed session 7. Sep 9 05:42:09.254662 systemd[1]: Created slice kubepods-besteffort-pod65d9646e_0b52_476d_b3e7_3ed874ca09df.slice - libcontainer container kubepods-besteffort-pod65d9646e_0b52_476d_b3e7_3ed874ca09df.slice. Sep 9 05:42:09.348825 kubelet[2713]: I0909 05:42:09.348773 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65d9646e-0b52-476d-b3e7-3ed874ca09df-tigera-ca-bundle\") pod \"calico-typha-767cd7498-5mq8l\" (UID: \"65d9646e-0b52-476d-b3e7-3ed874ca09df\") " pod="calico-system/calico-typha-767cd7498-5mq8l" Sep 9 05:42:09.348825 kubelet[2713]: I0909 05:42:09.348822 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/65d9646e-0b52-476d-b3e7-3ed874ca09df-typha-certs\") pod \"calico-typha-767cd7498-5mq8l\" (UID: \"65d9646e-0b52-476d-b3e7-3ed874ca09df\") " pod="calico-system/calico-typha-767cd7498-5mq8l" Sep 9 05:42:09.349385 kubelet[2713]: I0909 05:42:09.348852 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtd6f\" (UniqueName: \"kubernetes.io/projected/65d9646e-0b52-476d-b3e7-3ed874ca09df-kube-api-access-jtd6f\") pod \"calico-typha-767cd7498-5mq8l\" (UID: \"65d9646e-0b52-476d-b3e7-3ed874ca09df\") " pod="calico-system/calico-typha-767cd7498-5mq8l" Sep 9 05:42:09.562408 kubelet[2713]: E0909 05:42:09.562274 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:09.562939 containerd[1583]: time="2025-09-09T05:42:09.562903341Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-767cd7498-5mq8l,Uid:65d9646e-0b52-476d-b3e7-3ed874ca09df,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:09.639917 systemd[1]: Created slice kubepods-besteffort-pod1b389e3c_b8ea_4170_9dce_5548fedbf803.slice - libcontainer container kubepods-besteffort-pod1b389e3c_b8ea_4170_9dce_5548fedbf803.slice. Sep 9 05:42:09.648604 containerd[1583]: time="2025-09-09T05:42:09.647853711Z" level=info msg="connecting to shim f29cca80123bc5fa63cd5bc878baa8f9f00ad8938687a177f7c8be9b49233f36" address="unix:///run/containerd/s/7513ef113788f77c75724476f85b51d67bd890eb800e62d838ad625880a4d5a5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:09.651386 kubelet[2713]: I0909 05:42:09.651336 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b389e3c-b8ea-4170-9dce-5548fedbf803-tigera-ca-bundle\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651386 kubelet[2713]: I0909 05:42:09.651386 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b389e3c-b8ea-4170-9dce-5548fedbf803-xtables-lock\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651600 kubelet[2713]: I0909 05:42:09.651407 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-289qh\" (UniqueName: \"kubernetes.io/projected/1b389e3c-b8ea-4170-9dce-5548fedbf803-kube-api-access-289qh\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651600 kubelet[2713]: I0909 05:42:09.651429 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1b389e3c-b8ea-4170-9dce-5548fedbf803-cni-bin-dir\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651600 kubelet[2713]: I0909 05:42:09.651447 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1b389e3c-b8ea-4170-9dce-5548fedbf803-cni-log-dir\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651600 kubelet[2713]: I0909 05:42:09.651469 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1b389e3c-b8ea-4170-9dce-5548fedbf803-node-certs\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651600 kubelet[2713]: I0909 05:42:09.651485 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1b389e3c-b8ea-4170-9dce-5548fedbf803-cni-net-dir\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651800 kubelet[2713]: I0909 05:42:09.651502 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b389e3c-b8ea-4170-9dce-5548fedbf803-lib-modules\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651800 kubelet[2713]: I0909 05:42:09.651518 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1b389e3c-b8ea-4170-9dce-5548fedbf803-policysync\") 
pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651800 kubelet[2713]: I0909 05:42:09.651534 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1b389e3c-b8ea-4170-9dce-5548fedbf803-var-lib-calico\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651800 kubelet[2713]: I0909 05:42:09.651553 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1b389e3c-b8ea-4170-9dce-5548fedbf803-flexvol-driver-host\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.651800 kubelet[2713]: I0909 05:42:09.651569 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1b389e3c-b8ea-4170-9dce-5548fedbf803-var-run-calico\") pod \"calico-node-bb656\" (UID: \"1b389e3c-b8ea-4170-9dce-5548fedbf803\") " pod="calico-system/calico-node-bb656" Sep 9 05:42:09.691928 systemd[1]: Started cri-containerd-f29cca80123bc5fa63cd5bc878baa8f9f00ad8938687a177f7c8be9b49233f36.scope - libcontainer container f29cca80123bc5fa63cd5bc878baa8f9f00ad8938687a177f7c8be9b49233f36. 
Sep 9 05:42:09.737105 containerd[1583]: time="2025-09-09T05:42:09.737061983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-767cd7498-5mq8l,Uid:65d9646e-0b52-476d-b3e7-3ed874ca09df,Namespace:calico-system,Attempt:0,} returns sandbox id \"f29cca80123bc5fa63cd5bc878baa8f9f00ad8938687a177f7c8be9b49233f36\"" Sep 9 05:42:09.737864 kubelet[2713]: E0909 05:42:09.737829 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:09.739256 containerd[1583]: time="2025-09-09T05:42:09.739220015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 05:42:09.753902 kubelet[2713]: E0909 05:42:09.753868 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.753902 kubelet[2713]: W0909 05:42:09.753891 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.756343 kubelet[2713]: E0909 05:42:09.754805 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.758390 kubelet[2713]: E0909 05:42:09.758372 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.758532 kubelet[2713]: W0909 05:42:09.758479 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.758532 kubelet[2713]: E0909 05:42:09.758502 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.762385 kubelet[2713]: E0909 05:42:09.762362 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.762385 kubelet[2713]: W0909 05:42:09.762378 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.762501 kubelet[2713]: E0909 05:42:09.762394 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.908942 kubelet[2713]: E0909 05:42:09.908379 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2qzh" podUID="27062c35-7731-4470-8714-d5d5e0fdb01b" Sep 9 05:42:09.935649 kubelet[2713]: E0909 05:42:09.935591 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.935649 kubelet[2713]: W0909 05:42:09.935615 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.935649 kubelet[2713]: E0909 05:42:09.935644 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.935921 kubelet[2713]: E0909 05:42:09.935900 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.935921 kubelet[2713]: W0909 05:42:09.935914 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.936001 kubelet[2713]: E0909 05:42:09.935925 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.936144 kubelet[2713]: E0909 05:42:09.936124 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.936144 kubelet[2713]: W0909 05:42:09.936136 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.936144 kubelet[2713]: E0909 05:42:09.936144 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.936386 kubelet[2713]: E0909 05:42:09.936370 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.936386 kubelet[2713]: W0909 05:42:09.936381 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.936518 kubelet[2713]: E0909 05:42:09.936388 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.936580 kubelet[2713]: E0909 05:42:09.936565 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.936580 kubelet[2713]: W0909 05:42:09.936575 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.936627 kubelet[2713]: E0909 05:42:09.936583 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.936787 kubelet[2713]: E0909 05:42:09.936773 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.936787 kubelet[2713]: W0909 05:42:09.936784 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.936851 kubelet[2713]: E0909 05:42:09.936791 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.936995 kubelet[2713]: E0909 05:42:09.936967 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.936995 kubelet[2713]: W0909 05:42:09.936979 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.936995 kubelet[2713]: E0909 05:42:09.936991 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.937165 kubelet[2713]: E0909 05:42:09.937151 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.937165 kubelet[2713]: W0909 05:42:09.937160 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.937215 kubelet[2713]: E0909 05:42:09.937167 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.937364 kubelet[2713]: E0909 05:42:09.937347 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.937364 kubelet[2713]: W0909 05:42:09.937358 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.937412 kubelet[2713]: E0909 05:42:09.937366 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.937531 kubelet[2713]: E0909 05:42:09.937518 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.937531 kubelet[2713]: W0909 05:42:09.937527 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.937586 kubelet[2713]: E0909 05:42:09.937535 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.937721 kubelet[2713]: E0909 05:42:09.937696 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.937762 kubelet[2713]: W0909 05:42:09.937733 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.937762 kubelet[2713]: E0909 05:42:09.937745 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.937950 kubelet[2713]: E0909 05:42:09.937930 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.937950 kubelet[2713]: W0909 05:42:09.937943 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.938011 kubelet[2713]: E0909 05:42:09.937952 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.938166 kubelet[2713]: E0909 05:42:09.938149 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.938166 kubelet[2713]: W0909 05:42:09.938160 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.938222 kubelet[2713]: E0909 05:42:09.938170 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.938349 kubelet[2713]: E0909 05:42:09.938333 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.938349 kubelet[2713]: W0909 05:42:09.938344 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.938402 kubelet[2713]: E0909 05:42:09.938352 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.938515 kubelet[2713]: E0909 05:42:09.938502 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.938515 kubelet[2713]: W0909 05:42:09.938511 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.938564 kubelet[2713]: E0909 05:42:09.938519 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.938693 kubelet[2713]: E0909 05:42:09.938678 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.938693 kubelet[2713]: W0909 05:42:09.938688 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.938769 kubelet[2713]: E0909 05:42:09.938695 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.938914 kubelet[2713]: E0909 05:42:09.938898 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.938914 kubelet[2713]: W0909 05:42:09.938908 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.938964 kubelet[2713]: E0909 05:42:09.938916 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.939080 kubelet[2713]: E0909 05:42:09.939067 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.939080 kubelet[2713]: W0909 05:42:09.939075 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.939141 kubelet[2713]: E0909 05:42:09.939083 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.939275 kubelet[2713]: E0909 05:42:09.939261 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.939275 kubelet[2713]: W0909 05:42:09.939273 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.939326 kubelet[2713]: E0909 05:42:09.939281 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.939474 kubelet[2713]: E0909 05:42:09.939458 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.939474 kubelet[2713]: W0909 05:42:09.939469 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.939539 kubelet[2713]: E0909 05:42:09.939476 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.949135 containerd[1583]: time="2025-09-09T05:42:09.949093078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bb656,Uid:1b389e3c-b8ea-4170-9dce-5548fedbf803,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:09.953890 kubelet[2713]: E0909 05:42:09.953864 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.953890 kubelet[2713]: W0909 05:42:09.953884 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.954031 kubelet[2713]: E0909 05:42:09.953901 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.954031 kubelet[2713]: I0909 05:42:09.953928 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27062c35-7731-4470-8714-d5d5e0fdb01b-socket-dir\") pod \"csi-node-driver-l2qzh\" (UID: \"27062c35-7731-4470-8714-d5d5e0fdb01b\") " pod="calico-system/csi-node-driver-l2qzh" Sep 9 05:42:09.954301 kubelet[2713]: E0909 05:42:09.954266 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.954301 kubelet[2713]: W0909 05:42:09.954276 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.954301 kubelet[2713]: E0909 05:42:09.954285 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.954470 kubelet[2713]: I0909 05:42:09.954449 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxshs\" (UniqueName: \"kubernetes.io/projected/27062c35-7731-4470-8714-d5d5e0fdb01b-kube-api-access-nxshs\") pod \"csi-node-driver-l2qzh\" (UID: \"27062c35-7731-4470-8714-d5d5e0fdb01b\") " pod="calico-system/csi-node-driver-l2qzh" Sep 9 05:42:09.954734 kubelet[2713]: E0909 05:42:09.954665 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.954734 kubelet[2713]: W0909 05:42:09.954674 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.954734 kubelet[2713]: E0909 05:42:09.954682 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.954939 kubelet[2713]: E0909 05:42:09.954923 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.954939 kubelet[2713]: W0909 05:42:09.954931 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.954939 kubelet[2713]: E0909 05:42:09.954940 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.955215 kubelet[2713]: E0909 05:42:09.955122 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.955215 kubelet[2713]: W0909 05:42:09.955131 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.955215 kubelet[2713]: E0909 05:42:09.955138 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.955215 kubelet[2713]: I0909 05:42:09.955157 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/27062c35-7731-4470-8714-d5d5e0fdb01b-varrun\") pod \"csi-node-driver-l2qzh\" (UID: \"27062c35-7731-4470-8714-d5d5e0fdb01b\") " pod="calico-system/csi-node-driver-l2qzh" Sep 9 05:42:09.955408 kubelet[2713]: E0909 05:42:09.955382 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.955408 kubelet[2713]: W0909 05:42:09.955400 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.955482 kubelet[2713]: E0909 05:42:09.955413 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.955601 kubelet[2713]: E0909 05:42:09.955582 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.955601 kubelet[2713]: W0909 05:42:09.955594 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.955601 kubelet[2713]: E0909 05:42:09.955604 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.955853 kubelet[2713]: E0909 05:42:09.955826 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.955853 kubelet[2713]: W0909 05:42:09.955837 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.955853 kubelet[2713]: E0909 05:42:09.955845 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.955922 kubelet[2713]: I0909 05:42:09.955871 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27062c35-7731-4470-8714-d5d5e0fdb01b-kubelet-dir\") pod \"csi-node-driver-l2qzh\" (UID: \"27062c35-7731-4470-8714-d5d5e0fdb01b\") " pod="calico-system/csi-node-driver-l2qzh" Sep 9 05:42:09.956202 kubelet[2713]: E0909 05:42:09.956165 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.956202 kubelet[2713]: W0909 05:42:09.956194 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.956248 kubelet[2713]: E0909 05:42:09.956216 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.956448 kubelet[2713]: E0909 05:42:09.956422 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.956448 kubelet[2713]: W0909 05:42:09.956434 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.956448 kubelet[2713]: E0909 05:42:09.956442 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.956674 kubelet[2713]: E0909 05:42:09.956657 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.956674 kubelet[2713]: W0909 05:42:09.956670 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.956771 kubelet[2713]: E0909 05:42:09.956680 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.956771 kubelet[2713]: I0909 05:42:09.956741 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27062c35-7731-4470-8714-d5d5e0fdb01b-registration-dir\") pod \"csi-node-driver-l2qzh\" (UID: \"27062c35-7731-4470-8714-d5d5e0fdb01b\") " pod="calico-system/csi-node-driver-l2qzh" Sep 9 05:42:09.956985 kubelet[2713]: E0909 05:42:09.956958 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.957030 kubelet[2713]: W0909 05:42:09.956983 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.957030 kubelet[2713]: E0909 05:42:09.956997 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.957206 kubelet[2713]: E0909 05:42:09.957193 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.957228 kubelet[2713]: W0909 05:42:09.957204 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.957228 kubelet[2713]: E0909 05:42:09.957214 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.957442 kubelet[2713]: E0909 05:42:09.957430 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.957442 kubelet[2713]: W0909 05:42:09.957440 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.957493 kubelet[2713]: E0909 05:42:09.957451 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:09.957642 kubelet[2713]: E0909 05:42:09.957620 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:09.957698 kubelet[2713]: W0909 05:42:09.957643 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:09.957698 kubelet[2713]: E0909 05:42:09.957654 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:09.975292 containerd[1583]: time="2025-09-09T05:42:09.975220897Z" level=info msg="connecting to shim 92653de20d0e85ee531b86e7b71617847f353bec90f58cc2b2ed25f3a7c58cf3" address="unix:///run/containerd/s/28ad04c6ae4c1b6caaed5700d35bc0d5af84eea082c444d445c1ffe96b8ac77b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:10.022913 systemd[1]: Started cri-containerd-92653de20d0e85ee531b86e7b71617847f353bec90f58cc2b2ed25f3a7c58cf3.scope - libcontainer container 92653de20d0e85ee531b86e7b71617847f353bec90f58cc2b2ed25f3a7c58cf3. Sep 9 05:42:10.058098 kubelet[2713]: E0909 05:42:10.058052 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.058098 kubelet[2713]: W0909 05:42:10.058075 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.058098 kubelet[2713]: E0909 05:42:10.058098 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.058413 kubelet[2713]: E0909 05:42:10.058391 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.058446 kubelet[2713]: W0909 05:42:10.058413 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.058446 kubelet[2713]: E0909 05:42:10.058438 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.058833 kubelet[2713]: E0909 05:42:10.058807 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.058872 kubelet[2713]: W0909 05:42:10.058829 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.058872 kubelet[2713]: E0909 05:42:10.058852 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.059073 kubelet[2713]: E0909 05:42:10.059057 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.059073 kubelet[2713]: W0909 05:42:10.059068 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.059125 kubelet[2713]: E0909 05:42:10.059077 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.059268 kubelet[2713]: E0909 05:42:10.059255 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.059304 kubelet[2713]: W0909 05:42:10.059273 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.059304 kubelet[2713]: E0909 05:42:10.059282 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.059604 kubelet[2713]: E0909 05:42:10.059586 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.059650 kubelet[2713]: W0909 05:42:10.059604 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.059650 kubelet[2713]: E0909 05:42:10.059616 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.059955 kubelet[2713]: E0909 05:42:10.059929 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.059955 kubelet[2713]: W0909 05:42:10.059946 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.059955 kubelet[2713]: E0909 05:42:10.059957 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.060149 kubelet[2713]: E0909 05:42:10.060134 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.060149 kubelet[2713]: W0909 05:42:10.060144 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.060219 kubelet[2713]: E0909 05:42:10.060153 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.060464 containerd[1583]: time="2025-09-09T05:42:10.060419546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bb656,Uid:1b389e3c-b8ea-4170-9dce-5548fedbf803,Namespace:calico-system,Attempt:0,} returns sandbox id \"92653de20d0e85ee531b86e7b71617847f353bec90f58cc2b2ed25f3a7c58cf3\"" Sep 9 05:42:10.061348 kubelet[2713]: E0909 05:42:10.061328 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.061348 kubelet[2713]: W0909 05:42:10.061343 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.061438 kubelet[2713]: E0909 05:42:10.061355 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.062576 kubelet[2713]: E0909 05:42:10.062556 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.062576 kubelet[2713]: W0909 05:42:10.062569 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.062668 kubelet[2713]: E0909 05:42:10.062579 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.062890 kubelet[2713]: E0909 05:42:10.062871 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.062890 kubelet[2713]: W0909 05:42:10.062883 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.062890 kubelet[2713]: E0909 05:42:10.062891 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.063160 kubelet[2713]: E0909 05:42:10.063141 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.063160 kubelet[2713]: W0909 05:42:10.063155 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.063210 kubelet[2713]: E0909 05:42:10.063165 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.063776 kubelet[2713]: E0909 05:42:10.063756 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.063776 kubelet[2713]: W0909 05:42:10.063768 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.063776 kubelet[2713]: E0909 05:42:10.063777 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.064129 kubelet[2713]: E0909 05:42:10.064110 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.064129 kubelet[2713]: W0909 05:42:10.064121 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.064129 kubelet[2713]: E0909 05:42:10.064130 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.064360 kubelet[2713]: E0909 05:42:10.064342 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.064360 kubelet[2713]: W0909 05:42:10.064354 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.064412 kubelet[2713]: E0909 05:42:10.064362 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.064587 kubelet[2713]: E0909 05:42:10.064570 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.064587 kubelet[2713]: W0909 05:42:10.064581 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.064587 kubelet[2713]: E0909 05:42:10.064589 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.065063 kubelet[2713]: E0909 05:42:10.065044 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.065103 kubelet[2713]: W0909 05:42:10.065061 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.065103 kubelet[2713]: E0909 05:42:10.065073 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.065359 kubelet[2713]: E0909 05:42:10.065345 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.065384 kubelet[2713]: W0909 05:42:10.065357 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.065384 kubelet[2713]: E0909 05:42:10.065367 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.065601 kubelet[2713]: E0909 05:42:10.065578 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.065601 kubelet[2713]: W0909 05:42:10.065590 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.065601 kubelet[2713]: E0909 05:42:10.065600 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.065870 kubelet[2713]: E0909 05:42:10.065856 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.065870 kubelet[2713]: W0909 05:42:10.065867 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.065927 kubelet[2713]: E0909 05:42:10.065876 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.066109 kubelet[2713]: E0909 05:42:10.066096 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.066193 kubelet[2713]: W0909 05:42:10.066108 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.066193 kubelet[2713]: E0909 05:42:10.066117 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.066325 kubelet[2713]: E0909 05:42:10.066311 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.066350 kubelet[2713]: W0909 05:42:10.066323 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.066350 kubelet[2713]: E0909 05:42:10.066332 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.066587 kubelet[2713]: E0909 05:42:10.066574 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.066587 kubelet[2713]: W0909 05:42:10.066585 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.066661 kubelet[2713]: E0909 05:42:10.066595 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.066884 kubelet[2713]: E0909 05:42:10.066871 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.066884 kubelet[2713]: W0909 05:42:10.066882 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.066942 kubelet[2713]: E0909 05:42:10.066892 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:10.067293 kubelet[2713]: E0909 05:42:10.067280 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.067329 kubelet[2713]: W0909 05:42:10.067291 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.067329 kubelet[2713]: E0909 05:42:10.067302 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:10.076541 kubelet[2713]: E0909 05:42:10.076489 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:10.076541 kubelet[2713]: W0909 05:42:10.076513 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:10.076541 kubelet[2713]: E0909 05:42:10.076533 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:11.144279 kubelet[2713]: E0909 05:42:11.144231 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2qzh" podUID="27062c35-7731-4470-8714-d5d5e0fdb01b" Sep 9 05:42:11.531599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747140058.mount: Deactivated successfully. 
Sep 9 05:42:13.145261 kubelet[2713]: E0909 05:42:13.144704 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2qzh" podUID="27062c35-7731-4470-8714-d5d5e0fdb01b" Sep 9 05:42:13.226543 containerd[1583]: time="2025-09-09T05:42:13.226493139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:13.228188 containerd[1583]: time="2025-09-09T05:42:13.227744631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 05:42:13.229132 containerd[1583]: time="2025-09-09T05:42:13.229085181Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:13.231424 containerd[1583]: time="2025-09-09T05:42:13.231380054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:13.232141 containerd[1583]: time="2025-09-09T05:42:13.232098300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.492836387s" Sep 9 05:42:13.232141 containerd[1583]: time="2025-09-09T05:42:13.232136011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 05:42:13.233116 containerd[1583]: time="2025-09-09T05:42:13.233091184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 05:42:13.245943 containerd[1583]: time="2025-09-09T05:42:13.245886220Z" level=info msg="CreateContainer within sandbox \"f29cca80123bc5fa63cd5bc878baa8f9f00ad8938687a177f7c8be9b49233f36\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 05:42:13.256775 containerd[1583]: time="2025-09-09T05:42:13.254668890Z" level=info msg="Container bc23d332ad420c4c943c90533344d7aa144af5d5cbcb8ce88711740c8feddd60: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:13.267135 containerd[1583]: time="2025-09-09T05:42:13.267092765Z" level=info msg="CreateContainer within sandbox \"f29cca80123bc5fa63cd5bc878baa8f9f00ad8938687a177f7c8be9b49233f36\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bc23d332ad420c4c943c90533344d7aa144af5d5cbcb8ce88711740c8feddd60\"" Sep 9 05:42:13.267774 containerd[1583]: time="2025-09-09T05:42:13.267704820Z" level=info msg="StartContainer for \"bc23d332ad420c4c943c90533344d7aa144af5d5cbcb8ce88711740c8feddd60\"" Sep 9 05:42:13.269071 containerd[1583]: time="2025-09-09T05:42:13.269036544Z" level=info msg="connecting to shim bc23d332ad420c4c943c90533344d7aa144af5d5cbcb8ce88711740c8feddd60" address="unix:///run/containerd/s/7513ef113788f77c75724476f85b51d67bd890eb800e62d838ad625880a4d5a5" protocol=ttrpc version=3 Sep 9 05:42:13.292907 systemd[1]: Started cri-containerd-bc23d332ad420c4c943c90533344d7aa144af5d5cbcb8ce88711740c8feddd60.scope - libcontainer container bc23d332ad420c4c943c90533344d7aa144af5d5cbcb8ce88711740c8feddd60. 
Sep 9 05:42:13.356241 containerd[1583]: time="2025-09-09T05:42:13.356175982Z" level=info msg="StartContainer for \"bc23d332ad420c4c943c90533344d7aa144af5d5cbcb8ce88711740c8feddd60\" returns successfully" Sep 9 05:42:14.227381 kubelet[2713]: E0909 05:42:14.227351 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:14.264747 kubelet[2713]: E0909 05:42:14.264687 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.264747 kubelet[2713]: W0909 05:42:14.264733 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.264952 kubelet[2713]: E0909 05:42:14.264761 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.264952 kubelet[2713]: E0909 05:42:14.264936 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.264952 kubelet[2713]: W0909 05:42:14.264947 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.265041 kubelet[2713]: E0909 05:42:14.264959 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.265179 kubelet[2713]: E0909 05:42:14.265158 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.265179 kubelet[2713]: W0909 05:42:14.265171 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.265298 kubelet[2713]: E0909 05:42:14.265181 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.265462 kubelet[2713]: E0909 05:42:14.265447 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.265462 kubelet[2713]: W0909 05:42:14.265458 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.265563 kubelet[2713]: E0909 05:42:14.265469 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.265815 kubelet[2713]: E0909 05:42:14.265784 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.265815 kubelet[2713]: W0909 05:42:14.265798 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.265815 kubelet[2713]: E0909 05:42:14.265810 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.265990 kubelet[2713]: E0909 05:42:14.265974 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.265990 kubelet[2713]: W0909 05:42:14.265985 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.266076 kubelet[2713]: E0909 05:42:14.265994 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.266164 kubelet[2713]: E0909 05:42:14.266150 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.266164 kubelet[2713]: W0909 05:42:14.266160 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.266258 kubelet[2713]: E0909 05:42:14.266169 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.266346 kubelet[2713]: E0909 05:42:14.266330 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.266346 kubelet[2713]: W0909 05:42:14.266342 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.266438 kubelet[2713]: E0909 05:42:14.266351 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.266546 kubelet[2713]: E0909 05:42:14.266531 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.266546 kubelet[2713]: W0909 05:42:14.266542 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.266641 kubelet[2713]: E0909 05:42:14.266551 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.266791 kubelet[2713]: E0909 05:42:14.266776 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.266791 kubelet[2713]: W0909 05:42:14.266789 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.266866 kubelet[2713]: E0909 05:42:14.266800 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.266981 kubelet[2713]: E0909 05:42:14.266968 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.267005 kubelet[2713]: W0909 05:42:14.266979 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.267005 kubelet[2713]: E0909 05:42:14.266989 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.267165 kubelet[2713]: E0909 05:42:14.267145 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.267165 kubelet[2713]: W0909 05:42:14.267156 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.267220 kubelet[2713]: E0909 05:42:14.267165 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.267336 kubelet[2713]: E0909 05:42:14.267324 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.267360 kubelet[2713]: W0909 05:42:14.267334 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.267360 kubelet[2713]: E0909 05:42:14.267343 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.267520 kubelet[2713]: E0909 05:42:14.267508 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.267541 kubelet[2713]: W0909 05:42:14.267518 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.267541 kubelet[2713]: E0909 05:42:14.267527 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.267723 kubelet[2713]: E0909 05:42:14.267700 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.267747 kubelet[2713]: W0909 05:42:14.267726 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.267747 kubelet[2713]: E0909 05:42:14.267735 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.296167 kubelet[2713]: E0909 05:42:14.296123 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.296167 kubelet[2713]: W0909 05:42:14.296148 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.296167 kubelet[2713]: E0909 05:42:14.296170 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.296428 kubelet[2713]: E0909 05:42:14.296365 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.296428 kubelet[2713]: W0909 05:42:14.296377 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.296428 kubelet[2713]: E0909 05:42:14.296387 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.296595 kubelet[2713]: E0909 05:42:14.296580 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.296595 kubelet[2713]: W0909 05:42:14.296593 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.296669 kubelet[2713]: E0909 05:42:14.296602 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.296794 kubelet[2713]: E0909 05:42:14.296781 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.296794 kubelet[2713]: W0909 05:42:14.296789 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.296854 kubelet[2713]: E0909 05:42:14.296796 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.296944 kubelet[2713]: E0909 05:42:14.296931 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.296944 kubelet[2713]: W0909 05:42:14.296939 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.297013 kubelet[2713]: E0909 05:42:14.296945 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.297126 kubelet[2713]: E0909 05:42:14.297112 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.297126 kubelet[2713]: W0909 05:42:14.297121 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.297185 kubelet[2713]: E0909 05:42:14.297128 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.297378 kubelet[2713]: E0909 05:42:14.297363 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.297378 kubelet[2713]: W0909 05:42:14.297374 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.297460 kubelet[2713]: E0909 05:42:14.297383 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.297562 kubelet[2713]: E0909 05:42:14.297549 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.297562 kubelet[2713]: W0909 05:42:14.297558 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.297643 kubelet[2713]: E0909 05:42:14.297565 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.297764 kubelet[2713]: E0909 05:42:14.297751 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.297764 kubelet[2713]: W0909 05:42:14.297760 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.297841 kubelet[2713]: E0909 05:42:14.297768 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.297932 kubelet[2713]: E0909 05:42:14.297919 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.297932 kubelet[2713]: W0909 05:42:14.297927 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.298001 kubelet[2713]: E0909 05:42:14.297934 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.298120 kubelet[2713]: E0909 05:42:14.298107 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.298120 kubelet[2713]: W0909 05:42:14.298114 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.298186 kubelet[2713]: E0909 05:42:14.298121 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.298357 kubelet[2713]: E0909 05:42:14.298334 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.298357 kubelet[2713]: W0909 05:42:14.298344 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.298357 kubelet[2713]: E0909 05:42:14.298352 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.298514 kubelet[2713]: E0909 05:42:14.298502 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.298514 kubelet[2713]: W0909 05:42:14.298510 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.298597 kubelet[2713]: E0909 05:42:14.298517 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.298694 kubelet[2713]: E0909 05:42:14.298681 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.298694 kubelet[2713]: W0909 05:42:14.298689 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.298778 kubelet[2713]: E0909 05:42:14.298696 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.298858 kubelet[2713]: E0909 05:42:14.298846 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.298858 kubelet[2713]: W0909 05:42:14.298854 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.298923 kubelet[2713]: E0909 05:42:14.298861 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.299035 kubelet[2713]: E0909 05:42:14.299023 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.299035 kubelet[2713]: W0909 05:42:14.299031 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.299107 kubelet[2713]: E0909 05:42:14.299038 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:14.299273 kubelet[2713]: E0909 05:42:14.299260 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.299273 kubelet[2713]: W0909 05:42:14.299269 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.299349 kubelet[2713]: E0909 05:42:14.299277 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:42:14.299661 kubelet[2713]: E0909 05:42:14.299649 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:42:14.299661 kubelet[2713]: W0909 05:42:14.299658 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:42:14.299743 kubelet[2713]: E0909 05:42:14.299665 2713 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:42:15.098005 containerd[1583]: time="2025-09-09T05:42:15.097963127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:15.098894 containerd[1583]: time="2025-09-09T05:42:15.098864608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 05:42:15.100335 containerd[1583]: time="2025-09-09T05:42:15.100275049Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:15.102620 containerd[1583]: time="2025-09-09T05:42:15.102575848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:15.103286 containerd[1583]: time="2025-09-09T05:42:15.103245322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.870039811s" Sep 9 05:42:15.103286 containerd[1583]: time="2025-09-09T05:42:15.103281620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 05:42:15.108167 containerd[1583]: time="2025-09-09T05:42:15.108111692Z" level=info msg="CreateContainer within sandbox \"92653de20d0e85ee531b86e7b71617847f353bec90f58cc2b2ed25f3a7c58cf3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 05:42:15.115891 containerd[1583]: time="2025-09-09T05:42:15.115844067Z" level=info msg="Container 73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:15.125963 containerd[1583]: time="2025-09-09T05:42:15.125897933Z" level=info msg="CreateContainer within sandbox \"92653de20d0e85ee531b86e7b71617847f353bec90f58cc2b2ed25f3a7c58cf3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63\"" Sep 9 05:42:15.126459 containerd[1583]: time="2025-09-09T05:42:15.126433634Z" level=info msg="StartContainer for \"73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63\"" Sep 9 05:42:15.128011 containerd[1583]: time="2025-09-09T05:42:15.127981704Z" level=info msg="connecting to shim 73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63" address="unix:///run/containerd/s/28ad04c6ae4c1b6caaed5700d35bc0d5af84eea082c444d445c1ffe96b8ac77b" protocol=ttrpc version=3 Sep 9 05:42:15.146272 kubelet[2713]: E0909 05:42:15.145952 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2qzh" podUID="27062c35-7731-4470-8714-d5d5e0fdb01b" Sep 9 05:42:15.152892 systemd[1]: Started cri-containerd-73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63.scope - libcontainer container 73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63. Sep 9 05:42:15.200319 containerd[1583]: time="2025-09-09T05:42:15.200263637Z" level=info msg="StartContainer for \"73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63\" returns successfully" Sep 9 05:42:15.207399 systemd[1]: cri-containerd-73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63.scope: Deactivated successfully. Sep 9 05:42:15.209404 containerd[1583]: time="2025-09-09T05:42:15.209360617Z" level=info msg="received exit event container_id:\"73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63\" id:\"73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63\" pid:3474 exited_at:{seconds:1757396535 nanos:209047457}" Sep 9 05:42:15.209543 containerd[1583]: time="2025-09-09T05:42:15.209383430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63\" id:\"73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63\" pid:3474 exited_at:{seconds:1757396535 nanos:209047457}" Sep 9 05:42:15.231661 kubelet[2713]: I0909 05:42:15.231623 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:42:15.232031 kubelet[2713]: E0909 05:42:15.231948 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:15.235937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73c1b8cb05d04cd6e80caaaee7a3e9e24973ec33f4dcf219dea4da83ffa6db63-rootfs.mount: Deactivated successfully. 
Sep 9 05:42:15.247395 kubelet[2713]: I0909 05:42:15.247316 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-767cd7498-5mq8l" podStartSLOduration=2.753268554 podStartE2EDuration="6.247296389s" podCreationTimestamp="2025-09-09 05:42:09 +0000 UTC" firstStartedPulling="2025-09-09 05:42:09.738956156 +0000 UTC m=+17.684602121" lastFinishedPulling="2025-09-09 05:42:13.232983991 +0000 UTC m=+21.178629956" observedRunningTime="2025-09-09 05:42:14.235838857 +0000 UTC m=+22.181484822" watchObservedRunningTime="2025-09-09 05:42:15.247296389 +0000 UTC m=+23.192942354" Sep 9 05:42:16.235966 containerd[1583]: time="2025-09-09T05:42:16.235909715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 05:42:17.144631 kubelet[2713]: E0909 05:42:17.144544 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2qzh" podUID="27062c35-7731-4470-8714-d5d5e0fdb01b" Sep 9 05:42:19.144458 kubelet[2713]: E0909 05:42:19.144392 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2qzh" podUID="27062c35-7731-4470-8714-d5d5e0fdb01b" Sep 9 05:42:20.823404 containerd[1583]: time="2025-09-09T05:42:20.823331230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:20.824169 containerd[1583]: time="2025-09-09T05:42:20.824131587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 05:42:20.825517 containerd[1583]: 
time="2025-09-09T05:42:20.825432988Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:20.827611 containerd[1583]: time="2025-09-09T05:42:20.827566727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:20.828159 containerd[1583]: time="2025-09-09T05:42:20.828127353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.592180888s" Sep 9 05:42:20.828159 containerd[1583]: time="2025-09-09T05:42:20.828156237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 05:42:20.833446 containerd[1583]: time="2025-09-09T05:42:20.833411165Z" level=info msg="CreateContainer within sandbox \"92653de20d0e85ee531b86e7b71617847f353bec90f58cc2b2ed25f3a7c58cf3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 05:42:20.844786 containerd[1583]: time="2025-09-09T05:42:20.844741786Z" level=info msg="Container a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:20.855734 containerd[1583]: time="2025-09-09T05:42:20.855649359Z" level=info msg="CreateContainer within sandbox \"92653de20d0e85ee531b86e7b71617847f353bec90f58cc2b2ed25f3a7c58cf3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686\"" Sep 9 
05:42:20.859732 containerd[1583]: time="2025-09-09T05:42:20.857759694Z" level=info msg="StartContainer for \"a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686\"" Sep 9 05:42:20.859732 containerd[1583]: time="2025-09-09T05:42:20.859327577Z" level=info msg="connecting to shim a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686" address="unix:///run/containerd/s/28ad04c6ae4c1b6caaed5700d35bc0d5af84eea082c444d445c1ffe96b8ac77b" protocol=ttrpc version=3 Sep 9 05:42:20.882877 systemd[1]: Started cri-containerd-a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686.scope - libcontainer container a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686. Sep 9 05:42:20.932103 containerd[1583]: time="2025-09-09T05:42:20.932050997Z" level=info msg="StartContainer for \"a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686\" returns successfully" Sep 9 05:42:21.145061 kubelet[2713]: E0909 05:42:21.144868 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2qzh" podUID="27062c35-7731-4470-8714-d5d5e0fdb01b" Sep 9 05:42:23.144847 kubelet[2713]: E0909 05:42:23.144788 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l2qzh" podUID="27062c35-7731-4470-8714-d5d5e0fdb01b" Sep 9 05:42:24.082785 systemd[1]: cri-containerd-a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686.scope: Deactivated successfully. 
Sep 9 05:42:24.083149 systemd[1]: cri-containerd-a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686.scope: Consumed 565ms CPU time, 181.3M memory peak, 4.4M read from disk, 171.3M written to disk. Sep 9 05:42:24.086198 containerd[1583]: time="2025-09-09T05:42:24.086146461Z" level=info msg="received exit event container_id:\"a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686\" id:\"a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686\" pid:3533 exited_at:{seconds:1757396544 nanos:85935232}" Sep 9 05:42:24.086675 containerd[1583]: time="2025-09-09T05:42:24.086256287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686\" id:\"a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686\" pid:3533 exited_at:{seconds:1757396544 nanos:85935232}" Sep 9 05:42:24.089437 containerd[1583]: time="2025-09-09T05:42:24.089369124Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:42:24.111264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4914f57e5e6183f25d87711c8d7b0c82b6f8736ed03a524249eb4b334452686-rootfs.mount: Deactivated successfully. Sep 9 05:42:24.160243 kubelet[2713]: I0909 05:42:24.160196 2713 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 05:42:24.938520 systemd[1]: Created slice kubepods-burstable-pod39fc3c9d_1597_46bb_be3b_930a1925c3d7.slice - libcontainer container kubepods-burstable-pod39fc3c9d_1597_46bb_be3b_930a1925c3d7.slice. Sep 9 05:42:24.950359 systemd[1]: Created slice kubepods-burstable-podf13a110c_4310_4f51_b85e_910b01fecb27.slice - libcontainer container kubepods-burstable-podf13a110c_4310_4f51_b85e_910b01fecb27.slice. 
Sep 9 05:42:24.959595 systemd[1]: Created slice kubepods-besteffort-pod0845cde2_e85c_4079_a6ed_7f136bca6416.slice - libcontainer container kubepods-besteffort-pod0845cde2_e85c_4079_a6ed_7f136bca6416.slice. Sep 9 05:42:24.967591 kubelet[2713]: I0909 05:42:24.967504 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0845cde2-e85c-4079-a6ed-7f136bca6416-tigera-ca-bundle\") pod \"calico-kube-controllers-5f6b655cfb-hwt2z\" (UID: \"0845cde2-e85c-4079-a6ed-7f136bca6416\") " pod="calico-system/calico-kube-controllers-5f6b655cfb-hwt2z" Sep 9 05:42:24.967591 kubelet[2713]: I0909 05:42:24.967556 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzq7x\" (UniqueName: \"kubernetes.io/projected/39fc3c9d-1597-46bb-be3b-930a1925c3d7-kube-api-access-tzq7x\") pod \"coredns-674b8bbfcf-ksb48\" (UID: \"39fc3c9d-1597-46bb-be3b-930a1925c3d7\") " pod="kube-system/coredns-674b8bbfcf-ksb48" Sep 9 05:42:24.967591 kubelet[2713]: I0909 05:42:24.967580 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmr2x\" (UniqueName: \"kubernetes.io/projected/f13a110c-4310-4f51-b85e-910b01fecb27-kube-api-access-dmr2x\") pod \"coredns-674b8bbfcf-dvh8r\" (UID: \"f13a110c-4310-4f51-b85e-910b01fecb27\") " pod="kube-system/coredns-674b8bbfcf-dvh8r" Sep 9 05:42:24.967857 kubelet[2713]: I0909 05:42:24.967618 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f13a110c-4310-4f51-b85e-910b01fecb27-config-volume\") pod \"coredns-674b8bbfcf-dvh8r\" (UID: \"f13a110c-4310-4f51-b85e-910b01fecb27\") " pod="kube-system/coredns-674b8bbfcf-dvh8r" Sep 9 05:42:24.967857 kubelet[2713]: I0909 05:42:24.967658 2713 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt842\" (UniqueName: \"kubernetes.io/projected/0845cde2-e85c-4079-a6ed-7f136bca6416-kube-api-access-lt842\") pod \"calico-kube-controllers-5f6b655cfb-hwt2z\" (UID: \"0845cde2-e85c-4079-a6ed-7f136bca6416\") " pod="calico-system/calico-kube-controllers-5f6b655cfb-hwt2z" Sep 9 05:42:24.967857 kubelet[2713]: I0909 05:42:24.967681 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39fc3c9d-1597-46bb-be3b-930a1925c3d7-config-volume\") pod \"coredns-674b8bbfcf-ksb48\" (UID: \"39fc3c9d-1597-46bb-be3b-930a1925c3d7\") " pod="kube-system/coredns-674b8bbfcf-ksb48" Sep 9 05:42:24.969315 systemd[1]: Created slice kubepods-besteffort-pod30dc47f2_f3b7_46db_bf49_78384d89673d.slice - libcontainer container kubepods-besteffort-pod30dc47f2_f3b7_46db_bf49_78384d89673d.slice. Sep 9 05:42:24.978337 systemd[1]: Created slice kubepods-besteffort-pod136f3715_9e30_4afc_89b7_d12a62b46ac0.slice - libcontainer container kubepods-besteffort-pod136f3715_9e30_4afc_89b7_d12a62b46ac0.slice. Sep 9 05:42:24.983052 systemd[1]: Created slice kubepods-besteffort-pod7e528bc2_eb93_4f51_aa36_c14d71a5e78c.slice - libcontainer container kubepods-besteffort-pod7e528bc2_eb93_4f51_aa36_c14d71a5e78c.slice. Sep 9 05:42:24.991926 systemd[1]: Created slice kubepods-besteffort-podb4934adc_622b_464a_865d_a0a030e69140.slice - libcontainer container kubepods-besteffort-podb4934adc_622b_464a_865d_a0a030e69140.slice. 
Sep 9 05:42:25.068531 kubelet[2713]: I0909 05:42:25.068451 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/136f3715-9e30-4afc-89b7-d12a62b46ac0-calico-apiserver-certs\") pod \"calico-apiserver-649ff699f7-l7lvk\" (UID: \"136f3715-9e30-4afc-89b7-d12a62b46ac0\") " pod="calico-apiserver/calico-apiserver-649ff699f7-l7lvk" Sep 9 05:42:25.068531 kubelet[2713]: I0909 05:42:25.068502 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs2bj\" (UniqueName: \"kubernetes.io/projected/b4934adc-622b-464a-865d-a0a030e69140-kube-api-access-xs2bj\") pod \"goldmane-54d579b49d-6vtv8\" (UID: \"b4934adc-622b-464a-865d-a0a030e69140\") " pod="calico-system/goldmane-54d579b49d-6vtv8" Sep 9 05:42:25.068531 kubelet[2713]: I0909 05:42:25.068525 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30dc47f2-f3b7-46db-bf49-78384d89673d-whisker-backend-key-pair\") pod \"whisker-587464b7b9-zwrlw\" (UID: \"30dc47f2-f3b7-46db-bf49-78384d89673d\") " pod="calico-system/whisker-587464b7b9-zwrlw" Sep 9 05:42:25.068531 kubelet[2713]: I0909 05:42:25.068543 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjm7t\" (UniqueName: \"kubernetes.io/projected/30dc47f2-f3b7-46db-bf49-78384d89673d-kube-api-access-fjm7t\") pod \"whisker-587464b7b9-zwrlw\" (UID: \"30dc47f2-f3b7-46db-bf49-78384d89673d\") " pod="calico-system/whisker-587464b7b9-zwrlw" Sep 9 05:42:25.068835 kubelet[2713]: I0909 05:42:25.068564 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4934adc-622b-464a-865d-a0a030e69140-config\") pod \"goldmane-54d579b49d-6vtv8\" (UID: 
\"b4934adc-622b-464a-865d-a0a030e69140\") " pod="calico-system/goldmane-54d579b49d-6vtv8" Sep 9 05:42:25.068835 kubelet[2713]: I0909 05:42:25.068588 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42zf\" (UniqueName: \"kubernetes.io/projected/7e528bc2-eb93-4f51-aa36-c14d71a5e78c-kube-api-access-g42zf\") pod \"calico-apiserver-649ff699f7-lr4gx\" (UID: \"7e528bc2-eb93-4f51-aa36-c14d71a5e78c\") " pod="calico-apiserver/calico-apiserver-649ff699f7-lr4gx" Sep 9 05:42:25.068888 kubelet[2713]: I0909 05:42:25.068822 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b4934adc-622b-464a-865d-a0a030e69140-goldmane-key-pair\") pod \"goldmane-54d579b49d-6vtv8\" (UID: \"b4934adc-622b-464a-865d-a0a030e69140\") " pod="calico-system/goldmane-54d579b49d-6vtv8" Sep 9 05:42:25.069155 kubelet[2713]: I0909 05:42:25.069105 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30dc47f2-f3b7-46db-bf49-78384d89673d-whisker-ca-bundle\") pod \"whisker-587464b7b9-zwrlw\" (UID: \"30dc47f2-f3b7-46db-bf49-78384d89673d\") " pod="calico-system/whisker-587464b7b9-zwrlw" Sep 9 05:42:25.069155 kubelet[2713]: I0909 05:42:25.069142 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-765qx\" (UniqueName: \"kubernetes.io/projected/136f3715-9e30-4afc-89b7-d12a62b46ac0-kube-api-access-765qx\") pod \"calico-apiserver-649ff699f7-l7lvk\" (UID: \"136f3715-9e30-4afc-89b7-d12a62b46ac0\") " pod="calico-apiserver/calico-apiserver-649ff699f7-l7lvk" Sep 9 05:42:25.069242 kubelet[2713]: I0909 05:42:25.069162 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b4934adc-622b-464a-865d-a0a030e69140-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-6vtv8\" (UID: \"b4934adc-622b-464a-865d-a0a030e69140\") " pod="calico-system/goldmane-54d579b49d-6vtv8" Sep 9 05:42:25.069242 kubelet[2713]: I0909 05:42:25.069209 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e528bc2-eb93-4f51-aa36-c14d71a5e78c-calico-apiserver-certs\") pod \"calico-apiserver-649ff699f7-lr4gx\" (UID: \"7e528bc2-eb93-4f51-aa36-c14d71a5e78c\") " pod="calico-apiserver/calico-apiserver-649ff699f7-lr4gx" Sep 9 05:42:25.150426 systemd[1]: Created slice kubepods-besteffort-pod27062c35_7731_4470_8714_d5d5e0fdb01b.slice - libcontainer container kubepods-besteffort-pod27062c35_7731_4470_8714_d5d5e0fdb01b.slice. Sep 9 05:42:25.153107 containerd[1583]: time="2025-09-09T05:42:25.153061254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l2qzh,Uid:27062c35-7731-4470-8714-d5d5e0fdb01b,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:25.243100 containerd[1583]: time="2025-09-09T05:42:25.242967756Z" level=error msg="Failed to destroy network for sandbox \"9af99ea86afcfea78c0f1b86b141d7f31ec52d7851256aa3fc8adbeb2275e3ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.244933 containerd[1583]: time="2025-09-09T05:42:25.244887608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l2qzh,Uid:27062c35-7731-4470-8714-d5d5e0fdb01b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af99ea86afcfea78c0f1b86b141d7f31ec52d7851256aa3fc8adbeb2275e3ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.246025 kubelet[2713]: E0909 05:42:25.245993 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:25.246648 containerd[1583]: time="2025-09-09T05:42:25.246622152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ksb48,Uid:39fc3c9d-1597-46bb-be3b-930a1925c3d7,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:25.251721 kubelet[2713]: E0909 05:42:25.251651 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af99ea86afcfea78c0f1b86b141d7f31ec52d7851256aa3fc8adbeb2275e3ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.251901 kubelet[2713]: E0909 05:42:25.251731 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af99ea86afcfea78c0f1b86b141d7f31ec52d7851256aa3fc8adbeb2275e3ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l2qzh" Sep 9 05:42:25.251901 kubelet[2713]: E0909 05:42:25.251754 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af99ea86afcfea78c0f1b86b141d7f31ec52d7851256aa3fc8adbeb2275e3ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l2qzh" Sep 9 05:42:25.251901 kubelet[2713]: 
E0909 05:42:25.251808 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l2qzh_calico-system(27062c35-7731-4470-8714-d5d5e0fdb01b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l2qzh_calico-system(27062c35-7731-4470-8714-d5d5e0fdb01b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9af99ea86afcfea78c0f1b86b141d7f31ec52d7851256aa3fc8adbeb2275e3ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l2qzh" podUID="27062c35-7731-4470-8714-d5d5e0fdb01b" Sep 9 05:42:25.257466 kubelet[2713]: E0909 05:42:25.257246 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:25.258883 containerd[1583]: time="2025-09-09T05:42:25.258825530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvh8r,Uid:f13a110c-4310-4f51-b85e-910b01fecb27,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:25.261316 containerd[1583]: time="2025-09-09T05:42:25.260824431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 05:42:25.267466 containerd[1583]: time="2025-09-09T05:42:25.267406293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6b655cfb-hwt2z,Uid:0845cde2-e85c-4079-a6ed-7f136bca6416,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:25.272963 containerd[1583]: time="2025-09-09T05:42:25.272925106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-587464b7b9-zwrlw,Uid:30dc47f2-f3b7-46db-bf49-78384d89673d,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:25.282959 containerd[1583]: time="2025-09-09T05:42:25.282914509Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-649ff699f7-l7lvk,Uid:136f3715-9e30-4afc-89b7-d12a62b46ac0,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:42:25.287897 containerd[1583]: time="2025-09-09T05:42:25.287845776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649ff699f7-lr4gx,Uid:7e528bc2-eb93-4f51-aa36-c14d71a5e78c,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:42:25.296535 containerd[1583]: time="2025-09-09T05:42:25.296494888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6vtv8,Uid:b4934adc-622b-464a-865d-a0a030e69140,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:25.309460 containerd[1583]: time="2025-09-09T05:42:25.309399846Z" level=error msg="Failed to destroy network for sandbox \"a1fdd3f4546f81625595f16d2f556b8b60ff115513da0c3c5dbcfd8f9e002e9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.325317 containerd[1583]: time="2025-09-09T05:42:25.325240708Z" level=error msg="Failed to destroy network for sandbox \"9f32e8c874f29ef6cda4d459db7c00aa3bfce6d2b0c9e3725ccc369d53778441\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.328660 containerd[1583]: time="2025-09-09T05:42:25.328544634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ksb48,Uid:39fc3c9d-1597-46bb-be3b-930a1925c3d7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1fdd3f4546f81625595f16d2f556b8b60ff115513da0c3c5dbcfd8f9e002e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 
05:42:25.328913 kubelet[2713]: E0909 05:42:25.328867 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1fdd3f4546f81625595f16d2f556b8b60ff115513da0c3c5dbcfd8f9e002e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.328998 kubelet[2713]: E0909 05:42:25.328945 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1fdd3f4546f81625595f16d2f556b8b60ff115513da0c3c5dbcfd8f9e002e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ksb48" Sep 9 05:42:25.328998 kubelet[2713]: E0909 05:42:25.328973 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1fdd3f4546f81625595f16d2f556b8b60ff115513da0c3c5dbcfd8f9e002e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-ksb48" Sep 9 05:42:25.329146 kubelet[2713]: E0909 05:42:25.329031 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-ksb48_kube-system(39fc3c9d-1597-46bb-be3b-930a1925c3d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-ksb48_kube-system(39fc3c9d-1597-46bb-be3b-930a1925c3d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1fdd3f4546f81625595f16d2f556b8b60ff115513da0c3c5dbcfd8f9e002e9e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-ksb48" podUID="39fc3c9d-1597-46bb-be3b-930a1925c3d7" Sep 9 05:42:25.344742 containerd[1583]: time="2025-09-09T05:42:25.344621680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvh8r,Uid:f13a110c-4310-4f51-b85e-910b01fecb27,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f32e8c874f29ef6cda4d459db7c00aa3bfce6d2b0c9e3725ccc369d53778441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.346959 kubelet[2713]: E0909 05:42:25.345933 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f32e8c874f29ef6cda4d459db7c00aa3bfce6d2b0c9e3725ccc369d53778441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.346959 kubelet[2713]: E0909 05:42:25.346019 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f32e8c874f29ef6cda4d459db7c00aa3bfce6d2b0c9e3725ccc369d53778441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvh8r" Sep 9 05:42:25.346959 kubelet[2713]: E0909 05:42:25.346055 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9f32e8c874f29ef6cda4d459db7c00aa3bfce6d2b0c9e3725ccc369d53778441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvh8r" Sep 9 05:42:25.347137 kubelet[2713]: E0909 05:42:25.346121 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dvh8r_kube-system(f13a110c-4310-4f51-b85e-910b01fecb27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dvh8r_kube-system(f13a110c-4310-4f51-b85e-910b01fecb27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f32e8c874f29ef6cda4d459db7c00aa3bfce6d2b0c9e3725ccc369d53778441\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dvh8r" podUID="f13a110c-4310-4f51-b85e-910b01fecb27" Sep 9 05:42:25.405899 containerd[1583]: time="2025-09-09T05:42:25.405845911Z" level=error msg="Failed to destroy network for sandbox \"502d606095fd5aa2a14af2c42a765ac0981b16383d61f5be0477ab24fe271559\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.408554 containerd[1583]: time="2025-09-09T05:42:25.408484555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6b655cfb-hwt2z,Uid:0845cde2-e85c-4079-a6ed-7f136bca6416,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"502d606095fd5aa2a14af2c42a765ac0981b16383d61f5be0477ab24fe271559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.409208 kubelet[2713]: E0909 05:42:25.409163 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502d606095fd5aa2a14af2c42a765ac0981b16383d61f5be0477ab24fe271559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.409311 kubelet[2713]: E0909 05:42:25.409266 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502d606095fd5aa2a14af2c42a765ac0981b16383d61f5be0477ab24fe271559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f6b655cfb-hwt2z" Sep 9 05:42:25.409403 kubelet[2713]: E0909 05:42:25.409318 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502d606095fd5aa2a14af2c42a765ac0981b16383d61f5be0477ab24fe271559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f6b655cfb-hwt2z" Sep 9 05:42:25.409586 kubelet[2713]: E0909 05:42:25.409439 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f6b655cfb-hwt2z_calico-system(0845cde2-e85c-4079-a6ed-7f136bca6416)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f6b655cfb-hwt2z_calico-system(0845cde2-e85c-4079-a6ed-7f136bca6416)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"502d606095fd5aa2a14af2c42a765ac0981b16383d61f5be0477ab24fe271559\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f6b655cfb-hwt2z" podUID="0845cde2-e85c-4079-a6ed-7f136bca6416" Sep 9 05:42:25.436769 containerd[1583]: time="2025-09-09T05:42:25.436697043Z" level=error msg="Failed to destroy network for sandbox \"b6694323332e550967cb31021ed83a82194510e19785628a32085c1159f50d95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.438538 containerd[1583]: time="2025-09-09T05:42:25.438494835Z" level=error msg="Failed to destroy network for sandbox \"1988bdcbca2553a8a03dcacc0ff9c4f3ceaff5e1a29a56ad5444ca38b38b38bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.439068 containerd[1583]: time="2025-09-09T05:42:25.439038447Z" level=error msg="Failed to destroy network for sandbox \"c927cb372b55858dfad957b41a4b8ab5ad779083e944762db4f3e2091eec1159\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.440053 containerd[1583]: time="2025-09-09T05:42:25.439945604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6vtv8,Uid:b4934adc-622b-464a-865d-a0a030e69140,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6694323332e550967cb31021ed83a82194510e19785628a32085c1159f50d95\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.440291 kubelet[2713]: E0909 05:42:25.440244 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6694323332e550967cb31021ed83a82194510e19785628a32085c1159f50d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.440385 kubelet[2713]: E0909 05:42:25.440321 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6694323332e550967cb31021ed83a82194510e19785628a32085c1159f50d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6vtv8" Sep 9 05:42:25.440385 kubelet[2713]: E0909 05:42:25.440348 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6694323332e550967cb31021ed83a82194510e19785628a32085c1159f50d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6vtv8" Sep 9 05:42:25.440464 kubelet[2713]: E0909 05:42:25.440427 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-6vtv8_calico-system(b4934adc-622b-464a-865d-a0a030e69140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-6vtv8_calico-system(b4934adc-622b-464a-865d-a0a030e69140)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"b6694323332e550967cb31021ed83a82194510e19785628a32085c1159f50d95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-6vtv8" podUID="b4934adc-622b-464a-865d-a0a030e69140" Sep 9 05:42:25.441281 containerd[1583]: time="2025-09-09T05:42:25.441241011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649ff699f7-lr4gx,Uid:7e528bc2-eb93-4f51-aa36-c14d71a5e78c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1988bdcbca2553a8a03dcacc0ff9c4f3ceaff5e1a29a56ad5444ca38b38b38bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.441664 kubelet[2713]: E0909 05:42:25.441628 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1988bdcbca2553a8a03dcacc0ff9c4f3ceaff5e1a29a56ad5444ca38b38b38bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.441783 kubelet[2713]: E0909 05:42:25.441678 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1988bdcbca2553a8a03dcacc0ff9c4f3ceaff5e1a29a56ad5444ca38b38b38bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649ff699f7-lr4gx" Sep 9 05:42:25.441871 kubelet[2713]: E0909 05:42:25.441698 2713 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1988bdcbca2553a8a03dcacc0ff9c4f3ceaff5e1a29a56ad5444ca38b38b38bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649ff699f7-lr4gx" Sep 9 05:42:25.442274 containerd[1583]: time="2025-09-09T05:42:25.442243868Z" level=error msg="Failed to destroy network for sandbox \"d56d1eff12417638e5c3b76da5127f26a67e0d85e968af8ec94e5ca2785bc00a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.443672 kubelet[2713]: E0909 05:42:25.441887 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-649ff699f7-lr4gx_calico-apiserver(7e528bc2-eb93-4f51-aa36-c14d71a5e78c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-649ff699f7-lr4gx_calico-apiserver(7e528bc2-eb93-4f51-aa36-c14d71a5e78c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1988bdcbca2553a8a03dcacc0ff9c4f3ceaff5e1a29a56ad5444ca38b38b38bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-649ff699f7-lr4gx" podUID="7e528bc2-eb93-4f51-aa36-c14d71a5e78c" Sep 9 05:42:25.452406 containerd[1583]: time="2025-09-09T05:42:25.452345371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649ff699f7-l7lvk,Uid:136f3715-9e30-4afc-89b7-d12a62b46ac0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c927cb372b55858dfad957b41a4b8ab5ad779083e944762db4f3e2091eec1159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.452601 kubelet[2713]: E0909 05:42:25.452554 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c927cb372b55858dfad957b41a4b8ab5ad779083e944762db4f3e2091eec1159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.452667 kubelet[2713]: E0909 05:42:25.452615 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c927cb372b55858dfad957b41a4b8ab5ad779083e944762db4f3e2091eec1159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649ff699f7-l7lvk" Sep 9 05:42:25.452667 kubelet[2713]: E0909 05:42:25.452646 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c927cb372b55858dfad957b41a4b8ab5ad779083e944762db4f3e2091eec1159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-649ff699f7-l7lvk" Sep 9 05:42:25.452748 kubelet[2713]: E0909 05:42:25.452720 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-649ff699f7-l7lvk_calico-apiserver(136f3715-9e30-4afc-89b7-d12a62b46ac0)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-649ff699f7-l7lvk_calico-apiserver(136f3715-9e30-4afc-89b7-d12a62b46ac0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c927cb372b55858dfad957b41a4b8ab5ad779083e944762db4f3e2091eec1159\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-649ff699f7-l7lvk" podUID="136f3715-9e30-4afc-89b7-d12a62b46ac0" Sep 9 05:42:25.455436 containerd[1583]: time="2025-09-09T05:42:25.455383868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-587464b7b9-zwrlw,Uid:30dc47f2-f3b7-46db-bf49-78384d89673d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56d1eff12417638e5c3b76da5127f26a67e0d85e968af8ec94e5ca2785bc00a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.455554 kubelet[2713]: E0909 05:42:25.455517 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56d1eff12417638e5c3b76da5127f26a67e0d85e968af8ec94e5ca2785bc00a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:42:25.455589 kubelet[2713]: E0909 05:42:25.455557 2713 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56d1eff12417638e5c3b76da5127f26a67e0d85e968af8ec94e5ca2785bc00a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-587464b7b9-zwrlw" Sep 9 05:42:25.455589 kubelet[2713]: E0909 05:42:25.455578 2713 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56d1eff12417638e5c3b76da5127f26a67e0d85e968af8ec94e5ca2785bc00a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-587464b7b9-zwrlw" Sep 9 05:42:25.455663 kubelet[2713]: E0909 05:42:25.455634 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-587464b7b9-zwrlw_calico-system(30dc47f2-f3b7-46db-bf49-78384d89673d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-587464b7b9-zwrlw_calico-system(30dc47f2-f3b7-46db-bf49-78384d89673d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d56d1eff12417638e5c3b76da5127f26a67e0d85e968af8ec94e5ca2785bc00a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-587464b7b9-zwrlw" podUID="30dc47f2-f3b7-46db-bf49-78384d89673d" Sep 9 05:42:26.157813 systemd[1]: run-netns-cni\x2d10918ccf\x2d2b1b\x2dabbf\x2de7e1\x2dcb705aa0ddb3.mount: Deactivated successfully. 
Sep 9 05:42:27.464812 kubelet[2713]: I0909 05:42:27.464518 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:42:27.465561 kubelet[2713]: E0909 05:42:27.465017 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:28.280442 kubelet[2713]: E0909 05:42:28.280377 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:32.853545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3732441764.mount: Deactivated successfully. Sep 9 05:42:34.453612 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:60316.service - OpenSSH per-connection server daemon (10.0.0.1:60316). Sep 9 05:42:34.813751 containerd[1583]: time="2025-09-09T05:42:34.813612862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:34.817916 containerd[1583]: time="2025-09-09T05:42:34.817769452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 05:42:34.819537 containerd[1583]: time="2025-09-09T05:42:34.819471760Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:34.824457 containerd[1583]: time="2025-09-09T05:42:34.824283490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:34.825450 containerd[1583]: time="2025-09-09T05:42:34.825407693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id 
\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.564542555s" Sep 9 05:42:34.825523 containerd[1583]: time="2025-09-09T05:42:34.825450643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 05:42:34.851251 containerd[1583]: time="2025-09-09T05:42:34.851205002Z" level=info msg="CreateContainer within sandbox \"92653de20d0e85ee531b86e7b71617847f353bec90f58cc2b2ed25f3a7c58cf3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 05:42:34.863319 sshd[3850]: Accepted publickey for core from 10.0.0.1 port 60316 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:42:34.864654 sshd-session[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:42:34.872292 containerd[1583]: time="2025-09-09T05:42:34.872233000Z" level=info msg="Container ed0447416a0f149b85deeab0dc90182f4e5b773c6621c3e3548be69e20beb7eb: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:34.877106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011889909.mount: Deactivated successfully. Sep 9 05:42:34.883386 systemd-logind[1510]: New session 8 of user core. Sep 9 05:42:34.889224 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 9 05:42:34.898094 containerd[1583]: time="2025-09-09T05:42:34.898029288Z" level=info msg="CreateContainer within sandbox \"92653de20d0e85ee531b86e7b71617847f353bec90f58cc2b2ed25f3a7c58cf3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ed0447416a0f149b85deeab0dc90182f4e5b773c6621c3e3548be69e20beb7eb\"" Sep 9 05:42:34.899736 containerd[1583]: time="2025-09-09T05:42:34.899509819Z" level=info msg="StartContainer for \"ed0447416a0f149b85deeab0dc90182f4e5b773c6621c3e3548be69e20beb7eb\"" Sep 9 05:42:34.903908 containerd[1583]: time="2025-09-09T05:42:34.903847098Z" level=info msg="connecting to shim ed0447416a0f149b85deeab0dc90182f4e5b773c6621c3e3548be69e20beb7eb" address="unix:///run/containerd/s/28ad04c6ae4c1b6caaed5700d35bc0d5af84eea082c444d445c1ffe96b8ac77b" protocol=ttrpc version=3 Sep 9 05:42:34.960870 systemd[1]: Started cri-containerd-ed0447416a0f149b85deeab0dc90182f4e5b773c6621c3e3548be69e20beb7eb.scope - libcontainer container ed0447416a0f149b85deeab0dc90182f4e5b773c6621c3e3548be69e20beb7eb. Sep 9 05:42:35.051637 sshd[3855]: Connection closed by 10.0.0.1 port 60316 Sep 9 05:42:35.052909 sshd-session[3850]: pam_unix(sshd:session): session closed for user core Sep 9 05:42:35.058470 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:60316.service: Deactivated successfully. Sep 9 05:42:35.060773 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:42:35.061813 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:42:35.063540 systemd-logind[1510]: Removed session 8. Sep 9 05:42:35.097227 containerd[1583]: time="2025-09-09T05:42:35.097095093Z" level=info msg="StartContainer for \"ed0447416a0f149b85deeab0dc90182f4e5b773c6621c3e3548be69e20beb7eb\" returns successfully" Sep 9 05:42:35.113868 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 05:42:35.113987 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 05:42:35.314414 kubelet[2713]: I0909 05:42:35.314335 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bb656" podStartSLOduration=1.547893867 podStartE2EDuration="26.314305576s" podCreationTimestamp="2025-09-09 05:42:09 +0000 UTC" firstStartedPulling="2025-09-09 05:42:10.06147404 +0000 UTC m=+18.007120005" lastFinishedPulling="2025-09-09 05:42:34.827885749 +0000 UTC m=+42.773531714" observedRunningTime="2025-09-09 05:42:35.313626801 +0000 UTC m=+43.259272766" watchObservedRunningTime="2025-09-09 05:42:35.314305576 +0000 UTC m=+43.259951541" Sep 9 05:42:35.344738 kubelet[2713]: I0909 05:42:35.344671 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30dc47f2-f3b7-46db-bf49-78384d89673d-whisker-backend-key-pair\") pod \"30dc47f2-f3b7-46db-bf49-78384d89673d\" (UID: \"30dc47f2-f3b7-46db-bf49-78384d89673d\") " Sep 9 05:42:35.344738 kubelet[2713]: I0909 05:42:35.344742 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjm7t\" (UniqueName: \"kubernetes.io/projected/30dc47f2-f3b7-46db-bf49-78384d89673d-kube-api-access-fjm7t\") pod \"30dc47f2-f3b7-46db-bf49-78384d89673d\" (UID: \"30dc47f2-f3b7-46db-bf49-78384d89673d\") " Sep 9 05:42:35.344934 kubelet[2713]: I0909 05:42:35.344770 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30dc47f2-f3b7-46db-bf49-78384d89673d-whisker-ca-bundle\") pod \"30dc47f2-f3b7-46db-bf49-78384d89673d\" (UID: \"30dc47f2-f3b7-46db-bf49-78384d89673d\") " Sep 9 05:42:35.346560 kubelet[2713]: I0909 05:42:35.346306 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30dc47f2-f3b7-46db-bf49-78384d89673d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "30dc47f2-f3b7-46db-bf49-78384d89673d" 
(UID: "30dc47f2-f3b7-46db-bf49-78384d89673d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:42:35.351168 kubelet[2713]: I0909 05:42:35.351025 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30dc47f2-f3b7-46db-bf49-78384d89673d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "30dc47f2-f3b7-46db-bf49-78384d89673d" (UID: "30dc47f2-f3b7-46db-bf49-78384d89673d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:42:35.351877 kubelet[2713]: I0909 05:42:35.351827 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30dc47f2-f3b7-46db-bf49-78384d89673d-kube-api-access-fjm7t" (OuterVolumeSpecName: "kube-api-access-fjm7t") pod "30dc47f2-f3b7-46db-bf49-78384d89673d" (UID: "30dc47f2-f3b7-46db-bf49-78384d89673d"). InnerVolumeSpecName "kube-api-access-fjm7t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:42:35.446099 kubelet[2713]: I0909 05:42:35.446002 2713 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30dc47f2-f3b7-46db-bf49-78384d89673d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 05:42:35.446099 kubelet[2713]: I0909 05:42:35.446054 2713 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fjm7t\" (UniqueName: \"kubernetes.io/projected/30dc47f2-f3b7-46db-bf49-78384d89673d-kube-api-access-fjm7t\") on node \"localhost\" DevicePath \"\"" Sep 9 05:42:35.446099 kubelet[2713]: I0909 05:42:35.446067 2713 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30dc47f2-f3b7-46db-bf49-78384d89673d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 05:42:35.606345 systemd[1]: Removed slice kubepods-besteffort-pod30dc47f2_f3b7_46db_bf49_78384d89673d.slice - libcontainer container kubepods-besteffort-pod30dc47f2_f3b7_46db_bf49_78384d89673d.slice. Sep 9 05:42:35.834199 systemd[1]: var-lib-kubelet-pods-30dc47f2\x2df3b7\x2d46db\x2dbf49\x2d78384d89673d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfjm7t.mount: Deactivated successfully. Sep 9 05:42:35.834349 systemd[1]: var-lib-kubelet-pods-30dc47f2\x2df3b7\x2d46db\x2dbf49\x2d78384d89673d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 05:42:35.852729 systemd[1]: Created slice kubepods-besteffort-pod637707a4_50b5_4653_a60b_db78fede4fca.slice - libcontainer container kubepods-besteffort-pod637707a4_50b5_4653_a60b_db78fede4fca.slice. 
Sep 9 05:42:35.949961 kubelet[2713]: I0909 05:42:35.949896 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/637707a4-50b5-4653-a60b-db78fede4fca-whisker-backend-key-pair\") pod \"whisker-5df97645dd-m42qp\" (UID: \"637707a4-50b5-4653-a60b-db78fede4fca\") " pod="calico-system/whisker-5df97645dd-m42qp" Sep 9 05:42:35.949961 kubelet[2713]: I0909 05:42:35.949956 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/637707a4-50b5-4653-a60b-db78fede4fca-whisker-ca-bundle\") pod \"whisker-5df97645dd-m42qp\" (UID: \"637707a4-50b5-4653-a60b-db78fede4fca\") " pod="calico-system/whisker-5df97645dd-m42qp" Sep 9 05:42:35.949961 kubelet[2713]: I0909 05:42:35.949972 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqq8\" (UniqueName: \"kubernetes.io/projected/637707a4-50b5-4653-a60b-db78fede4fca-kube-api-access-zhqq8\") pod \"whisker-5df97645dd-m42qp\" (UID: \"637707a4-50b5-4653-a60b-db78fede4fca\") " pod="calico-system/whisker-5df97645dd-m42qp" Sep 9 05:42:36.144730 kubelet[2713]: E0909 05:42:36.144664 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:36.145469 containerd[1583]: time="2025-09-09T05:42:36.145198627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6b655cfb-hwt2z,Uid:0845cde2-e85c-4079-a6ed-7f136bca6416,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:36.145469 containerd[1583]: time="2025-09-09T05:42:36.145213655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvh8r,Uid:f13a110c-4310-4f51-b85e-910b01fecb27,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:36.146859 
kubelet[2713]: I0909 05:42:36.146810 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30dc47f2-f3b7-46db-bf49-78384d89673d" path="/var/lib/kubelet/pods/30dc47f2-f3b7-46db-bf49-78384d89673d/volumes" Sep 9 05:42:36.457133 containerd[1583]: time="2025-09-09T05:42:36.457080207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df97645dd-m42qp,Uid:637707a4-50b5-4653-a60b-db78fede4fca,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:37.145594 containerd[1583]: time="2025-09-09T05:42:37.145517359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l2qzh,Uid:27062c35-7731-4470-8714-d5d5e0fdb01b,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:37.386789 systemd-networkd[1491]: calif2b44a3c4fb: Link UP Sep 9 05:42:37.387542 systemd-networkd[1491]: calif2b44a3c4fb: Gained carrier Sep 9 05:42:37.527256 containerd[1583]: 2025-09-09 05:42:36.221 [INFO][3935] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 05:42:37.527256 containerd[1583]: 2025-09-09 05:42:36.246 [INFO][3935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0 calico-kube-controllers-5f6b655cfb- calico-system 0845cde2-e85c-4079-a6ed-7f136bca6416 899 0 2025-09-09 05:42:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f6b655cfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f6b655cfb-hwt2z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif2b44a3c4fb [] [] }} ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Namespace="calico-system" Pod="calico-kube-controllers-5f6b655cfb-hwt2z" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-" Sep 9 05:42:37.527256 containerd[1583]: 2025-09-09 05:42:36.246 [INFO][3935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Namespace="calico-system" Pod="calico-kube-controllers-5f6b655cfb-hwt2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0" Sep 9 05:42:37.527256 containerd[1583]: 2025-09-09 05:42:36.328 [INFO][3964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" HandleID="k8s-pod-network.d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Workload="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0" Sep 9 05:42:37.527611 containerd[1583]: 2025-09-09 05:42:36.329 [INFO][3964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" HandleID="k8s-pod-network.d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Workload="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000412380), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f6b655cfb-hwt2z", "timestamp":"2025-09-09 05:42:36.328887902 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:37.527611 containerd[1583]: 2025-09-09 05:42:36.329 [INFO][3964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:37.527611 containerd[1583]: 2025-09-09 05:42:36.329 [INFO][3964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:42:37.527611 containerd[1583]: 2025-09-09 05:42:36.329 [INFO][3964] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:42:37.527611 containerd[1583]: 2025-09-09 05:42:36.338 [INFO][3964] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" host="localhost" Sep 9 05:42:37.527611 containerd[1583]: 2025-09-09 05:42:36.343 [INFO][3964] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:42:37.527611 containerd[1583]: 2025-09-09 05:42:36.347 [INFO][3964] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:42:37.527611 containerd[1583]: 2025-09-09 05:42:36.348 [INFO][3964] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:37.527611 containerd[1583]: 2025-09-09 05:42:36.350 [INFO][3964] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:37.527611 containerd[1583]: 2025-09-09 05:42:36.350 [INFO][3964] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" host="localhost" Sep 9 05:42:37.527872 containerd[1583]: 2025-09-09 05:42:36.351 [INFO][3964] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5 Sep 9 05:42:37.527872 containerd[1583]: 2025-09-09 05:42:37.076 [INFO][3964] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" host="localhost" Sep 9 05:42:37.527872 containerd[1583]: 2025-09-09 05:42:37.125 [INFO][3964] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" host="localhost" Sep 9 05:42:37.527872 containerd[1583]: 2025-09-09 05:42:37.126 [INFO][3964] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" host="localhost" Sep 9 05:42:37.527872 containerd[1583]: 2025-09-09 05:42:37.126 [INFO][3964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:37.527872 containerd[1583]: 2025-09-09 05:42:37.126 [INFO][3964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" HandleID="k8s-pod-network.d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Workload="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0" Sep 9 05:42:37.531040 containerd[1583]: 2025-09-09 05:42:37.131 [INFO][3935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Namespace="calico-system" Pod="calico-kube-controllers-5f6b655cfb-hwt2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0", GenerateName:"calico-kube-controllers-5f6b655cfb-", Namespace:"calico-system", SelfLink:"", UID:"0845cde2-e85c-4079-a6ed-7f136bca6416", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f6b655cfb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f6b655cfb-hwt2z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2b44a3c4fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:37.531283 containerd[1583]: 2025-09-09 05:42:37.373 [INFO][3935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Namespace="calico-system" Pod="calico-kube-controllers-5f6b655cfb-hwt2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0" Sep 9 05:42:37.531283 containerd[1583]: 2025-09-09 05:42:37.373 [INFO][3935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2b44a3c4fb ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Namespace="calico-system" Pod="calico-kube-controllers-5f6b655cfb-hwt2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0" Sep 9 05:42:37.531283 containerd[1583]: 2025-09-09 05:42:37.388 [INFO][3935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Namespace="calico-system" Pod="calico-kube-controllers-5f6b655cfb-hwt2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0" Sep 9 05:42:37.531452 containerd[1583]: 2025-09-09 
05:42:37.388 [INFO][3935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Namespace="calico-system" Pod="calico-kube-controllers-5f6b655cfb-hwt2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0", GenerateName:"calico-kube-controllers-5f6b655cfb-", Namespace:"calico-system", SelfLink:"", UID:"0845cde2-e85c-4079-a6ed-7f136bca6416", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f6b655cfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5", Pod:"calico-kube-controllers-5f6b655cfb-hwt2z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif2b44a3c4fb", MAC:"42:a6:a4:40:f9:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:37.531673 containerd[1583]: 2025-09-09 
05:42:37.523 [INFO][3935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" Namespace="calico-system" Pod="calico-kube-controllers-5f6b655cfb-hwt2z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f6b655cfb--hwt2z-eth0" Sep 9 05:42:37.603014 systemd-networkd[1491]: calia5582ab84d5: Link UP Sep 9 05:42:37.605063 systemd-networkd[1491]: calia5582ab84d5: Gained carrier Sep 9 05:42:37.628519 containerd[1583]: 2025-09-09 05:42:36.224 [INFO][3948] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 05:42:37.628519 containerd[1583]: 2025-09-09 05:42:36.246 [INFO][3948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0 coredns-674b8bbfcf- kube-system f13a110c-4310-4f51-b85e-910b01fecb27 896 0 2025-09-09 05:41:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-dvh8r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia5582ab84d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvh8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvh8r-" Sep 9 05:42:37.628519 containerd[1583]: 2025-09-09 05:42:36.246 [INFO][3948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvh8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0" Sep 9 05:42:37.628519 containerd[1583]: 2025-09-09 05:42:36.328 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" HandleID="k8s-pod-network.524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Workload="localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0" Sep 9 05:42:37.628952 containerd[1583]: 2025-09-09 05:42:36.329 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" HandleID="k8s-pod-network.524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Workload="localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002897e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-dvh8r", "timestamp":"2025-09-09 05:42:36.328746226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:37.628952 containerd[1583]: 2025-09-09 05:42:36.329 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:37.628952 containerd[1583]: 2025-09-09 05:42:37.126 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:42:37.628952 containerd[1583]: 2025-09-09 05:42:37.127 [INFO][3966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:42:37.628952 containerd[1583]: 2025-09-09 05:42:37.523 [INFO][3966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" host="localhost" Sep 9 05:42:37.628952 containerd[1583]: 2025-09-09 05:42:37.534 [INFO][3966] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:42:37.628952 containerd[1583]: 2025-09-09 05:42:37.540 [INFO][3966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:42:37.628952 containerd[1583]: 2025-09-09 05:42:37.543 [INFO][3966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:37.628952 containerd[1583]: 2025-09-09 05:42:37.545 [INFO][3966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:37.628952 containerd[1583]: 2025-09-09 05:42:37.545 [INFO][3966] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" host="localhost" Sep 9 05:42:37.629227 containerd[1583]: 2025-09-09 05:42:37.547 [INFO][3966] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578 Sep 9 05:42:37.629227 containerd[1583]: 2025-09-09 05:42:37.579 [INFO][3966] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" host="localhost" Sep 9 05:42:37.629227 containerd[1583]: 2025-09-09 05:42:37.588 [INFO][3966] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" host="localhost" Sep 9 05:42:37.629227 containerd[1583]: 2025-09-09 05:42:37.589 [INFO][3966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" host="localhost" Sep 9 05:42:37.629227 containerd[1583]: 2025-09-09 05:42:37.589 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:37.629227 containerd[1583]: 2025-09-09 05:42:37.589 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" HandleID="k8s-pod-network.524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Workload="localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0" Sep 9 05:42:37.629349 containerd[1583]: 2025-09-09 05:42:37.595 [INFO][3948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvh8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f13a110c-4310-4f51-b85e-910b01fecb27", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 41, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-dvh8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5582ab84d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:37.629422 containerd[1583]: 2025-09-09 05:42:37.595 [INFO][3948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvh8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0" Sep 9 05:42:37.629422 containerd[1583]: 2025-09-09 05:42:37.595 [INFO][3948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5582ab84d5 ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvh8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0" Sep 9 05:42:37.629422 containerd[1583]: 2025-09-09 05:42:37.604 [INFO][3948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvh8r" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0" Sep 9 05:42:37.629493 containerd[1583]: 2025-09-09 05:42:37.605 [INFO][3948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvh8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f13a110c-4310-4f51-b85e-910b01fecb27", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 41, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578", Pod:"coredns-674b8bbfcf-dvh8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5582ab84d5", MAC:"ea:70:bb:75:f1:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:37.629493 containerd[1583]: 2025-09-09 05:42:37.622 [INFO][3948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvh8r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvh8r-eth0" Sep 9 05:42:37.790763 containerd[1583]: time="2025-09-09T05:42:37.789443433Z" level=info msg="connecting to shim 524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578" address="unix:///run/containerd/s/595bf9ae79794396f113016a63c1ffac120be334a0e3a9ef24b64437a2a6d762" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:37.791154 containerd[1583]: time="2025-09-09T05:42:37.791126744Z" level=info msg="connecting to shim d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5" address="unix:///run/containerd/s/a5a66ae12465ff6e4ac8d6867b45aabffba2637ef1bfe1fa16e582f28a79ee18" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:37.847016 systemd[1]: Started cri-containerd-524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578.scope - libcontainer container 524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578. Sep 9 05:42:37.853479 systemd[1]: Started cri-containerd-d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5.scope - libcontainer container d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5. 
Sep 9 05:42:37.871657 systemd-networkd[1491]: calic46b5677ddb: Link UP Sep 9 05:42:37.874365 systemd-networkd[1491]: calic46b5677ddb: Gained carrier Sep 9 05:42:37.877038 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.716 [INFO][4125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5df97645dd--m42qp-eth0 whisker-5df97645dd- calico-system 637707a4-50b5-4653-a60b-db78fede4fca 1018 0 2025-09-09 05:42:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5df97645dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5df97645dd-m42qp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic46b5677ddb [] [] }} ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Namespace="calico-system" Pod="whisker-5df97645dd-m42qp" WorkloadEndpoint="localhost-k8s-whisker--5df97645dd--m42qp-" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.719 [INFO][4125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Namespace="calico-system" Pod="whisker-5df97645dd-m42qp" WorkloadEndpoint="localhost-k8s-whisker--5df97645dd--m42qp-eth0" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.782 [INFO][4170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" HandleID="k8s-pod-network.bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Workload="localhost-k8s-whisker--5df97645dd--m42qp-eth0" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.782 [INFO][4170] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" HandleID="k8s-pod-network.bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Workload="localhost-k8s-whisker--5df97645dd--m42qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d5940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5df97645dd-m42qp", "timestamp":"2025-09-09 05:42:37.781997587 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.782 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.782 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.782 [INFO][4170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.810 [INFO][4170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" host="localhost" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.817 [INFO][4170] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.824 [INFO][4170] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.826 [INFO][4170] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.832 [INFO][4170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.834 [INFO][4170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" host="localhost" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.836 [INFO][4170] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.848 [INFO][4170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" host="localhost" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.859 [INFO][4170] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" host="localhost" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.859 [INFO][4170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" host="localhost" Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.859 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:42:37.906142 containerd[1583]: 2025-09-09 05:42:37.859 [INFO][4170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" HandleID="k8s-pod-network.bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Workload="localhost-k8s-whisker--5df97645dd--m42qp-eth0" Sep 9 05:42:37.907264 containerd[1583]: 2025-09-09 05:42:37.864 [INFO][4125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Namespace="calico-system" Pod="whisker-5df97645dd-m42qp" WorkloadEndpoint="localhost-k8s-whisker--5df97645dd--m42qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5df97645dd--m42qp-eth0", GenerateName:"whisker-5df97645dd-", Namespace:"calico-system", SelfLink:"", UID:"637707a4-50b5-4653-a60b-db78fede4fca", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5df97645dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5df97645dd-m42qp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic46b5677ddb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:37.907264 containerd[1583]: 2025-09-09 05:42:37.864 [INFO][4125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Namespace="calico-system" Pod="whisker-5df97645dd-m42qp" WorkloadEndpoint="localhost-k8s-whisker--5df97645dd--m42qp-eth0" Sep 9 05:42:37.907264 containerd[1583]: 2025-09-09 05:42:37.864 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic46b5677ddb ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Namespace="calico-system" Pod="whisker-5df97645dd-m42qp" WorkloadEndpoint="localhost-k8s-whisker--5df97645dd--m42qp-eth0" Sep 9 05:42:37.907264 containerd[1583]: 2025-09-09 05:42:37.879 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Namespace="calico-system" Pod="whisker-5df97645dd-m42qp" WorkloadEndpoint="localhost-k8s-whisker--5df97645dd--m42qp-eth0" Sep 9 05:42:37.907264 containerd[1583]: 2025-09-09 05:42:37.881 [INFO][4125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Namespace="calico-system" Pod="whisker-5df97645dd-m42qp" WorkloadEndpoint="localhost-k8s-whisker--5df97645dd--m42qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5df97645dd--m42qp-eth0", GenerateName:"whisker-5df97645dd-", Namespace:"calico-system", SelfLink:"", UID:"637707a4-50b5-4653-a60b-db78fede4fca", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 35, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5df97645dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e", Pod:"whisker-5df97645dd-m42qp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic46b5677ddb", MAC:"da:a7:1f:ad:b9:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:37.907264 containerd[1583]: 2025-09-09 05:42:37.898 [INFO][4125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" Namespace="calico-system" Pod="whisker-5df97645dd-m42qp" WorkloadEndpoint="localhost-k8s-whisker--5df97645dd--m42qp-eth0" Sep 9 05:42:37.919617 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:42:37.951422 containerd[1583]: time="2025-09-09T05:42:37.951364227Z" level=info msg="connecting to shim bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e" address="unix:///run/containerd/s/770424b5b469b5d54a5d0f21cbbc1762f435fca8bfedb10dc9ed84affedad6cc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:37.976534 containerd[1583]: time="2025-09-09T05:42:37.976444315Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-dvh8r,Uid:f13a110c-4310-4f51-b85e-910b01fecb27,Namespace:kube-system,Attempt:0,} returns sandbox id \"524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578\"" Sep 9 05:42:37.977792 kubelet[2713]: E0909 05:42:37.977771 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:37.985885 containerd[1583]: time="2025-09-09T05:42:37.985748952Z" level=info msg="CreateContainer within sandbox \"524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:42:38.003592 containerd[1583]: time="2025-09-09T05:42:38.003303204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6b655cfb-hwt2z,Uid:0845cde2-e85c-4079-a6ed-7f136bca6416,Namespace:calico-system,Attempt:0,} returns sandbox id \"d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5\"" Sep 9 05:42:38.005530 containerd[1583]: time="2025-09-09T05:42:38.005484642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 05:42:38.005889 systemd[1]: Started cri-containerd-bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e.scope - libcontainer container bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e. 
Sep 9 05:42:38.010598 systemd-networkd[1491]: cali57a44be3448: Link UP Sep 9 05:42:38.013316 systemd-networkd[1491]: cali57a44be3448: Gained carrier Sep 9 05:42:38.025663 containerd[1583]: time="2025-09-09T05:42:38.025024540Z" level=info msg="Container f653583fb3f934771762de3664a4c7c5b54f112e680bd7b9396b7fe70c983e5e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:38.033823 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.712 [INFO][4124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--l2qzh-eth0 csi-node-driver- calico-system 27062c35-7731-4470-8714-d5d5e0fdb01b 767 0 2025-09-09 05:42:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-l2qzh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali57a44be3448 [] [] }} ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" Namespace="calico-system" Pod="csi-node-driver-l2qzh" WorkloadEndpoint="localhost-k8s-csi--node--driver--l2qzh-" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.712 [INFO][4124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" Namespace="calico-system" Pod="csi-node-driver-l2qzh" WorkloadEndpoint="localhost-k8s-csi--node--driver--l2qzh-eth0" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.790 [INFO][4157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" HandleID="k8s-pod-network.bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" Workload="localhost-k8s-csi--node--driver--l2qzh-eth0" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.790 [INFO][4157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" HandleID="k8s-pod-network.bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" Workload="localhost-k8s-csi--node--driver--l2qzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bed10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-l2qzh", "timestamp":"2025-09-09 05:42:37.790049701 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.790 [INFO][4157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.859 [INFO][4157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.859 [INFO][4157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.904 [INFO][4157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" host="localhost" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.922 [INFO][4157] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.932 [INFO][4157] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.936 [INFO][4157] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.940 [INFO][4157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.940 [INFO][4157] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" host="localhost" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.943 [INFO][4157] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.954 [INFO][4157] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" host="localhost" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.984 [INFO][4157] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" host="localhost" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.987 [INFO][4157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" host="localhost" Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.988 [INFO][4157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:38.034137 containerd[1583]: 2025-09-09 05:42:37.988 [INFO][4157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" HandleID="k8s-pod-network.bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" Workload="localhost-k8s-csi--node--driver--l2qzh-eth0" Sep 9 05:42:38.034940 containerd[1583]: 2025-09-09 05:42:38.002 [INFO][4124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" Namespace="calico-system" Pod="csi-node-driver-l2qzh" WorkloadEndpoint="localhost-k8s-csi--node--driver--l2qzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l2qzh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27062c35-7731-4470-8714-d5d5e0fdb01b", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-l2qzh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57a44be3448", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:38.034940 containerd[1583]: 2025-09-09 05:42:38.003 [INFO][4124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" Namespace="calico-system" Pod="csi-node-driver-l2qzh" WorkloadEndpoint="localhost-k8s-csi--node--driver--l2qzh-eth0" Sep 9 05:42:38.034940 containerd[1583]: 2025-09-09 05:42:38.003 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57a44be3448 ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" Namespace="calico-system" Pod="csi-node-driver-l2qzh" WorkloadEndpoint="localhost-k8s-csi--node--driver--l2qzh-eth0" Sep 9 05:42:38.034940 containerd[1583]: 2025-09-09 05:42:38.015 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" Namespace="calico-system" Pod="csi-node-driver-l2qzh" WorkloadEndpoint="localhost-k8s-csi--node--driver--l2qzh-eth0" Sep 9 05:42:38.034940 containerd[1583]: 2025-09-09 05:42:38.016 [INFO][4124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" 
Namespace="calico-system" Pod="csi-node-driver-l2qzh" WorkloadEndpoint="localhost-k8s-csi--node--driver--l2qzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--l2qzh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27062c35-7731-4470-8714-d5d5e0fdb01b", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff", Pod:"csi-node-driver-l2qzh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali57a44be3448", MAC:"8a:49:d4:90:ca:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:38.034940 containerd[1583]: 2025-09-09 05:42:38.030 [INFO][4124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" Namespace="calico-system" Pod="csi-node-driver-l2qzh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--l2qzh-eth0" Sep 9 05:42:38.035898 containerd[1583]: time="2025-09-09T05:42:38.035850332Z" level=info msg="CreateContainer within sandbox \"524e25b3f14586d16718e5902e40f3d690c16762e8a108a8c6be4b64b40c2578\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f653583fb3f934771762de3664a4c7c5b54f112e680bd7b9396b7fe70c983e5e\"" Sep 9 05:42:38.038771 containerd[1583]: time="2025-09-09T05:42:38.036763948Z" level=info msg="StartContainer for \"f653583fb3f934771762de3664a4c7c5b54f112e680bd7b9396b7fe70c983e5e\"" Sep 9 05:42:38.038771 containerd[1583]: time="2025-09-09T05:42:38.038188483Z" level=info msg="connecting to shim f653583fb3f934771762de3664a4c7c5b54f112e680bd7b9396b7fe70c983e5e" address="unix:///run/containerd/s/595bf9ae79794396f113016a63c1ffac120be334a0e3a9ef24b64437a2a6d762" protocol=ttrpc version=3 Sep 9 05:42:38.061425 systemd[1]: Started cri-containerd-f653583fb3f934771762de3664a4c7c5b54f112e680bd7b9396b7fe70c983e5e.scope - libcontainer container f653583fb3f934771762de3664a4c7c5b54f112e680bd7b9396b7fe70c983e5e. Sep 9 05:42:38.091685 containerd[1583]: time="2025-09-09T05:42:38.091579739Z" level=info msg="connecting to shim bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff" address="unix:///run/containerd/s/06c6500fe74ca0b53c2726fa07c82feb43b3cf0af7a7157c45aab90c9600e62a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:38.093849 containerd[1583]: time="2025-09-09T05:42:38.093176999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df97645dd-m42qp,Uid:637707a4-50b5-4653-a60b-db78fede4fca,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e\"" Sep 9 05:42:38.132944 systemd[1]: Started cri-containerd-bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff.scope - libcontainer container bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff. 
Sep 9 05:42:38.146419 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:42:38.239348 systemd-networkd[1491]: vxlan.calico: Link UP Sep 9 05:42:38.239586 systemd-networkd[1491]: vxlan.calico: Gained carrier Sep 9 05:42:38.294520 containerd[1583]: time="2025-09-09T05:42:38.294460948Z" level=info msg="StartContainer for \"f653583fb3f934771762de3664a4c7c5b54f112e680bd7b9396b7fe70c983e5e\" returns successfully" Sep 9 05:42:38.314840 kubelet[2713]: E0909 05:42:38.314287 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:38.430901 systemd-networkd[1491]: calif2b44a3c4fb: Gained IPv6LL Sep 9 05:42:38.475419 containerd[1583]: time="2025-09-09T05:42:38.475384707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l2qzh,Uid:27062c35-7731-4470-8714-d5d5e0fdb01b,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff\"" Sep 9 05:42:38.780857 kubelet[2713]: I0909 05:42:38.780791 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dvh8r" podStartSLOduration=41.780773999 podStartE2EDuration="41.780773999s" podCreationTimestamp="2025-09-09 05:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:38.780495095 +0000 UTC m=+46.726141060" watchObservedRunningTime="2025-09-09 05:42:38.780773999 +0000 UTC m=+46.726419964" Sep 9 05:42:38.942966 systemd-networkd[1491]: calic46b5677ddb: Gained IPv6LL Sep 9 05:42:39.145396 containerd[1583]: time="2025-09-09T05:42:39.145213544Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-649ff699f7-l7lvk,Uid:136f3715-9e30-4afc-89b7-d12a62b46ac0,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:42:39.261228 systemd-networkd[1491]: cali8bb5090c365: Link UP Sep 9 05:42:39.261474 systemd-networkd[1491]: cali8bb5090c365: Gained carrier Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.183 [INFO][4487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0 calico-apiserver-649ff699f7- calico-apiserver 136f3715-9e30-4afc-89b7-d12a62b46ac0 903 0 2025-09-09 05:42:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:649ff699f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-649ff699f7-l7lvk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8bb5090c365 [] [] }} ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-l7lvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.183 [INFO][4487] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-l7lvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.211 [INFO][4501] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" HandleID="k8s-pod-network.9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" 
Workload="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.211 [INFO][4501] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" HandleID="k8s-pod-network.9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" Workload="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-649ff699f7-l7lvk", "timestamp":"2025-09-09 05:42:39.21129841 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.211 [INFO][4501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.211 [INFO][4501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.211 [INFO][4501] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.218 [INFO][4501] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" host="localhost" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.223 [INFO][4501] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.227 [INFO][4501] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.229 [INFO][4501] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.231 [INFO][4501] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.232 [INFO][4501] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" host="localhost" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.235 [INFO][4501] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.247 [INFO][4501] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" host="localhost" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.254 [INFO][4501] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" host="localhost" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.255 [INFO][4501] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" host="localhost" Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.255 [INFO][4501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:39.277047 containerd[1583]: 2025-09-09 05:42:39.255 [INFO][4501] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" HandleID="k8s-pod-network.9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" Workload="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0" Sep 9 05:42:39.278068 containerd[1583]: 2025-09-09 05:42:39.258 [INFO][4487] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-l7lvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0", GenerateName:"calico-apiserver-649ff699f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"136f3715-9e30-4afc-89b7-d12a62b46ac0", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649ff699f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-649ff699f7-l7lvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bb5090c365", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:39.278068 containerd[1583]: 2025-09-09 05:42:39.258 [INFO][4487] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-l7lvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0" Sep 9 05:42:39.278068 containerd[1583]: 2025-09-09 05:42:39.258 [INFO][4487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bb5090c365 ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-l7lvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0" Sep 9 05:42:39.278068 containerd[1583]: 2025-09-09 05:42:39.261 [INFO][4487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-l7lvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0" Sep 9 05:42:39.278068 containerd[1583]: 2025-09-09 05:42:39.263 [INFO][4487] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-l7lvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0", GenerateName:"calico-apiserver-649ff699f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"136f3715-9e30-4afc-89b7-d12a62b46ac0", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649ff699f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda", Pod:"calico-apiserver-649ff699f7-l7lvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8bb5090c365", MAC:"12:5c:70:ee:69:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:39.278068 containerd[1583]: 2025-09-09 05:42:39.272 [INFO][4487] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-l7lvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--l7lvk-eth0" Sep 9 05:42:39.315610 kubelet[2713]: E0909 05:42:39.315579 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:39.324646 containerd[1583]: time="2025-09-09T05:42:39.324578659Z" level=info msg="connecting to shim 9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda" address="unix:///run/containerd/s/ada9d31e90268ab1d118f4eb792ebfb77c2b1670e4aa271f12f52426bdd06196" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:39.364118 systemd[1]: Started cri-containerd-9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda.scope - libcontainer container 9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda. Sep 9 05:42:39.413496 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:42:39.451328 containerd[1583]: time="2025-09-09T05:42:39.451279010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649ff699f7-l7lvk,Uid:136f3715-9e30-4afc-89b7-d12a62b46ac0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda\"" Sep 9 05:42:39.456000 systemd-networkd[1491]: calia5582ab84d5: Gained IPv6LL Sep 9 05:42:39.457432 systemd-networkd[1491]: vxlan.calico: Gained IPv6LL Sep 9 05:42:40.031402 systemd-networkd[1491]: cali57a44be3448: Gained IPv6LL Sep 9 05:42:40.070422 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:39978.service - OpenSSH per-connection server daemon (10.0.0.1:39978). 
Sep 9 05:42:40.146956 kubelet[2713]: E0909 05:42:40.146919 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:40.148800 containerd[1583]: time="2025-09-09T05:42:40.148526026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649ff699f7-lr4gx,Uid:7e528bc2-eb93-4f51-aa36-c14d71a5e78c,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:42:40.148800 containerd[1583]: time="2025-09-09T05:42:40.148653687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6vtv8,Uid:b4934adc-622b-464a-865d-a0a030e69140,Namespace:calico-system,Attempt:0,}" Sep 9 05:42:40.148800 containerd[1583]: time="2025-09-09T05:42:40.148694844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ksb48,Uid:39fc3c9d-1597-46bb-be3b-930a1925c3d7,Namespace:kube-system,Attempt:0,}" Sep 9 05:42:40.150889 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 39978 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:42:40.152917 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:42:40.175225 systemd-logind[1510]: New session 9 of user core. Sep 9 05:42:40.182006 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 9 05:42:40.327756 kubelet[2713]: E0909 05:42:40.327150 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:40.340752 systemd-networkd[1491]: cali3cc5277a3b5: Link UP Sep 9 05:42:40.341039 systemd-networkd[1491]: cali3cc5277a3b5: Gained carrier Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.207 [INFO][4569] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0 calico-apiserver-649ff699f7- calico-apiserver 7e528bc2-eb93-4f51-aa36-c14d71a5e78c 902 0 2025-09-09 05:42:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:649ff699f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-649ff699f7-lr4gx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3cc5277a3b5 [] [] }} ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-lr4gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.207 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-lr4gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.252 [INFO][4615] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" 
HandleID="k8s-pod-network.941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Workload="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.252 [INFO][4615] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" HandleID="k8s-pod-network.941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Workload="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-649ff699f7-lr4gx", "timestamp":"2025-09-09 05:42:40.252634265 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.253 [INFO][4615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.253 [INFO][4615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.253 [INFO][4615] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.265 [INFO][4615] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" host="localhost" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.272 [INFO][4615] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.278 [INFO][4615] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.283 [INFO][4615] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.286 [INFO][4615] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.287 [INFO][4615] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" host="localhost" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.292 [INFO][4615] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344 Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.305 [INFO][4615] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" host="localhost" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.318 [INFO][4615] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" host="localhost" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.318 [INFO][4615] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" host="localhost" Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.320 [INFO][4615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:40.363215 containerd[1583]: 2025-09-09 05:42:40.320 [INFO][4615] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" HandleID="k8s-pod-network.941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Workload="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0" Sep 9 05:42:40.364565 containerd[1583]: 2025-09-09 05:42:40.328 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-lr4gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0", GenerateName:"calico-apiserver-649ff699f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e528bc2-eb93-4f51-aa36-c14d71a5e78c", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649ff699f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-649ff699f7-lr4gx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cc5277a3b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:40.364565 containerd[1583]: 2025-09-09 05:42:40.328 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-lr4gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0" Sep 9 05:42:40.364565 containerd[1583]: 2025-09-09 05:42:40.328 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cc5277a3b5 ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-lr4gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0" Sep 9 05:42:40.364565 containerd[1583]: 2025-09-09 05:42:40.342 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-lr4gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0" Sep 9 05:42:40.364565 containerd[1583]: 2025-09-09 05:42:40.343 [INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-lr4gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0", GenerateName:"calico-apiserver-649ff699f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e528bc2-eb93-4f51-aa36-c14d71a5e78c", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"649ff699f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344", Pod:"calico-apiserver-649ff699f7-lr4gx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cc5277a3b5", MAC:"a2:f9:c0:1e:bb:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:40.364565 containerd[1583]: 2025-09-09 05:42:40.358 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" Namespace="calico-apiserver" Pod="calico-apiserver-649ff699f7-lr4gx" WorkloadEndpoint="localhost-k8s-calico--apiserver--649ff699f7--lr4gx-eth0" Sep 9 05:42:40.397524 sshd[4611]: Connection closed by 10.0.0.1 port 39978 Sep 9 05:42:40.398521 sshd-session[4566]: pam_unix(sshd:session): session closed for user core Sep 9 05:42:40.406347 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:39978.service: Deactivated successfully. Sep 9 05:42:40.410729 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:42:40.413533 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Sep 9 05:42:40.426700 systemd-logind[1510]: Removed session 9. Sep 9 05:42:40.430664 containerd[1583]: time="2025-09-09T05:42:40.430619003Z" level=info msg="connecting to shim 941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344" address="unix:///run/containerd/s/a7c8f6cfd146003ebbe879bfa2225a73aec8478ef708528ae538f9d58c00bba8" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:40.467969 systemd[1]: Started cri-containerd-941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344.scope - libcontainer container 941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344. 
Sep 9 05:42:40.479096 systemd-networkd[1491]: cali8bb5090c365: Gained IPv6LL Sep 9 05:42:40.484088 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:42:40.490898 systemd-networkd[1491]: cali9e06bc835e8: Link UP Sep 9 05:42:40.491823 systemd-networkd[1491]: cali9e06bc835e8: Gained carrier Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.265 [INFO][4579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--ksb48-eth0 coredns-674b8bbfcf- kube-system 39fc3c9d-1597-46bb-be3b-930a1925c3d7 893 0 2025-09-09 05:41:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-ksb48 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e06bc835e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Namespace="kube-system" Pod="coredns-674b8bbfcf-ksb48" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ksb48-" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.266 [INFO][4579] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Namespace="kube-system" Pod="coredns-674b8bbfcf-ksb48" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ksb48-eth0" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.322 [INFO][4643] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" HandleID="k8s-pod-network.7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Workload="localhost-k8s-coredns--674b8bbfcf--ksb48-eth0" Sep 9 05:42:40.523375 
containerd[1583]: 2025-09-09 05:42:40.322 [INFO][4643] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" HandleID="k8s-pod-network.7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Workload="localhost-k8s-coredns--674b8bbfcf--ksb48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050ae60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-ksb48", "timestamp":"2025-09-09 05:42:40.322319864 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.322 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.322 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.322 [INFO][4643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.367 [INFO][4643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" host="localhost" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.391 [INFO][4643] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.399 [INFO][4643] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.405 [INFO][4643] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.417 [INFO][4643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.417 [INFO][4643] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" host="localhost" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.424 [INFO][4643] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48 Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.441 [INFO][4643] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" host="localhost" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.481 [INFO][4643] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" host="localhost" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.481 [INFO][4643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" host="localhost" Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.481 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:40.523375 containerd[1583]: 2025-09-09 05:42:40.481 [INFO][4643] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" HandleID="k8s-pod-network.7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Workload="localhost-k8s-coredns--674b8bbfcf--ksb48-eth0" Sep 9 05:42:40.525221 containerd[1583]: 2025-09-09 05:42:40.485 [INFO][4579] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Namespace="kube-system" Pod="coredns-674b8bbfcf-ksb48" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ksb48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ksb48-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"39fc3c9d-1597-46bb-be3b-930a1925c3d7", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 41, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-ksb48", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e06bc835e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:40.525221 containerd[1583]: 2025-09-09 05:42:40.485 [INFO][4579] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Namespace="kube-system" Pod="coredns-674b8bbfcf-ksb48" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ksb48-eth0" Sep 9 05:42:40.525221 containerd[1583]: 2025-09-09 05:42:40.485 [INFO][4579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e06bc835e8 ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Namespace="kube-system" Pod="coredns-674b8bbfcf-ksb48" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ksb48-eth0" Sep 9 05:42:40.525221 containerd[1583]: 2025-09-09 05:42:40.493 [INFO][4579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Namespace="kube-system" Pod="coredns-674b8bbfcf-ksb48" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ksb48-eth0" Sep 9 05:42:40.525221 containerd[1583]: 2025-09-09 05:42:40.497 [INFO][4579] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Namespace="kube-system" Pod="coredns-674b8bbfcf-ksb48" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ksb48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--ksb48-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"39fc3c9d-1597-46bb-be3b-930a1925c3d7", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 41, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48", Pod:"coredns-674b8bbfcf-ksb48", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e06bc835e8", MAC:"b6:90:b4:16:97:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:40.525221 containerd[1583]: 2025-09-09 05:42:40.515 [INFO][4579] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" Namespace="kube-system" Pod="coredns-674b8bbfcf-ksb48" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--ksb48-eth0" Sep 9 05:42:40.719350 systemd-networkd[1491]: cali2b91f79bd25: Link UP Sep 9 05:42:40.720227 systemd-networkd[1491]: cali2b91f79bd25: Gained carrier Sep 9 05:42:40.740253 containerd[1583]: time="2025-09-09T05:42:40.740070403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-649ff699f7-lr4gx,Uid:7e528bc2-eb93-4f51-aa36-c14d71a5e78c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344\"" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.233 [INFO][4593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--6vtv8-eth0 goldmane-54d579b49d- calico-system b4934adc-622b-464a-865d-a0a030e69140 901 0 2025-09-09 05:42:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-6vtv8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2b91f79bd25 [] [] }} ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Namespace="calico-system" Pod="goldmane-54d579b49d-6vtv8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6vtv8-" Sep 9 
05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.234 [INFO][4593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Namespace="calico-system" Pod="goldmane-54d579b49d-6vtv8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6vtv8-eth0" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.335 [INFO][4630] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" HandleID="k8s-pod-network.49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Workload="localhost-k8s-goldmane--54d579b49d--6vtv8-eth0" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.335 [INFO][4630] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" HandleID="k8s-pod-network.49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Workload="localhost-k8s-goldmane--54d579b49d--6vtv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000482b50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-6vtv8", "timestamp":"2025-09-09 05:42:40.335125461 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.335 [INFO][4630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.481 [INFO][4630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.481 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.490 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" host="localhost" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.497 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.514 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.519 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.523 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.524 [INFO][4630] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" host="localhost" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.526 [INFO][4630] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401 Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.643 [INFO][4630] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" host="localhost" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.712 [INFO][4630] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" host="localhost" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.712 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" host="localhost" Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.712 [INFO][4630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:42:40.750089 containerd[1583]: 2025-09-09 05:42:40.712 [INFO][4630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" HandleID="k8s-pod-network.49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Workload="localhost-k8s-goldmane--54d579b49d--6vtv8-eth0" Sep 9 05:42:40.750780 containerd[1583]: 2025-09-09 05:42:40.716 [INFO][4593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Namespace="calico-system" Pod="goldmane-54d579b49d-6vtv8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6vtv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--6vtv8-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b4934adc-622b-464a-865d-a0a030e69140", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-6vtv8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2b91f79bd25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:40.750780 containerd[1583]: 2025-09-09 05:42:40.716 [INFO][4593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Namespace="calico-system" Pod="goldmane-54d579b49d-6vtv8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6vtv8-eth0" Sep 9 05:42:40.750780 containerd[1583]: 2025-09-09 05:42:40.716 [INFO][4593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b91f79bd25 ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Namespace="calico-system" Pod="goldmane-54d579b49d-6vtv8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6vtv8-eth0" Sep 9 05:42:40.750780 containerd[1583]: 2025-09-09 05:42:40.720 [INFO][4593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Namespace="calico-system" Pod="goldmane-54d579b49d-6vtv8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6vtv8-eth0" Sep 9 05:42:40.750780 containerd[1583]: 2025-09-09 05:42:40.720 [INFO][4593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Namespace="calico-system" Pod="goldmane-54d579b49d-6vtv8" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6vtv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--6vtv8-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b4934adc-622b-464a-865d-a0a030e69140", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 42, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401", Pod:"goldmane-54d579b49d-6vtv8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2b91f79bd25", MAC:"8a:34:0e:ef:b2:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:42:40.750780 containerd[1583]: 2025-09-09 05:42:40.745 [INFO][4593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" Namespace="calico-system" Pod="goldmane-54d579b49d-6vtv8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--6vtv8-eth0" Sep 9 05:42:40.806488 containerd[1583]: time="2025-09-09T05:42:40.806436038Z" level=info msg="connecting to shim 
7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48" address="unix:///run/containerd/s/873b480a2f11c6cb9ba9d8c0c9fc42a934679efc770ff6343de470d59cf0953f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:40.818763 containerd[1583]: time="2025-09-09T05:42:40.818542381Z" level=info msg="connecting to shim 49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401" address="unix:///run/containerd/s/e3ae8ee85afc1ad2074af49af8eb887ec8a5db30cdf4bcc8671caef2ba4cdbe5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:42:40.839269 systemd[1]: Started cri-containerd-7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48.scope - libcontainer container 7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48. Sep 9 05:42:40.856977 systemd[1]: Started cri-containerd-49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401.scope - libcontainer container 49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401. Sep 9 05:42:40.864360 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:42:40.890864 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:42:40.905071 containerd[1583]: time="2025-09-09T05:42:40.905004403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ksb48,Uid:39fc3c9d-1597-46bb-be3b-930a1925c3d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48\"" Sep 9 05:42:40.906386 kubelet[2713]: E0909 05:42:40.906338 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:40.916032 containerd[1583]: time="2025-09-09T05:42:40.915969103Z" level=info msg="CreateContainer within sandbox 
\"7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:42:40.929271 containerd[1583]: time="2025-09-09T05:42:40.928817851Z" level=info msg="Container 19e06242c8d7c027834a6687cf64dd2d3df80d4f7573cbf39581ba957537010f: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:40.941496 containerd[1583]: time="2025-09-09T05:42:40.941160859Z" level=info msg="CreateContainer within sandbox \"7ffc034d3572989844e2366b277f3d88cad6ee94ad30da3cdd81b24c97bd8d48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19e06242c8d7c027834a6687cf64dd2d3df80d4f7573cbf39581ba957537010f\"" Sep 9 05:42:40.942276 containerd[1583]: time="2025-09-09T05:42:40.942231910Z" level=info msg="StartContainer for \"19e06242c8d7c027834a6687cf64dd2d3df80d4f7573cbf39581ba957537010f\"" Sep 9 05:42:40.943742 containerd[1583]: time="2025-09-09T05:42:40.943440721Z" level=info msg="connecting to shim 19e06242c8d7c027834a6687cf64dd2d3df80d4f7573cbf39581ba957537010f" address="unix:///run/containerd/s/873b480a2f11c6cb9ba9d8c0c9fc42a934679efc770ff6343de470d59cf0953f" protocol=ttrpc version=3 Sep 9 05:42:40.947649 containerd[1583]: time="2025-09-09T05:42:40.947616673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6vtv8,Uid:b4934adc-622b-464a-865d-a0a030e69140,Namespace:calico-system,Attempt:0,} returns sandbox id \"49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401\"" Sep 9 05:42:40.968123 systemd[1]: Started cri-containerd-19e06242c8d7c027834a6687cf64dd2d3df80d4f7573cbf39581ba957537010f.scope - libcontainer container 19e06242c8d7c027834a6687cf64dd2d3df80d4f7573cbf39581ba957537010f. 
Sep 9 05:42:41.128491 containerd[1583]: time="2025-09-09T05:42:41.128329205Z" level=info msg="StartContainer for \"19e06242c8d7c027834a6687cf64dd2d3df80d4f7573cbf39581ba957537010f\" returns successfully" Sep 9 05:42:41.334145 kubelet[2713]: E0909 05:42:41.333621 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:41.335762 kubelet[2713]: E0909 05:42:41.334244 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:41.649736 kubelet[2713]: I0909 05:42:41.649587 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ksb48" podStartSLOduration=44.649567579 podStartE2EDuration="44.649567579s" podCreationTimestamp="2025-09-09 05:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:42:41.649220688 +0000 UTC m=+49.594866773" watchObservedRunningTime="2025-09-09 05:42:41.649567579 +0000 UTC m=+49.595213544" Sep 9 05:42:41.694909 systemd-networkd[1491]: cali3cc5277a3b5: Gained IPv6LL Sep 9 05:42:42.080102 systemd-networkd[1491]: cali2b91f79bd25: Gained IPv6LL Sep 9 05:42:42.246536 containerd[1583]: time="2025-09-09T05:42:42.246455110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:42.257840 containerd[1583]: time="2025-09-09T05:42:42.257780253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 05:42:42.285939 containerd[1583]: time="2025-09-09T05:42:42.285852842Z" level=info msg="ImageCreate event 
name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:42.291922 containerd[1583]: time="2025-09-09T05:42:42.291835415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:42.292538 containerd[1583]: time="2025-09-09T05:42:42.292452714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.286905054s" Sep 9 05:42:42.292538 containerd[1583]: time="2025-09-09T05:42:42.292483542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 05:42:42.297203 containerd[1583]: time="2025-09-09T05:42:42.296735345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 05:42:42.369286 containerd[1583]: time="2025-09-09T05:42:42.369142556Z" level=info msg="CreateContainer within sandbox \"d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 05:42:42.373207 kubelet[2713]: E0909 05:42:42.373171 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:42.430980 containerd[1583]: time="2025-09-09T05:42:42.430910921Z" level=info msg="Container 3b813b27c39a94520f1091563aafffc8376822553ae4d6bfd1e36525096f4df6: CDI 
devices from CRI Config.CDIDevices: []" Sep 9 05:42:42.447577 containerd[1583]: time="2025-09-09T05:42:42.447528642Z" level=info msg="CreateContainer within sandbox \"d44d4c424bc49dc53453425213b83aa6b952ed43048f0ed96cd70cf5a923f4f5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3b813b27c39a94520f1091563aafffc8376822553ae4d6bfd1e36525096f4df6\"" Sep 9 05:42:42.448471 containerd[1583]: time="2025-09-09T05:42:42.448431277Z" level=info msg="StartContainer for \"3b813b27c39a94520f1091563aafffc8376822553ae4d6bfd1e36525096f4df6\"" Sep 9 05:42:42.449778 containerd[1583]: time="2025-09-09T05:42:42.449735275Z" level=info msg="connecting to shim 3b813b27c39a94520f1091563aafffc8376822553ae4d6bfd1e36525096f4df6" address="unix:///run/containerd/s/a5a66ae12465ff6e4ac8d6867b45aabffba2637ef1bfe1fa16e582f28a79ee18" protocol=ttrpc version=3 Sep 9 05:42:42.465769 systemd-networkd[1491]: cali9e06bc835e8: Gained IPv6LL Sep 9 05:42:42.473003 systemd[1]: Started cri-containerd-3b813b27c39a94520f1091563aafffc8376822553ae4d6bfd1e36525096f4df6.scope - libcontainer container 3b813b27c39a94520f1091563aafffc8376822553ae4d6bfd1e36525096f4df6. 
Sep 9 05:42:42.542147 containerd[1583]: time="2025-09-09T05:42:42.542068819Z" level=info msg="StartContainer for \"3b813b27c39a94520f1091563aafffc8376822553ae4d6bfd1e36525096f4df6\" returns successfully" Sep 9 05:42:43.339032 kubelet[2713]: E0909 05:42:43.338960 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:42:43.348531 kubelet[2713]: I0909 05:42:43.348476 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f6b655cfb-hwt2z" podStartSLOduration=30.05904651 podStartE2EDuration="34.348461117s" podCreationTimestamp="2025-09-09 05:42:09 +0000 UTC" firstStartedPulling="2025-09-09 05:42:38.005000943 +0000 UTC m=+45.950646908" lastFinishedPulling="2025-09-09 05:42:42.294415549 +0000 UTC m=+50.240061515" observedRunningTime="2025-09-09 05:42:43.348072268 +0000 UTC m=+51.293718223" watchObservedRunningTime="2025-09-09 05:42:43.348461117 +0000 UTC m=+51.294107082" Sep 9 05:42:43.811321 containerd[1583]: time="2025-09-09T05:42:43.811241373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:43.812274 containerd[1583]: time="2025-09-09T05:42:43.812238154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 05:42:43.813522 containerd[1583]: time="2025-09-09T05:42:43.813484293Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:43.815753 containerd[1583]: time="2025-09-09T05:42:43.815721313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 9 05:42:43.816523 containerd[1583]: time="2025-09-09T05:42:43.816492922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.519704798s" Sep 9 05:42:43.816583 containerd[1583]: time="2025-09-09T05:42:43.816525223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 05:42:43.817536 containerd[1583]: time="2025-09-09T05:42:43.817368786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 05:42:43.821663 containerd[1583]: time="2025-09-09T05:42:43.821610440Z" level=info msg="CreateContainer within sandbox \"bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 05:42:43.830742 containerd[1583]: time="2025-09-09T05:42:43.830664018Z" level=info msg="Container e86f70d664635f0c610ba1afacb23c4c917d884098b180a287c66aaa386e3213: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:43.845434 containerd[1583]: time="2025-09-09T05:42:43.845372803Z" level=info msg="CreateContainer within sandbox \"bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e86f70d664635f0c610ba1afacb23c4c917d884098b180a287c66aaa386e3213\"" Sep 9 05:42:43.846129 containerd[1583]: time="2025-09-09T05:42:43.846098796Z" level=info msg="StartContainer for \"e86f70d664635f0c610ba1afacb23c4c917d884098b180a287c66aaa386e3213\"" Sep 9 05:42:43.849114 containerd[1583]: time="2025-09-09T05:42:43.849063342Z" level=info msg="connecting to shim 
e86f70d664635f0c610ba1afacb23c4c917d884098b180a287c66aaa386e3213" address="unix:///run/containerd/s/770424b5b469b5d54a5d0f21cbbc1762f435fca8bfedb10dc9ed84affedad6cc" protocol=ttrpc version=3 Sep 9 05:42:43.874981 systemd[1]: Started cri-containerd-e86f70d664635f0c610ba1afacb23c4c917d884098b180a287c66aaa386e3213.scope - libcontainer container e86f70d664635f0c610ba1afacb23c4c917d884098b180a287c66aaa386e3213. Sep 9 05:42:43.931012 containerd[1583]: time="2025-09-09T05:42:43.930957366Z" level=info msg="StartContainer for \"e86f70d664635f0c610ba1afacb23c4c917d884098b180a287c66aaa386e3213\" returns successfully" Sep 9 05:42:44.431679 containerd[1583]: time="2025-09-09T05:42:44.431629844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b813b27c39a94520f1091563aafffc8376822553ae4d6bfd1e36525096f4df6\" id:\"9cb6b4c7ee69f489ff4f1301caa82427732099e88d66b5665b835ae41093e826\" pid:4968 exited_at:{seconds:1757396564 nanos:431266071}" Sep 9 05:42:45.418345 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:39992.service - OpenSSH per-connection server daemon (10.0.0.1:39992). Sep 9 05:42:45.572574 sshd[4980]: Accepted publickey for core from 10.0.0.1 port 39992 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:42:45.574905 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:42:45.581592 systemd-logind[1510]: New session 10 of user core. Sep 9 05:42:45.585936 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 9 05:42:45.851828 containerd[1583]: time="2025-09-09T05:42:45.851758957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:45.852768 containerd[1583]: time="2025-09-09T05:42:45.852740860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 05:42:45.855088 containerd[1583]: time="2025-09-09T05:42:45.855006594Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:45.855489 sshd[4987]: Connection closed by 10.0.0.1 port 39992 Sep 9 05:42:45.855966 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Sep 9 05:42:45.857563 containerd[1583]: time="2025-09-09T05:42:45.857533427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:45.858420 containerd[1583]: time="2025-09-09T05:42:45.858256846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.040863484s" Sep 9 05:42:45.858420 containerd[1583]: time="2025-09-09T05:42:45.858292032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 05:42:45.859541 containerd[1583]: time="2025-09-09T05:42:45.859313409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:42:45.862269 systemd-logind[1510]: Session 10 logged 
out. Waiting for processes to exit. Sep 9 05:42:45.862762 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:39992.service: Deactivated successfully. Sep 9 05:42:45.865739 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 05:42:45.868732 containerd[1583]: time="2025-09-09T05:42:45.868163423Z" level=info msg="CreateContainer within sandbox \"bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 05:42:45.868628 systemd-logind[1510]: Removed session 10. Sep 9 05:42:45.915423 containerd[1583]: time="2025-09-09T05:42:45.915371735Z" level=info msg="Container 2477815762d285d668ec308c71b884ab3924e35861048171af77c3620627b260: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:45.925692 containerd[1583]: time="2025-09-09T05:42:45.925629480Z" level=info msg="CreateContainer within sandbox \"bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2477815762d285d668ec308c71b884ab3924e35861048171af77c3620627b260\"" Sep 9 05:42:45.926328 containerd[1583]: time="2025-09-09T05:42:45.926253893Z" level=info msg="StartContainer for \"2477815762d285d668ec308c71b884ab3924e35861048171af77c3620627b260\"" Sep 9 05:42:45.928368 containerd[1583]: time="2025-09-09T05:42:45.928332646Z" level=info msg="connecting to shim 2477815762d285d668ec308c71b884ab3924e35861048171af77c3620627b260" address="unix:///run/containerd/s/06c6500fe74ca0b53c2726fa07c82feb43b3cf0af7a7157c45aab90c9600e62a" protocol=ttrpc version=3 Sep 9 05:42:45.963155 systemd[1]: Started cri-containerd-2477815762d285d668ec308c71b884ab3924e35861048171af77c3620627b260.scope - libcontainer container 2477815762d285d668ec308c71b884ab3924e35861048171af77c3620627b260. 
Sep 9 05:42:46.011295 containerd[1583]: time="2025-09-09T05:42:46.011246114Z" level=info msg="StartContainer for \"2477815762d285d668ec308c71b884ab3924e35861048171af77c3620627b260\" returns successfully" Sep 9 05:42:48.886073 containerd[1583]: time="2025-09-09T05:42:48.885945224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:48.886667 containerd[1583]: time="2025-09-09T05:42:48.886627996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 05:42:48.887922 containerd[1583]: time="2025-09-09T05:42:48.887886918Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:48.890028 containerd[1583]: time="2025-09-09T05:42:48.889989916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:48.890643 containerd[1583]: time="2025-09-09T05:42:48.890609509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.031265914s" Sep 9 05:42:48.890705 containerd[1583]: time="2025-09-09T05:42:48.890644014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 05:42:48.891648 containerd[1583]: time="2025-09-09T05:42:48.891479502Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:42:48.896262 containerd[1583]: time="2025-09-09T05:42:48.896226231Z" level=info msg="CreateContainer within sandbox \"9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:42:48.905208 containerd[1583]: time="2025-09-09T05:42:48.905176230Z" level=info msg="Container 86de8a8e3634e02156b97e132a8adbfa71924f4cbc885cdb434eb29171fe077e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:48.914057 containerd[1583]: time="2025-09-09T05:42:48.914029488Z" level=info msg="CreateContainer within sandbox \"9fab024f5d2a2a38700ce1564280b996b7fa70cf52e6adcda710a87567d68bda\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"86de8a8e3634e02156b97e132a8adbfa71924f4cbc885cdb434eb29171fe077e\"" Sep 9 05:42:48.914510 containerd[1583]: time="2025-09-09T05:42:48.914471547Z" level=info msg="StartContainer for \"86de8a8e3634e02156b97e132a8adbfa71924f4cbc885cdb434eb29171fe077e\"" Sep 9 05:42:48.915688 containerd[1583]: time="2025-09-09T05:42:48.915659628Z" level=info msg="connecting to shim 86de8a8e3634e02156b97e132a8adbfa71924f4cbc885cdb434eb29171fe077e" address="unix:///run/containerd/s/ada9d31e90268ab1d118f4eb792ebfb77c2b1670e4aa271f12f52426bdd06196" protocol=ttrpc version=3 Sep 9 05:42:48.975911 systemd[1]: Started cri-containerd-86de8a8e3634e02156b97e132a8adbfa71924f4cbc885cdb434eb29171fe077e.scope - libcontainer container 86de8a8e3634e02156b97e132a8adbfa71924f4cbc885cdb434eb29171fe077e. 
Sep 9 05:42:49.028749 containerd[1583]: time="2025-09-09T05:42:49.028693498Z" level=info msg="StartContainer for \"86de8a8e3634e02156b97e132a8adbfa71924f4cbc885cdb434eb29171fe077e\" returns successfully" Sep 9 05:42:49.539392 kubelet[2713]: I0909 05:42:49.539314 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-649ff699f7-l7lvk" podStartSLOduration=33.100812462 podStartE2EDuration="42.539299767s" podCreationTimestamp="2025-09-09 05:42:07 +0000 UTC" firstStartedPulling="2025-09-09 05:42:39.452803833 +0000 UTC m=+47.398449798" lastFinishedPulling="2025-09-09 05:42:48.891291138 +0000 UTC m=+56.836937103" observedRunningTime="2025-09-09 05:42:49.538813043 +0000 UTC m=+57.484458998" watchObservedRunningTime="2025-09-09 05:42:49.539299767 +0000 UTC m=+57.484945732" Sep 9 05:42:50.096769 containerd[1583]: time="2025-09-09T05:42:50.096329942Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:42:50.097878 containerd[1583]: time="2025-09-09T05:42:50.097836109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 05:42:50.100055 containerd[1583]: time="2025-09-09T05:42:50.100005300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 1.208495882s" Sep 9 05:42:50.100055 containerd[1583]: time="2025-09-09T05:42:50.100058941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 05:42:50.101207 containerd[1583]: 
time="2025-09-09T05:42:50.101167801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 05:42:50.107642 containerd[1583]: time="2025-09-09T05:42:50.107599674Z" level=info msg="CreateContainer within sandbox \"941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:42:50.117097 containerd[1583]: time="2025-09-09T05:42:50.117017078Z" level=info msg="Container cd90ccab38d9a11345ba20a5b97d5253c3c56d652e693517ff6d65527db76258: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:42:50.133662 containerd[1583]: time="2025-09-09T05:42:50.133604560Z" level=info msg="CreateContainer within sandbox \"941279f8e09b3cc2361134c80dc2eddad72e57efe004f00626b3712832658344\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cd90ccab38d9a11345ba20a5b97d5253c3c56d652e693517ff6d65527db76258\"" Sep 9 05:42:50.134179 containerd[1583]: time="2025-09-09T05:42:50.134140776Z" level=info msg="StartContainer for \"cd90ccab38d9a11345ba20a5b97d5253c3c56d652e693517ff6d65527db76258\"" Sep 9 05:42:50.135692 containerd[1583]: time="2025-09-09T05:42:50.135656712Z" level=info msg="connecting to shim cd90ccab38d9a11345ba20a5b97d5253c3c56d652e693517ff6d65527db76258" address="unix:///run/containerd/s/a7c8f6cfd146003ebbe879bfa2225a73aec8478ef708528ae538f9d58c00bba8" protocol=ttrpc version=3 Sep 9 05:42:50.168930 systemd[1]: Started cri-containerd-cd90ccab38d9a11345ba20a5b97d5253c3c56d652e693517ff6d65527db76258.scope - libcontainer container cd90ccab38d9a11345ba20a5b97d5253c3c56d652e693517ff6d65527db76258. 
Sep 9 05:42:50.399289 containerd[1583]: time="2025-09-09T05:42:50.399163278Z" level=info msg="StartContainer for \"cd90ccab38d9a11345ba20a5b97d5253c3c56d652e693517ff6d65527db76258\" returns successfully" Sep 9 05:42:50.403521 kubelet[2713]: I0909 05:42:50.403105 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:42:50.875832 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:36038.service - OpenSSH per-connection server daemon (10.0.0.1:36038). Sep 9 05:42:50.942852 sshd[5128]: Accepted publickey for core from 10.0.0.1 port 36038 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:42:50.945132 sshd-session[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:42:50.951275 systemd-logind[1510]: New session 11 of user core. Sep 9 05:42:50.961904 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 05:42:51.096143 sshd[5133]: Connection closed by 10.0.0.1 port 36038 Sep 9 05:42:51.096517 sshd-session[5128]: pam_unix(sshd:session): session closed for user core Sep 9 05:42:51.106350 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:36038.service: Deactivated successfully. Sep 9 05:42:51.108913 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 05:42:51.109950 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. Sep 9 05:42:51.113979 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:36050.service - OpenSSH per-connection server daemon (10.0.0.1:36050). Sep 9 05:42:51.114791 systemd-logind[1510]: Removed session 11. Sep 9 05:42:51.174209 sshd[5147]: Accepted publickey for core from 10.0.0.1 port 36050 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:42:51.175970 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:42:51.180794 systemd-logind[1510]: New session 12 of user core. Sep 9 05:42:51.190013 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 9 05:42:51.344887 sshd[5150]: Connection closed by 10.0.0.1 port 36050 Sep 9 05:42:51.346185 sshd-session[5147]: pam_unix(sshd:session): session closed for user core Sep 9 05:42:51.357911 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:36050.service: Deactivated successfully. Sep 9 05:42:51.360628 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 05:42:51.363838 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. Sep 9 05:42:51.368838 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:36052.service - OpenSSH per-connection server daemon (10.0.0.1:36052). Sep 9 05:42:51.369863 systemd-logind[1510]: Removed session 12. Sep 9 05:42:51.415940 kubelet[2713]: I0909 05:42:51.415104 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-649ff699f7-lr4gx" podStartSLOduration=35.055564293 podStartE2EDuration="44.415087976s" podCreationTimestamp="2025-09-09 05:42:07 +0000 UTC" firstStartedPulling="2025-09-09 05:42:40.741429255 +0000 UTC m=+48.687075220" lastFinishedPulling="2025-09-09 05:42:50.100952938 +0000 UTC m=+58.046598903" observedRunningTime="2025-09-09 05:42:51.414882581 +0000 UTC m=+59.360528536" watchObservedRunningTime="2025-09-09 05:42:51.415087976 +0000 UTC m=+59.360733941" Sep 9 05:42:51.421089 sshd[5162]: Accepted publickey for core from 10.0.0.1 port 36052 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:42:51.422791 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:42:51.427588 systemd-logind[1510]: New session 13 of user core. Sep 9 05:42:51.435934 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 05:42:51.551785 sshd[5167]: Connection closed by 10.0.0.1 port 36052 Sep 9 05:42:51.552137 sshd-session[5162]: pam_unix(sshd:session): session closed for user core Sep 9 05:42:51.558277 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:36052.service: Deactivated successfully. 
Sep 9 05:42:51.560532 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 05:42:51.561357 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit.
Sep 9 05:42:51.562504 systemd-logind[1510]: Removed session 13.
Sep 9 05:42:52.406258 kubelet[2713]: I0909 05:42:52.406209 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 05:42:53.614086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291715857.mount: Deactivated successfully.
Sep 9 05:42:54.685297 containerd[1583]: time="2025-09-09T05:42:54.685232466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:54.686992 containerd[1583]: time="2025-09-09T05:42:54.686964266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 9 05:42:54.691901 containerd[1583]: time="2025-09-09T05:42:54.691817734Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:54.694180 containerd[1583]: time="2025-09-09T05:42:54.694110396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:54.694825 containerd[1583]: time="2025-09-09T05:42:54.694790132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.593587886s"
Sep 9 05:42:54.694875 containerd[1583]: time="2025-09-09T05:42:54.694825338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 9 05:42:54.696026 containerd[1583]: time="2025-09-09T05:42:54.695813442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 9 05:42:54.702461 containerd[1583]: time="2025-09-09T05:42:54.702415782Z" level=info msg="CreateContainer within sandbox \"49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 05:42:54.713342 containerd[1583]: time="2025-09-09T05:42:54.713281333Z" level=info msg="Container 1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:54.741673 containerd[1583]: time="2025-09-09T05:42:54.741594734Z" level=info msg="CreateContainer within sandbox \"49d0cf53efb9bc08bf1132add46079acc8c87d9dab5f6ba0329fce67d9911401\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816\""
Sep 9 05:42:54.742273 containerd[1583]: time="2025-09-09T05:42:54.742234866Z" level=info msg="StartContainer for \"1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816\""
Sep 9 05:42:54.743445 containerd[1583]: time="2025-09-09T05:42:54.743415532Z" level=info msg="connecting to shim 1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816" address="unix:///run/containerd/s/e3ae8ee85afc1ad2074af49af8eb887ec8a5db30cdf4bcc8671caef2ba4cdbe5" protocol=ttrpc version=3
Sep 9 05:42:54.784015 systemd[1]: Started cri-containerd-1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816.scope - libcontainer container 1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816.
Sep 9 05:42:54.841221 containerd[1583]: time="2025-09-09T05:42:54.841154476Z" level=info msg="StartContainer for \"1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816\" returns successfully"
Sep 9 05:42:55.433920 kubelet[2713]: I0909 05:42:55.433837 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-6vtv8" podStartSLOduration=32.688023442 podStartE2EDuration="46.433816629s" podCreationTimestamp="2025-09-09 05:42:09 +0000 UTC" firstStartedPulling="2025-09-09 05:42:40.949856428 +0000 UTC m=+48.895502403" lastFinishedPulling="2025-09-09 05:42:54.695649625 +0000 UTC m=+62.641295590" observedRunningTime="2025-09-09 05:42:55.432095388 +0000 UTC m=+63.377741353" watchObservedRunningTime="2025-09-09 05:42:55.433816629 +0000 UTC m=+63.379462594"
Sep 9 05:42:55.554080 containerd[1583]: time="2025-09-09T05:42:55.554033087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816\" id:\"4e17340d495d0182fdb914a94ce62f671e198ae77ae4e50da76b17ce034378d3\" pid:5245 exit_status:1 exited_at:{seconds:1757396575 nanos:553592922}"
Sep 9 05:42:56.504455 containerd[1583]: time="2025-09-09T05:42:56.504312721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816\" id:\"8b7418d7aa167f8dba613ca3c47abd5bc1807870e88f877541752023328491b6\" pid:5268 exit_status:1 exited_at:{seconds:1757396576 nanos:504065392}"
Sep 9 05:42:56.569598 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:36064.service - OpenSSH per-connection server daemon (10.0.0.1:36064).
Sep 9 05:42:56.644751 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 36064 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:42:56.646760 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:42:56.651428 systemd-logind[1510]: New session 14 of user core.
Sep 9 05:42:56.655877 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 05:42:56.780069 sshd[5285]: Connection closed by 10.0.0.1 port 36064
Sep 9 05:42:56.780545 sshd-session[5281]: pam_unix(sshd:session): session closed for user core
Sep 9 05:42:56.784988 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:36064.service: Deactivated successfully.
Sep 9 05:42:56.787094 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 05:42:56.787935 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit.
Sep 9 05:42:56.789113 systemd-logind[1510]: Removed session 14.
Sep 9 05:42:57.829866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount375382702.mount: Deactivated successfully.
Sep 9 05:42:57.903348 containerd[1583]: time="2025-09-09T05:42:57.903275734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:57.904782 containerd[1583]: time="2025-09-09T05:42:57.904581431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 9 05:42:57.907363 containerd[1583]: time="2025-09-09T05:42:57.907312557Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:57.911509 containerd[1583]: time="2025-09-09T05:42:57.911435946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:57.912244 containerd[1583]: time="2025-09-09T05:42:57.912193793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.216349703s"
Sep 9 05:42:57.912244 containerd[1583]: time="2025-09-09T05:42:57.912229963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 9 05:42:57.913333 containerd[1583]: time="2025-09-09T05:42:57.913285876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 9 05:42:57.918983 containerd[1583]: time="2025-09-09T05:42:57.918928295Z" level=info msg="CreateContainer within sandbox \"bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 9 05:42:57.931307 containerd[1583]: time="2025-09-09T05:42:57.930683123Z" level=info msg="Container 1cd6f32a2873fc1feb89fe3bfc61de8997cd545a26131de435b2f9ea81c83e5a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:57.944843 containerd[1583]: time="2025-09-09T05:42:57.944766386Z" level=info msg="CreateContainer within sandbox \"bd4a1d378ff8544cb9535ff5a8339d834514cc295b7ced89fb52e4f55158953e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1cd6f32a2873fc1feb89fe3bfc61de8997cd545a26131de435b2f9ea81c83e5a\""
Sep 9 05:42:57.945428 containerd[1583]: time="2025-09-09T05:42:57.945360846Z" level=info msg="StartContainer for \"1cd6f32a2873fc1feb89fe3bfc61de8997cd545a26131de435b2f9ea81c83e5a\""
Sep 9 05:42:57.946624 containerd[1583]: time="2025-09-09T05:42:57.946577622Z" level=info msg="connecting to shim 1cd6f32a2873fc1feb89fe3bfc61de8997cd545a26131de435b2f9ea81c83e5a" address="unix:///run/containerd/s/770424b5b469b5d54a5d0f21cbbc1762f435fca8bfedb10dc9ed84affedad6cc" protocol=ttrpc version=3
Sep 9 05:42:57.987243 systemd[1]: Started cri-containerd-1cd6f32a2873fc1feb89fe3bfc61de8997cd545a26131de435b2f9ea81c83e5a.scope - libcontainer container 1cd6f32a2873fc1feb89fe3bfc61de8997cd545a26131de435b2f9ea81c83e5a.
Sep 9 05:42:58.054444 containerd[1583]: time="2025-09-09T05:42:58.054325837Z" level=info msg="StartContainer for \"1cd6f32a2873fc1feb89fe3bfc61de8997cd545a26131de435b2f9ea81c83e5a\" returns successfully"
Sep 9 05:42:59.579044 containerd[1583]: time="2025-09-09T05:42:59.578949633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:59.581042 containerd[1583]: time="2025-09-09T05:42:59.581000245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 9 05:42:59.582860 containerd[1583]: time="2025-09-09T05:42:59.582815802Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:59.586490 containerd[1583]: time="2025-09-09T05:42:59.586372043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:42:59.587380 containerd[1583]: time="2025-09-09T05:42:59.587323211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.674005282s"
Sep 9 05:42:59.587380 containerd[1583]: time="2025-09-09T05:42:59.587377826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 9 05:42:59.595424 containerd[1583]: time="2025-09-09T05:42:59.595362713Z" level=info msg="CreateContainer within sandbox \"bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 05:42:59.608733 containerd[1583]: time="2025-09-09T05:42:59.608644071Z" level=info msg="Container 936d35c8c960a720ce2adc6785a6fec6adc6762b56ebcfcfc4d1962ab48024e1: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:42:59.623475 containerd[1583]: time="2025-09-09T05:42:59.623424045Z" level=info msg="CreateContainer within sandbox \"bb795cc0188fe00b928ca056ba5c448a69333525a09f2a9c4658c3a44e302eff\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"936d35c8c960a720ce2adc6785a6fec6adc6762b56ebcfcfc4d1962ab48024e1\""
Sep 9 05:42:59.624000 containerd[1583]: time="2025-09-09T05:42:59.623959118Z" level=info msg="StartContainer for \"936d35c8c960a720ce2adc6785a6fec6adc6762b56ebcfcfc4d1962ab48024e1\""
Sep 9 05:42:59.625508 containerd[1583]: time="2025-09-09T05:42:59.625410853Z" level=info msg="connecting to shim 936d35c8c960a720ce2adc6785a6fec6adc6762b56ebcfcfc4d1962ab48024e1" address="unix:///run/containerd/s/06c6500fe74ca0b53c2726fa07c82feb43b3cf0af7a7157c45aab90c9600e62a" protocol=ttrpc version=3
Sep 9 05:42:59.654213 systemd[1]: Started cri-containerd-936d35c8c960a720ce2adc6785a6fec6adc6762b56ebcfcfc4d1962ab48024e1.scope - libcontainer container 936d35c8c960a720ce2adc6785a6fec6adc6762b56ebcfcfc4d1962ab48024e1.
Sep 9 05:42:59.855846 containerd[1583]: time="2025-09-09T05:42:59.855623159Z" level=info msg="StartContainer for \"936d35c8c960a720ce2adc6785a6fec6adc6762b56ebcfcfc4d1962ab48024e1\" returns successfully"
Sep 9 05:43:00.237517 kubelet[2713]: I0909 05:43:00.237484 2713 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 05:43:00.239201 kubelet[2713]: I0909 05:43:00.239172 2713 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 05:43:00.447342 kubelet[2713]: I0909 05:43:00.447273 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l2qzh" podStartSLOduration=30.33551396 podStartE2EDuration="51.447254856s" podCreationTimestamp="2025-09-09 05:42:09 +0000 UTC" firstStartedPulling="2025-09-09 05:42:38.476572498 +0000 UTC m=+46.422218453" lastFinishedPulling="2025-09-09 05:42:59.588313384 +0000 UTC m=+67.533959349" observedRunningTime="2025-09-09 05:43:00.445790329 +0000 UTC m=+68.391436314" watchObservedRunningTime="2025-09-09 05:43:00.447254856 +0000 UTC m=+68.392900821"
Sep 9 05:43:00.447900 kubelet[2713]: I0909 05:43:00.447389 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5df97645dd-m42qp" podStartSLOduration=5.63024116 podStartE2EDuration="25.447384256s" podCreationTimestamp="2025-09-09 05:42:35 +0000 UTC" firstStartedPulling="2025-09-09 05:42:38.095966639 +0000 UTC m=+46.041612604" lastFinishedPulling="2025-09-09 05:42:57.913109725 +0000 UTC m=+65.858755700" observedRunningTime="2025-09-09 05:42:58.44025898 +0000 UTC m=+66.385904945" watchObservedRunningTime="2025-09-09 05:43:00.447384256 +0000 UTC m=+68.393030221"
Sep 9 05:43:01.651882 kubelet[2713]: I0909 05:43:01.651768 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 05:43:01.797521 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:52676.service - OpenSSH per-connection server daemon (10.0.0.1:52676).
Sep 9 05:43:01.903622 sshd[5397]: Accepted publickey for core from 10.0.0.1 port 52676 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:43:01.905208 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:01.909730 systemd-logind[1510]: New session 15 of user core.
Sep 9 05:43:01.924925 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 05:43:02.093029 sshd[5400]: Connection closed by 10.0.0.1 port 52676
Sep 9 05:43:02.093463 sshd-session[5397]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:02.098476 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:52676.service: Deactivated successfully.
Sep 9 05:43:02.101306 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 05:43:02.102409 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit.
Sep 9 05:43:02.104403 systemd-logind[1510]: Removed session 15.
Sep 9 05:43:05.378102 containerd[1583]: time="2025-09-09T05:43:05.378051153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed0447416a0f149b85deeab0dc90182f4e5b773c6621c3e3548be69e20beb7eb\" id:\"a365e3f52e4a34d5c830aa094878604c321fb7970fbe7ce489604d40f3036d6d\" pid:5424 exited_at:{seconds:1757396585 nanos:377704677}"
Sep 9 05:43:05.464992 containerd[1583]: time="2025-09-09T05:43:05.464938780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed0447416a0f149b85deeab0dc90182f4e5b773c6621c3e3548be69e20beb7eb\" id:\"28a3ab99a8607e13a925589d00ce3df41d6778d69d1181ee75ca6e0720e471f7\" pid:5449 exited_at:{seconds:1757396585 nanos:464584879}"
Sep 9 05:43:07.109779 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:52690.service - OpenSSH per-connection server daemon (10.0.0.1:52690).
Sep 9 05:43:07.151548 sshd[5462]: Accepted publickey for core from 10.0.0.1 port 52690 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:43:07.153392 sshd-session[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:07.157869 systemd-logind[1510]: New session 16 of user core.
Sep 9 05:43:07.171933 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 05:43:07.299202 sshd[5465]: Connection closed by 10.0.0.1 port 52690
Sep 9 05:43:07.299527 sshd-session[5462]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:07.305011 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:52690.service: Deactivated successfully.
Sep 9 05:43:07.307347 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 05:43:07.308433 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit.
Sep 9 05:43:07.310257 systemd-logind[1510]: Removed session 16.
Sep 9 05:43:11.145260 kubelet[2713]: E0909 05:43:11.145202 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:43:11.654014 containerd[1583]: time="2025-09-09T05:43:11.653950984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816\" id:\"222e6fc7b37d148ba5fc5ca32a3a769c2e63dd60b915f9148fb76b964b98a87c\" pid:5492 exited_at:{seconds:1757396591 nanos:653358859}"
Sep 9 05:43:12.312659 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:50428.service - OpenSSH per-connection server daemon (10.0.0.1:50428).
Sep 9 05:43:12.382137 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 50428 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:43:12.383652 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:12.390486 systemd-logind[1510]: New session 17 of user core.
Sep 9 05:43:12.400847 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 05:43:12.730770 sshd[5507]: Connection closed by 10.0.0.1 port 50428
Sep 9 05:43:12.731182 sshd-session[5504]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:12.741672 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:50428.service: Deactivated successfully.
Sep 9 05:43:12.744871 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 05:43:12.746078 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit.
Sep 9 05:43:12.748239 systemd-logind[1510]: Removed session 17.
Sep 9 05:43:12.750824 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:50432.service - OpenSSH per-connection server daemon (10.0.0.1:50432).
Sep 9 05:43:12.805358 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 50432 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:43:12.808028 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:12.814050 systemd-logind[1510]: New session 18 of user core.
Sep 9 05:43:12.819969 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 05:43:13.286261 sshd[5526]: Connection closed by 10.0.0.1 port 50432
Sep 9 05:43:13.286782 sshd-session[5523]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:13.304097 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:50432.service: Deactivated successfully.
Sep 9 05:43:13.306180 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 05:43:13.306937 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit.
Sep 9 05:43:13.310400 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:50446.service - OpenSSH per-connection server daemon (10.0.0.1:50446).
Sep 9 05:43:13.311629 systemd-logind[1510]: Removed session 18.
Sep 9 05:43:13.376726 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 50446 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:43:13.378452 sshd-session[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:13.383346 systemd-logind[1510]: New session 19 of user core.
Sep 9 05:43:13.394837 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 05:43:14.257746 sshd[5541]: Connection closed by 10.0.0.1 port 50446
Sep 9 05:43:14.259144 sshd-session[5538]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:14.269439 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:50446.service: Deactivated successfully.
Sep 9 05:43:14.271915 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 05:43:14.273443 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit.
Sep 9 05:43:14.276550 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:50450.service - OpenSSH per-connection server daemon (10.0.0.1:50450).
Sep 9 05:43:14.278321 systemd-logind[1510]: Removed session 19.
Sep 9 05:43:14.330784 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 50450 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:43:14.332669 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:14.337662 systemd-logind[1510]: New session 20 of user core.
Sep 9 05:43:14.349982 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 05:43:14.394680 containerd[1583]: time="2025-09-09T05:43:14.394628905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b813b27c39a94520f1091563aafffc8376822553ae4d6bfd1e36525096f4df6\" id:\"09971fb24b5304728c6751886a865a87b6d5d5edf6e02737d3269cb920ed22f6\" pid:5578 exited_at:{seconds:1757396594 nanos:394275409}"
Sep 9 05:43:14.638775 sshd[5572]: Connection closed by 10.0.0.1 port 50450
Sep 9 05:43:14.639962 sshd-session[5562]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:14.651619 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:50450.service: Deactivated successfully.
Sep 9 05:43:14.654417 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 05:43:14.657110 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit.
Sep 9 05:43:14.660566 systemd-logind[1510]: Removed session 20.
Sep 9 05:43:14.665030 systemd[1]: Started sshd@20-10.0.0.118:22-10.0.0.1:50460.service - OpenSSH per-connection server daemon (10.0.0.1:50460).
Sep 9 05:43:14.720877 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 50460 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:43:14.722980 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:14.728820 systemd-logind[1510]: New session 21 of user core.
Sep 9 05:43:14.744174 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 05:43:14.859262 sshd[5601]: Connection closed by 10.0.0.1 port 50460
Sep 9 05:43:14.859700 sshd-session[5598]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:14.865640 systemd[1]: sshd@20-10.0.0.118:22-10.0.0.1:50460.service: Deactivated successfully.
Sep 9 05:43:14.868453 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 05:43:14.869864 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit.
Sep 9 05:43:14.871528 systemd-logind[1510]: Removed session 21.
Sep 9 05:43:18.177815 kubelet[2713]: I0909 05:43:18.177733 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 05:43:19.877012 systemd[1]: Started sshd@21-10.0.0.118:22-10.0.0.1:50466.service - OpenSSH per-connection server daemon (10.0.0.1:50466).
Sep 9 05:43:19.940678 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 50466 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:43:19.942105 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:19.946973 systemd-logind[1510]: New session 22 of user core.
Sep 9 05:43:19.961836 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 05:43:20.396412 sshd[5627]: Connection closed by 10.0.0.1 port 50466
Sep 9 05:43:20.396804 sshd-session[5624]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:20.401545 systemd[1]: sshd@21-10.0.0.118:22-10.0.0.1:50466.service: Deactivated successfully.
Sep 9 05:43:20.403989 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 05:43:20.405085 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit.
Sep 9 05:43:20.406258 systemd-logind[1510]: Removed session 22.
Sep 9 05:43:25.421991 systemd[1]: Started sshd@22-10.0.0.118:22-10.0.0.1:57240.service - OpenSSH per-connection server daemon (10.0.0.1:57240).
Sep 9 05:43:25.485607 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 57240 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:43:25.487826 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:25.492187 systemd-logind[1510]: New session 23 of user core.
Sep 9 05:43:25.505997 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 05:43:25.646289 sshd[5647]: Connection closed by 10.0.0.1 port 57240
Sep 9 05:43:25.646695 sshd-session[5644]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:25.653987 systemd[1]: sshd@22-10.0.0.118:22-10.0.0.1:57240.service: Deactivated successfully.
Sep 9 05:43:25.656055 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 05:43:25.656799 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit.
Sep 9 05:43:25.657960 systemd-logind[1510]: Removed session 23.
Sep 9 05:43:26.145078 kubelet[2713]: E0909 05:43:26.145012 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:43:26.510416 containerd[1583]: time="2025-09-09T05:43:26.510355107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1374c892b9d8156c51a56ffc1ac895eeb9ea6a5a3e7f19d4d36f01e45dbfc816\" id:\"44304af98e49483cbad7a818a11bb7613fa43be1a91936dac98f70fd7ad6687c\" pid:5671 exited_at:{seconds:1757396606 nanos:510012906}"
Sep 9 05:43:27.144949 kubelet[2713]: E0909 05:43:27.144901 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:43:28.832639 containerd[1583]: time="2025-09-09T05:43:28.832587822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b813b27c39a94520f1091563aafffc8376822553ae4d6bfd1e36525096f4df6\" id:\"74ba78512aeb3394b2aea5a58fdb8dab053185df30034831056d825be83f205b\" pid:5698 exited_at:{seconds:1757396608 nanos:832340983}"
Sep 9 05:43:30.144685 kubelet[2713]: E0909 05:43:30.144637 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:43:30.658841 systemd[1]: Started sshd@23-10.0.0.118:22-10.0.0.1:54578.service - OpenSSH per-connection server daemon (10.0.0.1:54578).
Sep 9 05:43:30.712884 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 54578 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:43:30.715047 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:43:30.719855 systemd-logind[1510]: New session 24 of user core.
Sep 9 05:43:30.725852 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 05:43:30.886764 sshd[5712]: Connection closed by 10.0.0.1 port 54578
Sep 9 05:43:30.887130 sshd-session[5709]: pam_unix(sshd:session): session closed for user core
Sep 9 05:43:30.891592 systemd[1]: sshd@23-10.0.0.118:22-10.0.0.1:54578.service: Deactivated successfully.
Sep 9 05:43:30.893953 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 05:43:30.894823 systemd-logind[1510]: Session 24 logged out. Waiting for processes to exit.
Sep 9 05:43:30.896418 systemd-logind[1510]: Removed session 24.