Sep 12 23:05:00.892985 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 20:38:35 -00 2025
Sep 12 23:05:00.893014 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 23:05:00.893038 kernel: BIOS-provided physical RAM map:
Sep 12 23:05:00.893048 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 12 23:05:00.893057 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 12 23:05:00.893066 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 12 23:05:00.893076 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 12 23:05:00.893094 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 12 23:05:00.893116 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 12 23:05:00.893126 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 12 23:05:00.893135 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 12 23:05:00.893147 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 12 23:05:00.893162 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 12 23:05:00.893171 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 12 23:05:00.893183 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 12 23:05:00.893192 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 12 23:05:00.893208 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 23:05:00.893218 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 23:05:00.893228 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 23:05:00.893238 kernel: NX (Execute Disable) protection: active
Sep 12 23:05:00.893247 kernel: APIC: Static calls initialized
Sep 12 23:05:00.893257 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Sep 12 23:05:00.893267 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Sep 12 23:05:00.893277 kernel: extended physical RAM map:
Sep 12 23:05:00.893287 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 12 23:05:00.893297 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 12 23:05:00.893307 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 12 23:05:00.893319 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 12 23:05:00.893329 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Sep 12 23:05:00.893338 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Sep 12 23:05:00.893348 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Sep 12 23:05:00.893358 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Sep 12 23:05:00.893367 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Sep 12 23:05:00.893377 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 12 23:05:00.893387 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 12 23:05:00.893397 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 12 23:05:00.893406 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 12 23:05:00.893416 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 12 23:05:00.893428 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 12 23:05:00.893438 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 12 23:05:00.893460 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 12 23:05:00.893471 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 23:05:00.893481 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 23:05:00.893491 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 23:05:00.893504 kernel: efi: EFI v2.7 by EDK II
Sep 12 23:05:00.893514 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 12 23:05:00.893524 kernel: random: crng init done
Sep 12 23:05:00.893535 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 12 23:05:00.893544 kernel: secureboot: Secure boot enabled
Sep 12 23:05:00.893555 kernel: SMBIOS 2.8 present.
Sep 12 23:05:00.893565 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 12 23:05:00.893575 kernel: DMI: Memory slots populated: 1/1
Sep 12 23:05:00.893617 kernel: Hypervisor detected: KVM
Sep 12 23:05:00.893629 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 23:05:00.893641 kernel: kvm-clock: using sched offset of 5901936630 cycles
Sep 12 23:05:00.893657 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 23:05:00.893668 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 23:05:00.893679 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 23:05:00.893689 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 23:05:00.893699 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 12 23:05:00.893710 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 23:05:00.893721 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 23:05:00.893736 kernel: Using GB pages for direct mapping
Sep 12 23:05:00.893746 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:05:00.893770 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 12 23:05:00.893780 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 23:05:00.893792 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:05:00.893803 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:05:00.893813 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 12 23:05:00.893824 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:05:00.893834 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:05:00.893845 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:05:00.893856 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:05:00.893869 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 23:05:00.893880 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 12 23:05:00.893890 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 12 23:05:00.893901 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 12 23:05:00.893912 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 12 23:05:00.893923 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 12 23:05:00.893933 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 12 23:05:00.893944 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 12 23:05:00.893954 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 12 23:05:00.893968 kernel: No NUMA configuration found
Sep 12 23:05:00.893978 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 12 23:05:00.893989 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 12 23:05:00.894000 kernel: Zone ranges:
Sep 12 23:05:00.894010 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 23:05:00.894021 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 12 23:05:00.894031 kernel: Normal empty
Sep 12 23:05:00.894042 kernel: Device empty
Sep 12 23:05:00.894053 kernel: Movable zone start for each node
Sep 12 23:05:00.894066 kernel: Early memory node ranges
Sep 12 23:05:00.894077 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 12 23:05:00.894088 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 12 23:05:00.894098 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 12 23:05:00.894109 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 12 23:05:00.894119 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 12 23:05:00.894130 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 12 23:05:00.894140 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 23:05:00.894151 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 12 23:05:00.894164 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 23:05:00.894175 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 23:05:00.894185 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 12 23:05:00.894196 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 12 23:05:00.894207 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 23:05:00.894217 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 23:05:00.894228 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 23:05:00.894248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 23:05:00.894266 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 23:05:00.894298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 23:05:00.894313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 23:05:00.894324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 23:05:00.894334 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 23:05:00.894345 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 23:05:00.894356 kernel: TSC deadline timer available
Sep 12 23:05:00.894378 kernel: CPU topo: Max. logical packages: 1
Sep 12 23:05:00.894390 kernel: CPU topo: Max. logical dies: 1
Sep 12 23:05:00.894400 kernel: CPU topo: Max. dies per package: 1
Sep 12 23:05:00.894434 kernel: CPU topo: Max. threads per core: 1
Sep 12 23:05:00.894453 kernel: CPU topo: Num. cores per package: 4
Sep 12 23:05:00.894464 kernel: CPU topo: Num. threads per package: 4
Sep 12 23:05:00.894477 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 23:05:00.894509 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 23:05:00.894521 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 23:05:00.894532 kernel: kvm-guest: setup PV sched yield
Sep 12 23:05:00.894544 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 12 23:05:00.894560 kernel: Booting paravirtualized kernel on KVM
Sep 12 23:05:00.894571 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 23:05:00.894604 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 23:05:00.894617 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 23:05:00.894629 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 23:05:00.894639 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 23:05:00.894650 kernel: kvm-guest: PV spinlocks enabled
Sep 12 23:05:00.894661 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 23:05:00.894674 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 23:05:00.894689 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:05:00.894700 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:05:00.894711 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:05:00.894722 kernel: Fallback order for Node 0: 0
Sep 12 23:05:00.894733 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 12 23:05:00.894744 kernel: Policy zone: DMA32
Sep 12 23:05:00.894755 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:05:00.894775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 23:05:00.894788 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 23:05:00.894800 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 23:05:00.894820 kernel: Dynamic Preempt: voluntary
Sep 12 23:05:00.894840 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:05:00.894853 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:05:00.894865 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 23:05:00.894876 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:05:00.894887 kernel: Rude variant of Tasks RCU enabled.
Sep 12 23:05:00.894898 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:05:00.894908 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:05:00.894923 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 23:05:00.894934 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:05:00.894945 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:05:00.894956 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:05:00.894967 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 23:05:00.894978 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:05:00.894989 kernel: Console: colour dummy device 80x25
Sep 12 23:05:00.895000 kernel: printk: legacy console [ttyS0] enabled
Sep 12 23:05:00.895010 kernel: ACPI: Core revision 20240827
Sep 12 23:05:00.895024 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 23:05:00.895035 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 23:05:00.895046 kernel: x2apic enabled
Sep 12 23:05:00.895058 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 23:05:00.895069 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 23:05:00.895080 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 23:05:00.895091 kernel: kvm-guest: setup PV IPIs
Sep 12 23:05:00.895102 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 23:05:00.895114 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 23:05:00.895127 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 23:05:00.895139 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 23:05:00.895150 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 23:05:00.895161 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 23:05:00.895183 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 23:05:00.895195 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 23:05:00.895214 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 23:05:00.895226 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 23:05:00.895240 kernel: active return thunk: retbleed_return_thunk
Sep 12 23:05:00.895251 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 23:05:00.895262 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 23:05:00.895273 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 23:05:00.895284 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 23:05:00.895297 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 23:05:00.895308 kernel: active return thunk: srso_return_thunk
Sep 12 23:05:00.895319 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 23:05:00.895331 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 23:05:00.895344 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 23:05:00.895356 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 23:05:00.895367 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 23:05:00.895378 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 23:05:00.895389 kernel: Freeing SMP alternatives memory: 32K
Sep 12 23:05:00.895400 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:05:00.895414 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 23:05:00.895425 kernel: landlock: Up and running.
Sep 12 23:05:00.895436 kernel: SELinux: Initializing.
Sep 12 23:05:00.895450 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:05:00.895461 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:05:00.895472 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 23:05:00.895483 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 23:05:00.895494 kernel: ... version: 0
Sep 12 23:05:00.895505 kernel: ... bit width: 48
Sep 12 23:05:00.895517 kernel: ... generic registers: 6
Sep 12 23:05:00.895528 kernel: ... value mask: 0000ffffffffffff
Sep 12 23:05:00.895538 kernel: ... max period: 00007fffffffffff
Sep 12 23:05:00.895551 kernel: ... fixed-purpose events: 0
Sep 12 23:05:00.895562 kernel: ... event mask: 000000000000003f
Sep 12 23:05:00.895573 kernel: signal: max sigframe size: 1776
Sep 12 23:05:00.895598 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:05:00.895612 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:05:00.895624 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 23:05:00.895638 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:05:00.895658 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 23:05:00.895674 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 23:05:00.895689 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 23:05:00.895700 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 23:05:00.895712 kernel: Memory: 2409224K/2552216K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54084K init, 2880K bss, 137064K reserved, 0K cma-reserved)
Sep 12 23:05:00.895723 kernel: devtmpfs: initialized
Sep 12 23:05:00.895735 kernel: x86/mm: Memory block size: 128MB
Sep 12 23:05:00.895746 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 12 23:05:00.895767 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 12 23:05:00.895778 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:05:00.895789 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 23:05:00.895803 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:05:00.895814 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:05:00.895825 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:05:00.895836 kernel: audit: type=2000 audit(1757718297.954:1): state=initialized audit_enabled=0 res=1
Sep 12 23:05:00.895848 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:05:00.895859 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 23:05:00.895870 kernel: cpuidle: using governor menu
Sep 12 23:05:00.895881 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:05:00.895892 kernel: dca service started, version 1.12.1
Sep 12 23:05:00.895906 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 12 23:05:00.895917 kernel: PCI: Using configuration type 1 for base access
Sep 12 23:05:00.895928 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 23:05:00.895939 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:05:00.895950 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:05:00.895961 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:05:00.895972 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:05:00.895983 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:05:00.895997 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:05:00.896008 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:05:00.896019 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:05:00.896030 kernel: ACPI: Interpreter enabled
Sep 12 23:05:00.896040 kernel: ACPI: PM: (supports S0 S5)
Sep 12 23:05:00.896051 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 23:05:00.896063 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 23:05:00.896074 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 23:05:00.896085 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 23:05:00.896096 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 23:05:00.896495 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:05:00.896794 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 23:05:00.896981 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 23:05:00.896997 kernel: PCI host bridge to bus 0000:00
Sep 12 23:05:00.897152 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 23:05:00.897298 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 23:05:00.897450 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 23:05:00.897673 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 12 23:05:00.897828 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 12 23:05:00.897983 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 12 23:05:00.898120 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 23:05:00.898294 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 23:05:00.898462 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 23:05:00.898646 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 12 23:05:00.898813 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 12 23:05:00.898961 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 23:05:00.899108 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 23:05:00.899268 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 23:05:00.899417 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 12 23:05:00.899573 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 12 23:05:00.899769 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 12 23:05:00.899931 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 23:05:00.900081 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 12 23:05:00.900230 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 12 23:05:00.900377 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 12 23:05:00.900538 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 23:05:00.900739 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 12 23:05:00.900902 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 12 23:05:00.901054 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 12 23:05:00.901202 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 12 23:05:00.901360 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 23:05:00.901508 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 23:05:00.901693 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 23:05:00.901860 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 12 23:05:00.902008 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 12 23:05:00.902166 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 23:05:00.902315 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 12 23:05:00.902330 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 23:05:00.902342 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 23:05:00.902354 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 23:05:00.902369 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 23:05:00.902380 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 23:05:00.902391 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 23:05:00.902402 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 23:05:00.902413 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 23:05:00.902425 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 23:05:00.902436 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 23:05:00.902447 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 23:05:00.902458 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 23:05:00.902471 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 23:05:00.902483 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 23:05:00.902494 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 23:05:00.902505 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 23:05:00.902517 kernel: iommu: Default domain type: Translated
Sep 12 23:05:00.902528 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 23:05:00.902539 kernel: efivars: Registered efivars operations
Sep 12 23:05:00.902550 kernel: PCI: Using ACPI for IRQ routing
Sep 12 23:05:00.902561 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 23:05:00.902575 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 12 23:05:00.902602 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Sep 12 23:05:00.902613 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Sep 12 23:05:00.902633 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 12 23:05:00.902646 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 12 23:05:00.902827 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 23:05:00.902980 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 23:05:00.903128 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 23:05:00.903148 kernel: vgaarb: loaded
Sep 12 23:05:00.903160 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 23:05:00.903172 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 23:05:00.903183 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 23:05:00.903194 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:05:00.903206 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:05:00.903217 kernel: pnp: PnP ACPI init
Sep 12 23:05:00.903391 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 12 23:05:00.903410 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 23:05:00.903437 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 23:05:00.903449 kernel: NET: Registered PF_INET protocol family
Sep 12 23:05:00.903461 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:05:00.903475 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:05:00.903486 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:05:00.903497 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:05:00.903508 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:05:00.903519 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:05:00.903534 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:05:00.903546 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:05:00.903558 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:05:00.903569 kernel: NET: Registered PF_XDP protocol family
Sep 12 23:05:00.903776 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 12 23:05:00.903940 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 12 23:05:00.904091 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 23:05:00.904275 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 23:05:00.904418 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 23:05:00.904564 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 12 23:05:00.904733 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 12 23:05:00.904884 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 12 23:05:00.904900 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:05:00.904911 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 23:05:00.904923 kernel: Initialise system trusted keyrings
Sep 12 23:05:00.904934 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:05:00.904945 kernel: Key type asymmetric registered
Sep 12 23:05:00.904961 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:05:00.904991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 23:05:00.905006 kernel: io scheduler mq-deadline registered
Sep 12 23:05:00.905017 kernel: io scheduler kyber registered
Sep 12 23:05:00.905029 kernel: io scheduler bfq registered
Sep 12 23:05:00.905040 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 23:05:00.905052 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 23:05:00.905064 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 23:05:00.905076 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 23:05:00.905090 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:05:00.905102 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 23:05:00.905118 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 23:05:00.905129 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 23:05:00.905141 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 23:05:00.905153 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 23:05:00.905337 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 23:05:00.905488 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 23:05:00.905684 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T23:05:00 UTC (1757718300)
Sep 12 23:05:00.905867 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 12 23:05:00.905887 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 23:05:00.905899 kernel: efifb: probing for efifb
Sep 12 23:05:00.905911 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 12 23:05:00.905923 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 12 23:05:00.905948 kernel: efifb: scrolling: redraw
Sep 12 23:05:00.905960 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 23:05:00.905972 kernel: Console: switching to colour frame buffer device 160x50
Sep 12 23:05:00.905988 kernel: fb0: EFI VGA frame buffer device
Sep 12 23:05:00.906003 kernel: pstore: Using crash dump compression: deflate
Sep 12 23:05:00.906014 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 23:05:00.906026 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:05:00.906042 kernel: Segment Routing with IPv6
Sep 12 23:05:00.906058 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:05:00.906072 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:05:00.906087 kernel: Key type dns_resolver registered
Sep 12 23:05:00.906099 kernel: IPI shorthand broadcast: enabled
Sep 12 23:05:00.906111 kernel: sched_clock: Marking stable (3787005488, 143189950)->(4013908261, -83712823)
Sep 12 23:05:00.906122 kernel: registered taskstats version 1
Sep 12 23:05:00.906134 kernel: Loading compiled-in X.509 certificates
Sep 12 23:05:00.906146 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: c3297a5801573420030c321362a802da1fd49c4e'
Sep 12 23:05:00.906157 kernel: Demotion targets for Node 0: null
Sep 12 23:05:00.906169 kernel: Key type .fscrypt registered
Sep 12 23:05:00.906183 kernel: Key type fscrypt-provisioning registered
Sep 12 23:05:00.906197 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:05:00.906209 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:05:00.906220 kernel: ima: No architecture policies found
Sep 12 23:05:00.906231 kernel: clk: Disabling unused clocks
Sep 12 23:05:00.906243 kernel: Warning: unable to open an initial console.
Sep 12 23:05:00.906255 kernel: Freeing unused kernel image (initmem) memory: 54084K
Sep 12 23:05:00.906266 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 23:05:00.906280 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 12 23:05:00.906292 kernel: Run /init as init process
Sep 12 23:05:00.906303 kernel: with arguments:
Sep 12 23:05:00.906314 kernel: /init
Sep 12 23:05:00.906326 kernel: with environment:
Sep 12 23:05:00.906337 kernel: HOME=/
Sep 12 23:05:00.906348 kernel: TERM=linux
Sep 12 23:05:00.906363 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:05:00.906376 systemd[1]: Successfully made /usr/ read-only.
Sep 12 23:05:00.906402 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 23:05:00.906419 systemd[1]: Detected virtualization kvm.
Sep 12 23:05:00.906431 systemd[1]: Detected architecture x86-64.
Sep 12 23:05:00.906443 systemd[1]: Running in initrd.
Sep 12 23:05:00.906455 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:05:00.906468 systemd[1]: Hostname set to .
Sep 12 23:05:00.906480 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:05:00.906495 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:05:00.906507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:05:00.906521 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:05:00.906534 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:05:00.906546 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:05:00.906559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:05:00.906572 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:05:00.906607 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:05:00.906631 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:05:00.906644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:05:00.906657 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:05:00.906670 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:05:00.906682 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:05:00.906695 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:05:00.906707 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:05:00.906724 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:05:00.906736 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:05:00.906749 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:05:00.906773 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 23:05:00.906790 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:05:00.906803 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:05:00.906816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:05:00.906828 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:05:00.906841 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:05:00.906860 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:05:00.906872 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:05:00.906885 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 23:05:00.906898 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:05:00.906910 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:05:00.906923 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:05:00.906935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:05:00.906948 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:05:00.906963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:05:00.906976 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:05:00.907025 systemd-journald[219]: Collecting audit messages is disabled.
Sep 12 23:05:00.907063 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:05:00.907079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:05:00.907092 systemd-journald[219]: Journal started
Sep 12 23:05:00.907121 systemd-journald[219]: Runtime Journal (/run/log/journal/593ea20dd9e14a34acf1e03fe9822051) is 6M, max 48.2M, 42.2M free.
Sep 12 23:05:00.894883 systemd-modules-load[221]: Inserted module 'overlay'
Sep 12 23:05:00.909980 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:05:00.915449 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:05:00.926708 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:05:00.928429 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 12 23:05:00.929501 kernel: Bridge firewalling registered
Sep 12 23:05:00.929944 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:05:00.930369 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:05:00.931158 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:05:00.934572 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:05:00.936454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:05:00.948127 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 23:05:00.948876 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:05:00.950942 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:05:00.958859 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:05:00.960447 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:05:00.964475 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:05:00.966689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:05:00.997847 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 23:05:01.027222 systemd-resolved[262]: Positive Trust Anchors:
Sep 12 23:05:01.027243 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:05:01.027284 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:05:01.030481 systemd-resolved[262]: Defaulting to hostname 'linux'.
Sep 12 23:05:01.031745 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:05:01.054552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:05:01.146646 kernel: SCSI subsystem initialized
Sep 12 23:05:01.157625 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:05:01.171627 kernel: iscsi: registered transport (tcp)
Sep 12 23:05:01.193618 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:05:01.193659 kernel: QLogic iSCSI HBA Driver
Sep 12 23:05:01.219264 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:05:01.244319 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:05:01.245861 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:05:01.308892 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:05:01.311683 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:05:01.378649 kernel: raid6: avx2x4 gen() 22678 MB/s
Sep 12 23:05:01.395645 kernel: raid6: avx2x2 gen() 28372 MB/s
Sep 12 23:05:01.412769 kernel: raid6: avx2x1 gen() 24889 MB/s
Sep 12 23:05:01.412839 kernel: raid6: using algorithm avx2x2 gen() 28372 MB/s
Sep 12 23:05:01.430749 kernel: raid6: .... xor() 16803 MB/s, rmw enabled
Sep 12 23:05:01.430823 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 23:05:01.452645 kernel: xor: automatically using best checksumming function avx
Sep 12 23:05:01.632649 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:05:01.642539 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:05:01.645966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:05:01.687953 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 12 23:05:01.693731 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:05:01.697709 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:05:01.723986 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Sep 12 23:05:01.757711 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:05:01.761473 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:05:01.852109 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:05:01.855828 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:05:01.895641 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 23:05:01.899640 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 23:05:01.905715 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 23:05:01.905816 kernel: GPT:9289727 != 19775487
Sep 12 23:05:01.905834 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 23:05:01.905849 kernel: GPT:9289727 != 19775487
Sep 12 23:05:01.905862 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:05:01.905875 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:05:01.918614 kernel: libata version 3.00 loaded.
Sep 12 23:05:01.923613 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 12 23:05:01.925627 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 23:05:01.926613 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 23:05:01.927613 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 23:05:01.932755 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 12 23:05:01.932989 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 12 23:05:01.933162 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 23:05:01.938361 kernel: scsi host0: ahci
Sep 12 23:05:01.938633 kernel: scsi host1: ahci
Sep 12 23:05:01.938826 kernel: scsi host2: ahci
Sep 12 23:05:01.939411 kernel: scsi host3: ahci
Sep 12 23:05:01.942131 kernel: scsi host4: ahci
Sep 12 23:05:01.942315 kernel: AES CTR mode by8 optimization enabled
Sep 12 23:05:01.941917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:05:01.942075 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:05:01.944572 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:05:01.956070 kernel: scsi host5: ahci
Sep 12 23:05:01.956325 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Sep 12 23:05:01.956343 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Sep 12 23:05:01.956358 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Sep 12 23:05:01.956371 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Sep 12 23:05:01.956384 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Sep 12 23:05:01.956397 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Sep 12 23:05:01.956797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:05:01.958498 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 23:05:01.975824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:05:01.975972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:05:02.000275 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 23:05:02.022922 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 23:05:02.032540 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 23:05:02.040415 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 23:05:02.041686 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 23:05:02.045088 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:05:02.048708 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:05:02.071049 disk-uuid[620]: Primary Header is updated.
Sep 12 23:05:02.071049 disk-uuid[620]: Secondary Entries is updated.
Sep 12 23:05:02.071049 disk-uuid[620]: Secondary Header is updated.
Sep 12 23:05:02.075606 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:05:02.079386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:05:02.260639 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 23:05:02.260730 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 23:05:02.261619 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 23:05:02.263259 kernel: ata3.00: LPM support broken, forcing max_power
Sep 12 23:05:02.263353 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 23:05:02.263370 kernel: ata3.00: applying bridge limits
Sep 12 23:05:02.265105 kernel: ata3.00: LPM support broken, forcing max_power
Sep 12 23:05:02.265138 kernel: ata3.00: configured for UDMA/100
Sep 12 23:05:02.265619 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 23:05:02.267622 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 23:05:02.271619 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 23:05:02.271644 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 23:05:02.318627 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 23:05:02.319019 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 23:05:02.339984 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 23:05:02.658064 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:05:02.660389 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:05:02.661769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:05:02.664336 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:05:02.667779 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:05:02.693287 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:05:03.083654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:05:03.084314 disk-uuid[623]: The operation has completed successfully.
Sep 12 23:05:03.112264 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:05:03.112429 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:05:03.158157 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:05:03.186350 sh[665]: Success
Sep 12 23:05:03.208992 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:05:03.209069 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:05:03.210208 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 23:05:03.221623 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 12 23:05:03.256105 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:05:03.258576 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:05:03.280964 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 23:05:03.288541 kernel: BTRFS: device fsid 5d2ab445-1154-4e47-9d7e-ff4b81d84474 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (677)
Sep 12 23:05:03.288607 kernel: BTRFS info (device dm-0): first mount of filesystem 5d2ab445-1154-4e47-9d7e-ff4b81d84474
Sep 12 23:05:03.288619 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 23:05:03.295109 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:05:03.295171 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 23:05:03.296670 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:05:03.298383 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 23:05:03.300416 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:05:03.301622 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:05:03.317139 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:05:03.337630 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (700)
Sep 12 23:05:03.340768 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 12 23:05:03.340835 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 23:05:03.344626 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:05:03.344653 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:05:03.350705 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 12 23:05:03.352459 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:05:03.356478 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:05:03.495316 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:05:03.553800 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:05:03.579082 ignition[741]: Ignition 2.22.0
Sep 12 23:05:03.579478 ignition[741]: Stage: fetch-offline
Sep 12 23:05:03.579525 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:05:03.579534 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:05:03.579634 ignition[741]: parsed url from cmdline: ""
Sep 12 23:05:03.579637 ignition[741]: no config URL provided
Sep 12 23:05:03.579642 ignition[741]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:05:03.579650 ignition[741]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:05:03.579673 ignition[741]: op(1): [started] loading QEMU firmware config module
Sep 12 23:05:03.579678 ignition[741]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 23:05:03.585751 ignition[741]: op(1): [finished] loading QEMU firmware config module
Sep 12 23:05:03.607547 systemd-networkd[851]: lo: Link UP
Sep 12 23:05:03.607560 systemd-networkd[851]: lo: Gained carrier
Sep 12 23:05:03.609792 systemd-networkd[851]: Enumeration completed
Sep 12 23:05:03.610243 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:05:03.610249 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:05:03.610498 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:05:03.611786 systemd-networkd[851]: eth0: Link UP
Sep 12 23:05:03.611970 systemd-networkd[851]: eth0: Gained carrier
Sep 12 23:05:03.611981 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:05:03.612785 systemd[1]: Reached target network.target - Network.
Sep 12 23:05:03.634691 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 23:05:03.639469 ignition[741]: parsing config with SHA512: 37b962520f01324c2568f9691422826d8e1fb4caef0929dc5c82db3e1bc0bbdd7afd54b6668f48e1c91296c01e0b31d6d1316f99a59a26e2e7d4a640009748ec
Sep 12 23:05:03.647128 unknown[741]: fetched base config from "system"
Sep 12 23:05:03.647143 unknown[741]: fetched user config from "qemu"
Sep 12 23:05:03.647991 ignition[741]: fetch-offline: fetch-offline passed
Sep 12 23:05:03.648275 ignition[741]: Ignition finished successfully
Sep 12 23:05:03.652297 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:05:03.654885 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 23:05:03.655945 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 23:05:03.723946 ignition[859]: Ignition 2.22.0
Sep 12 23:05:03.723962 ignition[859]: Stage: kargs
Sep 12 23:05:03.724119 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:05:03.724130 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:05:03.732068 ignition[859]: kargs: kargs passed
Sep 12 23:05:03.732164 ignition[859]: Ignition finished successfully
Sep 12 23:05:03.738663 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:05:03.740140 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 23:05:03.827235 ignition[868]: Ignition 2.22.0
Sep 12 23:05:03.827248 ignition[868]: Stage: disks
Sep 12 23:05:03.827397 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:05:03.827407 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:05:03.828266 ignition[868]: disks: disks passed
Sep 12 23:05:03.828322 ignition[868]: Ignition finished successfully
Sep 12 23:05:03.834537 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:05:03.834896 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:05:03.837638 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:05:03.837882 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:05:03.838232 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:05:03.838580 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:05:03.848555 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:05:03.881391 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 23:05:03.892645 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:05:03.899313 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:05:04.081759 kernel: EXT4-fs (vda9): mounted filesystem d027afc5-396a-49bf-a5be-60ddd42cb089 r/w with ordered data mode. Quota mode: none.
Sep 12 23:05:04.082425 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:05:04.083125 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:05:04.084882 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:05:04.088209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:05:04.090460 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 23:05:04.090523 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:05:04.090616 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:05:04.104512 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:05:04.107762 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:05:04.110814 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886)
Sep 12 23:05:04.113105 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 12 23:05:04.113134 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 23:05:04.117231 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:05:04.117264 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:05:04.120035 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:05:04.191343 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:05:04.199494 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:05:04.204558 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:05:04.210976 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:05:04.326672 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:05:04.329068 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:05:04.330164 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:05:04.359463 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:05:04.360866 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 12 23:05:04.377923 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:05:04.410186 ignition[999]: INFO : Ignition 2.22.0
Sep 12 23:05:04.410186 ignition[999]: INFO : Stage: mount
Sep 12 23:05:04.412105 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:05:04.412105 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:05:04.412105 ignition[999]: INFO : mount: mount passed
Sep 12 23:05:04.412105 ignition[999]: INFO : Ignition finished successfully
Sep 12 23:05:04.419125 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:05:04.421569 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:05:04.465868 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:05:04.485960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012)
Sep 12 23:05:04.488195 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827
Sep 12 23:05:04.488298 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 23:05:04.491620 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:05:04.491673 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:05:04.494022 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:05:04.543272 ignition[1029]: INFO : Ignition 2.22.0
Sep 12 23:05:04.543272 ignition[1029]: INFO : Stage: files
Sep 12 23:05:04.545636 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:05:04.545636 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:05:04.545636 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:05:04.545636 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 23:05:04.545636 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:05:04.553561 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:05:04.553561 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 23:05:04.553561 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:05:04.553561 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 23:05:04.553561 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 23:05:04.548391 unknown[1029]: wrote ssh authorized keys file for user: core
Sep 12 23:05:04.932882 systemd-networkd[851]: eth0: Gained IPv6LL
Sep 12 23:05:05.680402 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 23:05:06.026513 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 23:05:06.026513 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 23:05:06.031041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:05:06.031041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:05:06.031041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:05:06.031041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:05:06.031041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:05:06.031041 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:05:06.044913 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:05:06.050605 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:05:06.052841 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:05:06.052841 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 23:05:06.061881 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 23:05:06.061881 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 23:05:06.067712 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 23:05:06.375625 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 23:05:07.264405 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 23:05:07.264405 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 23:05:07.271293 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:05:07.347642 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:05:07.347642 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 23:05:07.347642 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 23:05:07.353835 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:05:07.353835 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:05:07.353835 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 23:05:07.353835 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:05:07.376139 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:05:07.381285 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:05:07.383295
ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 23:05:07.383295 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 12 23:05:07.383295 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 23:05:07.383295 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:05:07.383295 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:05:07.383295 ignition[1029]: INFO : files: files passed Sep 12 23:05:07.383295 ignition[1029]: INFO : Ignition finished successfully Sep 12 23:05:07.391137 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 23:05:07.396221 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 23:05:07.399060 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 23:05:07.418014 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 23:05:07.418187 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 23:05:07.421912 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 23:05:07.423561 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:05:07.423561 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:05:07.429027 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:05:07.431091 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Sep 12 23:05:07.433185 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 23:05:07.436141 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 23:05:07.493825 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 23:05:07.493975 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 23:05:07.498074 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 23:05:07.499154 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 23:05:07.501296 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 23:05:07.502503 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 23:05:07.535312 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:05:07.538931 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 23:05:07.620954 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:05:07.622635 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:05:07.623174 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 23:05:07.623576 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 23:05:07.623768 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:05:07.629728 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 23:05:07.630916 systemd[1]: Stopped target basic.target - Basic System. Sep 12 23:05:07.631252 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 23:05:07.631542 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:05:07.632060 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Sep 12 23:05:07.632368 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 23:05:07.632865 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 23:05:07.633187 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:05:07.633580 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 23:05:07.634064 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 23:05:07.634361 systemd[1]: Stopped target swap.target - Swaps. Sep 12 23:05:07.634862 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 23:05:07.635030 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:05:07.657357 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:05:07.658772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:05:07.659971 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 23:05:07.660131 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:05:07.662350 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 23:05:07.662537 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 23:05:07.666894 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 23:05:07.667067 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:05:07.668021 systemd[1]: Stopped target paths.target - Path Units. Sep 12 23:05:07.668278 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 23:05:07.668642 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:05:07.672056 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 23:05:07.674445 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 12 23:05:07.674972 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 23:05:07.675109 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:05:07.679054 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 23:05:07.679168 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:05:07.681739 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 23:05:07.681894 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:05:07.682760 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 23:05:07.682899 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 23:05:07.686748 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 23:05:07.689343 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 23:05:07.689509 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:05:07.691634 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 23:05:07.693925 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 23:05:07.694075 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:05:07.695522 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 23:05:07.695690 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:05:07.702661 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 23:05:07.713861 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 23:05:07.740947 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 12 23:05:07.828139 ignition[1085]: INFO : Ignition 2.22.0 Sep 12 23:05:07.828139 ignition[1085]: INFO : Stage: umount Sep 12 23:05:07.830185 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:05:07.830185 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:05:07.830185 ignition[1085]: INFO : umount: umount passed Sep 12 23:05:07.830185 ignition[1085]: INFO : Ignition finished successfully Sep 12 23:05:07.833191 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 23:05:07.833356 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 23:05:07.835024 systemd[1]: Stopped target network.target - Network. Sep 12 23:05:07.836830 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 23:05:07.836900 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 23:05:07.837032 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 23:05:07.837084 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 23:05:07.837379 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 23:05:07.837436 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 23:05:07.837947 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 23:05:07.837998 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 23:05:07.838398 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 23:05:07.838958 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 23:05:07.845206 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 23:05:07.845376 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 23:05:07.850899 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 23:05:07.851365 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 12 23:05:07.851431 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:05:07.855184 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 23:05:07.857273 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 23:05:07.857437 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 23:05:07.862477 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 23:05:07.862894 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 23:05:07.865221 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 23:05:07.865267 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:05:07.869204 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 23:05:07.870495 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 23:05:07.870634 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:05:07.873575 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:05:07.873644 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:05:07.876382 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 23:05:07.876435 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 23:05:07.878676 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:05:07.890304 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 23:05:07.898066 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 23:05:07.898273 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:05:07.901547 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Sep 12 23:05:07.901668 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 23:05:07.903755 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 23:05:07.903796 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:05:07.905725 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 23:05:07.905779 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:05:07.907121 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 23:05:07.907180 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 23:05:07.907915 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:05:07.907974 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:05:07.916872 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 23:05:07.918020 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 23:05:07.918077 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:05:07.921695 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 23:05:07.921755 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:05:07.927104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:05:07.927174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:05:07.931522 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 23:05:07.931718 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 23:05:07.932974 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 23:05:07.933070 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Sep 12 23:05:07.977703 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 23:05:07.977877 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 23:05:07.979223 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 23:05:07.981898 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 23:05:07.981967 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 23:05:07.985278 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 23:05:08.007645 systemd[1]: Switching root. Sep 12 23:05:08.042515 systemd-journald[219]: Journal stopped Sep 12 23:05:09.436718 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Sep 12 23:05:09.436785 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 23:05:09.436803 kernel: SELinux: policy capability open_perms=1 Sep 12 23:05:09.436820 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 23:05:09.436831 kernel: SELinux: policy capability always_check_network=0 Sep 12 23:05:09.436850 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 23:05:09.436862 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 23:05:09.436873 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 23:05:09.436884 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 23:05:09.436896 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 23:05:09.436912 kernel: audit: type=1403 audit(1757718308.507:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 23:05:09.436924 systemd[1]: Successfully loaded SELinux policy in 68.893ms. Sep 12 23:05:09.436944 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.206ms. 
Sep 12 23:05:09.436961 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 23:05:09.437018 systemd[1]: Detected virtualization kvm. Sep 12 23:05:09.437030 systemd[1]: Detected architecture x86-64. Sep 12 23:05:09.437042 systemd[1]: Detected first boot. Sep 12 23:05:09.437054 systemd[1]: Initializing machine ID from VM UUID. Sep 12 23:05:09.437067 zram_generator::config[1133]: No configuration found. Sep 12 23:05:09.437085 kernel: Guest personality initialized and is inactive Sep 12 23:05:09.437100 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 23:05:09.437116 kernel: Initialized host personality Sep 12 23:05:09.437134 kernel: NET: Registered PF_VSOCK protocol family Sep 12 23:05:09.437149 systemd[1]: Populated /etc with preset unit settings. Sep 12 23:05:09.437166 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 23:05:09.437182 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 23:05:09.437204 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 23:05:09.437220 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 23:05:09.437236 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 23:05:09.437252 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 23:05:09.437268 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 23:05:09.437287 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 23:05:09.437304 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Sep 12 23:05:09.437319 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 23:05:09.437335 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 23:05:09.437363 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 23:05:09.437378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:05:09.437395 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:05:09.437412 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 23:05:09.437428 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 23:05:09.437448 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 23:05:09.437465 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:05:09.437481 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 23:05:09.437497 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:05:09.437522 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:05:09.437538 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 23:05:09.437554 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 23:05:09.437573 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 23:05:09.437619 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 23:05:09.437636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:05:09.437652 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Sep 12 23:05:09.437669 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:05:09.437698 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:05:09.437742 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 23:05:09.437770 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 23:05:09.437790 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 23:05:09.437822 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:05:09.437838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:05:09.437854 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:05:09.437870 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 23:05:09.437886 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 23:05:09.437902 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 23:05:09.437918 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 23:05:09.437935 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:05:09.437951 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 23:05:09.437970 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 23:05:09.437986 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 23:05:09.438003 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 23:05:09.438020 systemd[1]: Reached target machines.target - Containers. Sep 12 23:05:09.438036 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 12 23:05:09.438052 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:05:09.438068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:05:09.438084 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 23:05:09.438100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:05:09.438119 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:05:09.438136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:05:09.438152 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 23:05:09.438180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:05:09.438204 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 23:05:09.438221 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 23:05:09.438237 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 23:05:09.438253 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 23:05:09.438272 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 23:05:09.438290 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 23:05:09.438306 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:05:09.438322 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:05:09.438338 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 12 23:05:09.438353 kernel: loop: module loaded Sep 12 23:05:09.438364 kernel: ACPI: bus type drm_connector registered Sep 12 23:05:09.438376 kernel: fuse: init (API version 7.41) Sep 12 23:05:09.438388 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 23:05:09.438403 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 23:05:09.438415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:05:09.438426 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 23:05:09.438438 systemd[1]: Stopped verity-setup.service. Sep 12 23:05:09.438451 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:05:09.438465 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 23:05:09.438477 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 23:05:09.438498 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 23:05:09.438548 systemd-journald[1208]: Collecting audit messages is disabled. Sep 12 23:05:09.438574 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 23:05:09.438603 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 23:05:09.438620 systemd-journald[1208]: Journal started Sep 12 23:05:09.438646 systemd-journald[1208]: Runtime Journal (/run/log/journal/593ea20dd9e14a34acf1e03fe9822051) is 6M, max 48.2M, 42.2M free. Sep 12 23:05:09.174053 systemd[1]: Queued start job for default target multi-user.target. Sep 12 23:05:09.194982 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 23:05:09.195646 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 23:05:09.442618 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 12 23:05:09.443493 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 23:05:09.445157 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 23:05:09.446698 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:05:09.448205 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 23:05:09.448425 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 23:05:09.449900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:05:09.450108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:05:09.451555 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:05:09.451778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:05:09.453269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:05:09.453491 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:05:09.455063 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 23:05:09.455322 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 23:05:09.456942 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:05:09.457157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:05:09.458664 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:05:09.460108 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:05:09.461772 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 23:05:09.463336 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 23:05:09.475749 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Sep 12 23:05:09.478418 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 23:05:09.481030 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 23:05:09.482168 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 23:05:09.482205 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:05:09.484534 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 23:05:09.496733 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 23:05:09.497918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:05:09.500543 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 23:05:09.504973 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 23:05:09.506301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:05:09.509697 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 23:05:09.511109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:05:09.514726 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:05:09.519251 systemd-journald[1208]: Time spent on flushing to /var/log/journal/593ea20dd9e14a34acf1e03fe9822051 is 26.724ms for 1037 entries. Sep 12 23:05:09.519251 systemd-journald[1208]: System Journal (/var/log/journal/593ea20dd9e14a34acf1e03fe9822051) is 8M, max 195.6M, 187.6M free. Sep 12 23:05:09.571935 systemd-journald[1208]: Received client request to flush runtime journal. 
Sep 12 23:05:09.571995 kernel: loop0: detected capacity change from 0 to 110984 Sep 12 23:05:09.520823 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 23:05:09.524332 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 23:05:09.528443 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 23:05:09.530122 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 23:05:09.538912 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:05:09.545070 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 23:05:09.550245 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 23:05:09.554856 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 23:05:09.559349 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:05:09.574139 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 23:05:09.587617 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 23:05:09.598780 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 23:05:09.600552 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 23:05:09.603248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:05:09.606797 kernel: loop1: detected capacity change from 0 to 221472 Sep 12 23:05:09.635626 kernel: loop2: detected capacity change from 0 to 128016 Sep 12 23:05:09.640452 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Sep 12 23:05:09.640867 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Sep 12 23:05:09.648062 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 23:05:09.698749 kernel: loop3: detected capacity change from 0 to 110984 Sep 12 23:05:09.714622 kernel: loop4: detected capacity change from 0 to 221472 Sep 12 23:05:09.726617 kernel: loop5: detected capacity change from 0 to 128016 Sep 12 23:05:09.736961 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 23:05:09.737673 (sd-merge)[1275]: Merged extensions into '/usr'. Sep 12 23:05:09.742103 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 23:05:09.742119 systemd[1]: Reloading... Sep 12 23:05:09.861678 zram_generator::config[1300]: No configuration found. Sep 12 23:05:10.176762 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 23:05:10.177161 systemd[1]: Reloading finished in 434 ms. Sep 12 23:05:10.198282 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 23:05:10.201621 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 23:05:10.204014 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 23:05:10.217163 systemd[1]: Starting ensure-sysext.service... Sep 12 23:05:10.220107 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:05:10.233529 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... Sep 12 23:05:10.233554 systemd[1]: Reloading... Sep 12 23:05:10.309542 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 23:05:10.311627 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 23:05:10.312791 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 12 23:05:10.315003 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 23:05:10.321337 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 23:05:10.321894 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Sep 12 23:05:10.322007 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Sep 12 23:05:10.333180 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:05:10.334560 systemd-tmpfiles[1339]: Skipping /boot Sep 12 23:05:10.336608 zram_generator::config[1368]: No configuration found. Sep 12 23:05:10.348272 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:05:10.348403 systemd-tmpfiles[1339]: Skipping /boot Sep 12 23:05:10.563234 systemd[1]: Reloading finished in 328 ms. Sep 12 23:05:10.585967 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:05:10.606852 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 23:05:10.628749 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:05:10.631525 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:05:10.637872 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:05:10.642681 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:05:10.650336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:05:10.650527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:05:10.654078 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 12 23:05:10.667518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:05:10.716909 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:05:10.718240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:05:10.718376 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 23:05:10.722111 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 23:05:10.725988 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:05:10.727886 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:05:10.729873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:05:10.730093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:05:10.731733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:05:10.731947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:05:10.737227 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:05:10.737498 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:05:10.744944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:05:10.745207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:05:10.746999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 12 23:05:10.749334 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:05:10.751676 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:05:10.752783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:05:10.752962 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 23:05:10.753126 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:05:10.769670 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:05:10.772733 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:05:10.773025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:05:10.775087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:05:10.775344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:05:10.777319 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:05:10.777852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:05:10.788002 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:05:10.788280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:05:10.790371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:05:10.811970 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 12 23:05:10.816436 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:05:10.822058 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:05:10.824009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:05:10.824186 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 23:05:10.824385 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:05:10.825774 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 23:05:10.834801 systemd[1]: Finished ensure-sysext.service. Sep 12 23:05:10.837983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:05:10.842059 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:05:10.847354 augenrules[1451]: No rules Sep 12 23:05:10.849765 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 23:05:10.852056 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:05:10.852450 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 23:05:10.858481 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:05:10.859126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:05:10.861283 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 23:05:10.863068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:05:10.863342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 12 23:05:10.865191 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:05:10.865458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:05:10.873232 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:05:10.873337 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:05:10.879853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:05:10.885771 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 23:05:10.902917 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 23:05:10.908035 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:05:10.921192 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 23:05:10.931244 systemd-resolved[1411]: Positive Trust Anchors: Sep 12 23:05:10.931265 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:05:10.931295 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:05:10.935149 systemd-udevd[1464]: Using default interface naming scheme 'v255'. Sep 12 23:05:10.935703 systemd-resolved[1411]: Defaulting to hostname 'linux'. Sep 12 23:05:10.937248 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:05:10.938872 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:05:10.953570 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 23:05:10.955923 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:05:10.962381 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:05:10.964185 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:05:10.965668 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:05:10.967214 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:05:10.969441 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 23:05:10.971080 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:05:10.974351 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 23:05:10.976058 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 23:05:10.977732 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:05:10.977764 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:05:10.979022 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:05:10.981331 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:05:10.987257 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:05:10.995723 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 23:05:10.998292 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 23:05:10.999808 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 23:05:11.004445 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:05:11.005947 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 23:05:11.012853 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:05:11.016957 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:05:11.035276 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:05:11.036484 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:05:11.037869 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:05:11.037896 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:05:11.039832 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:05:11.043907 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 12 23:05:11.048653 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:05:11.051226 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:05:11.052731 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:05:11.053963 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 23:05:11.129197 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:05:11.132796 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:05:11.134121 jq[1501]: false Sep 12 23:05:11.136780 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:05:11.140373 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:05:11.152521 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:05:11.154733 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:05:11.155313 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:05:11.156714 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:05:11.158769 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:05:11.161643 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:05:11.163699 extend-filesystems[1502]: Found /dev/vda6 Sep 12 23:05:11.163341 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:05:11.163609 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 23:05:11.169192 extend-filesystems[1502]: Found /dev/vda9 Sep 12 23:05:11.179199 extend-filesystems[1502]: Checking size of /dev/vda9 Sep 12 23:05:11.179634 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:05:11.181521 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Refreshing passwd entry cache Sep 12 23:05:11.181550 oslogin_cache_refresh[1503]: Refreshing passwd entry cache Sep 12 23:05:11.184878 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:05:11.194874 jq[1518]: true Sep 12 23:05:11.206073 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Failure getting users, quitting Sep 12 23:05:11.212961 extend-filesystems[1502]: Resized partition /dev/vda9 Sep 12 23:05:11.216419 oslogin_cache_refresh[1503]: Failure getting users, quitting Sep 12 23:05:11.217327 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 23:05:11.217327 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Refreshing group entry cache Sep 12 23:05:11.216523 oslogin_cache_refresh[1503]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 23:05:11.216635 oslogin_cache_refresh[1503]: Refreshing group entry cache Sep 12 23:05:11.223640 tar[1520]: linux-amd64/helm Sep 12 23:05:11.221497 oslogin_cache_refresh[1503]: Failure getting groups, quitting Sep 12 23:05:11.224133 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Failure getting groups, quitting Sep 12 23:05:11.224133 google_oslogin_nss_cache[1503]: oslogin_cache_refresh[1503]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 23:05:11.221511 oslogin_cache_refresh[1503]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 23:05:11.228038 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 12 23:05:11.228318 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:05:11.232619 extend-filesystems[1543]: resize2fs 1.47.3 (8-Jul-2025) Sep 12 23:05:11.233768 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 23:05:11.234558 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 23:05:11.239645 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 23:05:11.257899 update_engine[1516]: I20250912 23:05:11.257810 1516 main.cc:92] Flatcar Update Engine starting Sep 12 23:05:11.290504 jq[1534]: true Sep 12 23:05:11.316705 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 23:05:11.323379 dbus-daemon[1499]: [system] SELinux support is enabled Sep 12 23:05:11.348578 update_engine[1516]: I20250912 23:05:11.341851 1516 update_check_scheduler.cc:74] Next update check in 5m39s Sep 12 23:05:11.324756 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:05:11.330616 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:05:11.330651 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:05:11.332328 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:05:11.332349 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:05:11.341210 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:05:11.349802 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 23:05:11.350479 systemd-logind[1515]: New seat seat0. 
Sep 12 23:05:11.353866 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 23:05:11.353866 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 23:05:11.353866 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 23:05:11.357954 extend-filesystems[1502]: Resized filesystem in /dev/vda9 Sep 12 23:05:11.356705 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:05:11.357109 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:05:11.365530 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:05:11.391918 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 23:05:11.393363 bash[1565]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:05:11.396259 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:05:11.400915 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 23:05:11.409756 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 23:05:11.412112 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 23:05:11.412865 systemd-networkd[1498]: lo: Link UP Sep 12 23:05:11.412879 systemd-networkd[1498]: lo: Gained carrier Sep 12 23:05:11.416331 systemd-networkd[1498]: Enumeration completed Sep 12 23:05:11.416700 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:05:11.418706 systemd[1]: Reached target network.target - Network. Sep 12 23:05:11.424814 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:05:11.424825 systemd-networkd[1498]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 23:05:11.426445 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:05:11.426516 systemd-networkd[1498]: eth0: Link UP Sep 12 23:05:11.426806 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:05:11.430728 systemd-networkd[1498]: eth0: Gained carrier Sep 12 23:05:11.431700 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 23:05:11.431898 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:05:11.436977 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 23:05:11.520656 systemd-networkd[1498]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 23:05:11.522165 systemd-timesyncd[1457]: Network configuration changed, trying to establish connection. Sep 12 23:05:11.523621 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 23:05:11.525649 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 23:05:11.546553 kernel: ACPI: button: Power Button [PWRF] Sep 12 23:05:11.529700 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 23:05:12.795618 systemd-resolved[1411]: Clock change detected. Flushing caches. Sep 12 23:05:12.795913 systemd-timesyncd[1457]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 23:05:12.795963 systemd-timesyncd[1457]: Initial clock synchronization to Fri 2025-09-12 23:05:12.795564 UTC. 
Sep 12 23:05:12.886481 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 12 23:05:12.887148 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 23:05:12.887307 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 23:05:12.892547 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:05:12.897624 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:05:12.904802 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 23:05:12.975841 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:05:12.979608 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:05:13.000439 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:05:13.009292 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:05:13.010959 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:05:13.072053 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:05:13.192042 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:05:13.206095 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:05:13.233832 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 23:05:13.235767 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 12 23:05:13.280311 kernel: kvm_amd: TSC scaling supported Sep 12 23:05:13.280398 kernel: kvm_amd: Nested Virtualization enabled Sep 12 23:05:13.280414 kernel: kvm_amd: Nested Paging enabled Sep 12 23:05:13.280428 kernel: kvm_amd: LBR virtualization supported Sep 12 23:05:13.282197 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 23:05:13.282235 kernel: kvm_amd: Virtual GIF supported Sep 12 23:05:13.354350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:05:13.375252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:05:13.375517 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:05:13.380715 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:05:13.439538 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 23:05:13.531928 systemd-logind[1515]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 23:05:13.547159 kernel: EDAC MC: Ver: 3.0.0 Sep 12 23:05:13.568908 containerd[1580]: time="2025-09-12T23:05:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 23:05:13.570432 containerd[1580]: time="2025-09-12T23:05:13.570360918Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 23:05:13.582012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 23:05:13.583383 tar[1520]: linux-amd64/LICENSE Sep 12 23:05:13.583923 tar[1520]: linux-amd64/README.md Sep 12 23:05:13.590325 containerd[1580]: time="2025-09-12T23:05:13.590267881Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.281µs" Sep 12 23:05:13.590325 containerd[1580]: time="2025-09-12T23:05:13.590308147Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 23:05:13.590325 containerd[1580]: time="2025-09-12T23:05:13.590329517Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 23:05:13.590563 containerd[1580]: time="2025-09-12T23:05:13.590532277Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 23:05:13.590563 containerd[1580]: time="2025-09-12T23:05:13.590549399Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 23:05:13.590657 containerd[1580]: time="2025-09-12T23:05:13.590578864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 23:05:13.590657 containerd[1580]: time="2025-09-12T23:05:13.590648635Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 23:05:13.590755 containerd[1580]: time="2025-09-12T23:05:13.590659485Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 23:05:13.591168 containerd[1580]: time="2025-09-12T23:05:13.591128765Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 23:05:13.591168 containerd[1580]: time="2025-09-12T23:05:13.591147400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 23:05:13.591168 containerd[1580]: time="2025-09-12T23:05:13.591157810Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 23:05:13.591275 containerd[1580]: time="2025-09-12T23:05:13.591179551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 23:05:13.591323 containerd[1580]: time="2025-09-12T23:05:13.591306719Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 23:05:13.591639 containerd[1580]: time="2025-09-12T23:05:13.591599779Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 23:05:13.591734 containerd[1580]: time="2025-09-12T23:05:13.591668257Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 23:05:13.591734 containerd[1580]: time="2025-09-12T23:05:13.591700247Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 23:05:13.591914 containerd[1580]: time="2025-09-12T23:05:13.591760801Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 23:05:13.592105 containerd[1580]: time="2025-09-12T23:05:13.592089587Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 23:05:13.592178 containerd[1580]: time="2025-09-12T23:05:13.592161332Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 23:05:13.599456 containerd[1580]: time="2025-09-12T23:05:13.599399987Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 23:05:13.599535 containerd[1580]: time="2025-09-12T23:05:13.599480959Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 23:05:13.599535 containerd[1580]: time="2025-09-12T23:05:13.599495867Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 23:05:13.599535 containerd[1580]: time="2025-09-12T23:05:13.599520453Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 23:05:13.599535 containerd[1580]: time="2025-09-12T23:05:13.599532726Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 23:05:13.599750 containerd[1580]: time="2025-09-12T23:05:13.599546792Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 23:05:13.599750 containerd[1580]: time="2025-09-12T23:05:13.599580666Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 23:05:13.599750 containerd[1580]: time="2025-09-12T23:05:13.599604040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 23:05:13.599750 containerd[1580]: time="2025-09-12T23:05:13.599627113Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 23:05:13.599750 containerd[1580]: time="2025-09-12T23:05:13.599655967Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 23:05:13.599750 containerd[1580]: time="2025-09-12T23:05:13.599672468Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 23:05:13.599750 containerd[1580]: time="2025-09-12T23:05:13.599699248Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 23:05:13.600026 containerd[1580]: time="2025-09-12T23:05:13.599897660Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 23:05:13.600026 containerd[1580]: time="2025-09-12T23:05:13.599934660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 23:05:13.600026 containerd[1580]: time="2025-09-12T23:05:13.599950489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 23:05:13.600026 containerd[1580]: time="2025-09-12T23:05:13.599960839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 23:05:13.600026 containerd[1580]: time="2025-09-12T23:05:13.599970266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 23:05:13.600026 containerd[1580]: time="2025-09-12T23:05:13.599986196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 23:05:13.600026 containerd[1580]: time="2025-09-12T23:05:13.600005773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 23:05:13.600026 containerd[1580]: time="2025-09-12T23:05:13.600028736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 23:05:13.600304 containerd[1580]: time="2025-09-12T23:05:13.600042251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 23:05:13.600304 containerd[1580]: time="2025-09-12T23:05:13.600069593Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 23:05:13.600304 containerd[1580]: time="2025-09-12T23:05:13.600094640Z" level=info msg="loading plugin"
id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 23:05:13.600304 containerd[1580]: time="2025-09-12T23:05:13.600228180Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 23:05:13.600304 containerd[1580]: time="2025-09-12T23:05:13.600270700Z" level=info msg="Start snapshots syncer" Sep 12 23:05:13.600304 containerd[1580]: time="2025-09-12T23:05:13.600296518Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 23:05:13.600825 containerd[1580]: time="2025-09-12T23:05:13.600760749Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\"
:true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 23:05:13.601020 containerd[1580]: time="2025-09-12T23:05:13.600830249Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 23:05:13.601020 containerd[1580]: time="2025-09-12T23:05:13.600964291Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 23:05:13.601096 containerd[1580]: time="2025-09-12T23:05:13.601074838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 23:05:13.601126 containerd[1580]: time="2025-09-12T23:05:13.601096539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 23:05:13.601126 containerd[1580]: time="2025-09-12T23:05:13.601106477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 23:05:13.601126 containerd[1580]: time="2025-09-12T23:05:13.601117187Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 23:05:13.601225 containerd[1580]: time="2025-09-12T23:05:13.601131344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 23:05:13.601225 containerd[1580]: time="2025-09-12T23:05:13.601141323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 23:05:13.601225 containerd[1580]: time="2025-09-12T23:05:13.601150911Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Sep 12 23:05:13.601225 containerd[1580]: time="2025-09-12T23:05:13.601183862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 23:05:13.601225 containerd[1580]: time="2025-09-12T23:05:13.601194302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 23:05:13.601225 containerd[1580]: time="2025-09-12T23:05:13.601204030Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 23:05:13.601380 containerd[1580]: time="2025-09-12T23:05:13.601244015Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 23:05:13.601380 containerd[1580]: time="2025-09-12T23:05:13.601258302Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 23:05:13.601380 containerd[1580]: time="2025-09-12T23:05:13.601274031Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 23:05:13.601380 containerd[1580]: time="2025-09-12T23:05:13.601283129Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 23:05:13.601380 containerd[1580]: time="2025-09-12T23:05:13.601290422Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 23:05:13.601380 containerd[1580]: time="2025-09-12T23:05:13.601303587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 23:05:13.601380 containerd[1580]: time="2025-09-12T23:05:13.601313836Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 23:05:13.601801 containerd[1580]: 
time="2025-09-12T23:05:13.601736689Z" level=info msg="runtime interface created" Sep 12 23:05:13.602351 containerd[1580]: time="2025-09-12T23:05:13.602306337Z" level=info msg="created NRI interface" Sep 12 23:05:13.602351 containerd[1580]: time="2025-09-12T23:05:13.602341984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 23:05:13.602457 containerd[1580]: time="2025-09-12T23:05:13.602378092Z" level=info msg="Connect containerd service" Sep 12 23:05:13.602580 containerd[1580]: time="2025-09-12T23:05:13.602538232Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:05:13.604160 containerd[1580]: time="2025-09-12T23:05:13.604099991Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:05:13.611407 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 23:05:13.806972 containerd[1580]: time="2025-09-12T23:05:13.806761203Z" level=info msg="Start subscribing containerd event" Sep 12 23:05:13.806972 containerd[1580]: time="2025-09-12T23:05:13.806923938Z" level=info msg="Start recovering state" Sep 12 23:05:13.807155 containerd[1580]: time="2025-09-12T23:05:13.807134083Z" level=info msg="Start event monitor" Sep 12 23:05:13.807291 containerd[1580]: time="2025-09-12T23:05:13.807248908Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:05:13.807291 containerd[1580]: time="2025-09-12T23:05:13.807265609Z" level=info msg="Start streaming server" Sep 12 23:05:13.807365 containerd[1580]: time="2025-09-12T23:05:13.807152287Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:05:13.807391 containerd[1580]: time="2025-09-12T23:05:13.807379072Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 12 23:05:13.807440 containerd[1580]: time="2025-09-12T23:05:13.807285887Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 23:05:13.807534 containerd[1580]: time="2025-09-12T23:05:13.807504597Z" level=info msg="runtime interface starting up..." Sep 12 23:05:13.807534 containerd[1580]: time="2025-09-12T23:05:13.807520237Z" level=info msg="starting plugins..." Sep 12 23:05:13.807612 containerd[1580]: time="2025-09-12T23:05:13.807548860Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 23:05:13.807771 containerd[1580]: time="2025-09-12T23:05:13.807753775Z" level=info msg="containerd successfully booted in 0.239433s" Sep 12 23:05:13.808058 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:05:14.244129 systemd-networkd[1498]: eth0: Gained IPv6LL Sep 12 23:05:14.247541 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:05:14.249330 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:05:14.252119 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 23:05:14.254663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:05:14.256989 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:05:14.292287 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:05:14.325280 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 23:05:14.325622 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 23:05:14.327629 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 23:05:15.155369 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 12 23:05:15.158155 systemd[1]: Started sshd@0-10.0.0.126:22-10.0.0.1:32788.service - OpenSSH per-connection server daemon (10.0.0.1:32788). Sep 12 23:05:15.261880 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 32788 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:05:15.264535 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:05:15.273187 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:05:15.276153 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:05:15.287921 systemd-logind[1515]: New session 1 of user core. Sep 12 23:05:15.299532 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:05:15.305233 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:05:15.403788 (systemd)[1678]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:05:15.406989 systemd-logind[1515]: New session c1 of user core. Sep 12 23:05:15.595966 systemd[1678]: Queued start job for default target default.target. Sep 12 23:05:15.610328 systemd[1678]: Created slice app.slice - User Application Slice. Sep 12 23:05:15.610942 systemd[1678]: Reached target paths.target - Paths. Sep 12 23:05:15.610998 systemd[1678]: Reached target timers.target - Timers. Sep 12 23:05:15.613157 systemd[1678]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:05:15.669521 systemd[1678]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:05:15.669686 systemd[1678]: Reached target sockets.target - Sockets. Sep 12 23:05:15.669756 systemd[1678]: Reached target basic.target - Basic System. Sep 12 23:05:15.669810 systemd[1678]: Reached target default.target - Main User Target. Sep 12 23:05:15.669866 systemd[1678]: Startup finished in 255ms. 
Sep 12 23:05:15.670327 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:05:15.682350 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:05:15.753847 systemd[1]: Started sshd@1-10.0.0.126:22-10.0.0.1:32802.service - OpenSSH per-connection server daemon (10.0.0.1:32802). Sep 12 23:05:15.901048 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 32802 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:05:15.903012 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:05:15.911212 systemd-logind[1515]: New session 2 of user core. Sep 12 23:05:15.922443 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:05:16.023519 sshd[1692]: Connection closed by 10.0.0.1 port 32802 Sep 12 23:05:16.023952 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Sep 12 23:05:16.162472 systemd[1]: sshd@1-10.0.0.126:22-10.0.0.1:32802.service: Deactivated successfully. Sep 12 23:05:16.164755 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:05:16.165742 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:05:16.169879 systemd[1]: Started sshd@2-10.0.0.126:22-10.0.0.1:32814.service - OpenSSH per-connection server daemon (10.0.0.1:32814). Sep 12 23:05:16.173230 systemd-logind[1515]: Removed session 2. Sep 12 23:05:16.297100 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 32814 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:05:16.299010 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:05:16.305050 systemd-logind[1515]: New session 3 of user core. Sep 12 23:05:16.319410 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 12 23:05:16.458645 sshd[1701]: Connection closed by 10.0.0.1 port 32814 Sep 12 23:05:16.459384 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Sep 12 23:05:16.463824 systemd[1]: sshd@2-10.0.0.126:22-10.0.0.1:32814.service: Deactivated successfully. Sep 12 23:05:16.465979 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:05:16.466878 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit. Sep 12 23:05:16.468651 systemd-logind[1515]: Removed session 3. Sep 12 23:05:16.537197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:05:16.539337 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:05:16.541994 systemd[1]: Startup finished in 3.865s (kernel) + 7.839s (initrd) + 6.855s (userspace) = 18.560s. Sep 12 23:05:16.543642 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:05:17.620432 kubelet[1711]: E0912 23:05:17.620336 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:05:17.624044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:05:17.624236 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:05:17.624685 systemd[1]: kubelet.service: Consumed 2.917s CPU time, 265.8M memory peak. Sep 12 23:05:26.485427 systemd[1]: Started sshd@3-10.0.0.126:22-10.0.0.1:50034.service - OpenSSH per-connection server daemon (10.0.0.1:50034). 
Sep 12 23:05:26.551777 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 50034 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:05:26.553657 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:05:26.559163 systemd-logind[1515]: New session 4 of user core. Sep 12 23:05:26.571206 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:05:26.627471 sshd[1728]: Connection closed by 10.0.0.1 port 50034 Sep 12 23:05:26.628006 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Sep 12 23:05:26.642553 systemd[1]: sshd@3-10.0.0.126:22-10.0.0.1:50034.service: Deactivated successfully. Sep 12 23:05:26.644929 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:05:26.645920 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:05:26.649487 systemd[1]: Started sshd@4-10.0.0.126:22-10.0.0.1:50046.service - OpenSSH per-connection server daemon (10.0.0.1:50046). Sep 12 23:05:26.650515 systemd-logind[1515]: Removed session 4. Sep 12 23:05:26.720624 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 50046 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:05:26.722798 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:05:26.728662 systemd-logind[1515]: New session 5 of user core. Sep 12 23:05:26.736032 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:05:26.788030 sshd[1737]: Connection closed by 10.0.0.1 port 50046 Sep 12 23:05:26.788448 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Sep 12 23:05:26.798091 systemd[1]: sshd@4-10.0.0.126:22-10.0.0.1:50046.service: Deactivated successfully. Sep 12 23:05:26.800133 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:05:26.800896 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit. 
Sep 12 23:05:26.803983 systemd[1]: Started sshd@5-10.0.0.126:22-10.0.0.1:50062.service - OpenSSH per-connection server daemon (10.0.0.1:50062). Sep 12 23:05:26.804825 systemd-logind[1515]: Removed session 5. Sep 12 23:05:26.874368 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 50062 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:05:26.876147 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:05:26.881022 systemd-logind[1515]: New session 6 of user core. Sep 12 23:05:26.897081 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:05:26.953380 sshd[1746]: Connection closed by 10.0.0.1 port 50062 Sep 12 23:05:26.953741 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Sep 12 23:05:26.968898 systemd[1]: sshd@5-10.0.0.126:22-10.0.0.1:50062.service: Deactivated successfully. Sep 12 23:05:26.971190 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:05:26.972089 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:05:26.975421 systemd[1]: Started sshd@6-10.0.0.126:22-10.0.0.1:50066.service - OpenSSH per-connection server daemon (10.0.0.1:50066). Sep 12 23:05:26.976106 systemd-logind[1515]: Removed session 6. Sep 12 23:05:27.032608 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 50066 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:05:27.034930 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:05:27.039802 systemd-logind[1515]: New session 7 of user core. Sep 12 23:05:27.046992 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 23:05:27.108644 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:05:27.108997 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:05:27.125125 sudo[1756]: pam_unix(sudo:session): session closed for user root Sep 12 23:05:27.127420 sshd[1755]: Connection closed by 10.0.0.1 port 50066 Sep 12 23:05:27.128087 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Sep 12 23:05:27.143233 systemd[1]: sshd@6-10.0.0.126:22-10.0.0.1:50066.service: Deactivated successfully. Sep 12 23:05:27.145814 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:05:27.146769 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:05:27.150507 systemd[1]: Started sshd@7-10.0.0.126:22-10.0.0.1:50074.service - OpenSSH per-connection server daemon (10.0.0.1:50074). Sep 12 23:05:27.151615 systemd-logind[1515]: Removed session 7. Sep 12 23:05:27.228258 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 50074 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:05:27.230128 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:05:27.235734 systemd-logind[1515]: New session 8 of user core. Sep 12 23:05:27.246221 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 23:05:27.304267 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:05:27.304587 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:05:27.313530 sudo[1767]: pam_unix(sudo:session): session closed for user root Sep 12 23:05:27.320265 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 23:05:27.320605 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:05:27.331304 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 23:05:27.389347 augenrules[1789]: No rules Sep 12 23:05:27.391148 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:05:27.391439 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 23:05:27.392732 sudo[1766]: pam_unix(sudo:session): session closed for user root Sep 12 23:05:27.394823 sshd[1765]: Connection closed by 10.0.0.1 port 50074 Sep 12 23:05:27.395343 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Sep 12 23:05:27.406793 systemd[1]: sshd@7-10.0.0.126:22-10.0.0.1:50074.service: Deactivated successfully. Sep 12 23:05:27.408767 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:05:27.409741 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:05:27.413469 systemd[1]: Started sshd@8-10.0.0.126:22-10.0.0.1:50088.service - OpenSSH per-connection server daemon (10.0.0.1:50088). Sep 12 23:05:27.414217 systemd-logind[1515]: Removed session 8. Sep 12 23:05:27.479186 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 50088 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:05:27.481155 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:05:27.486496 systemd-logind[1515]: New session 9 of user core. 
Sep 12 23:05:27.496044 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 23:05:27.551025 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:05:27.551328 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:05:27.874743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:05:27.876514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:05:28.182321 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 23:05:28.188609 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:05:28.424560 dockerd[1826]: time="2025-09-12T23:05:28.424502656Z" level=info msg="Starting up" Sep 12 23:05:28.425412 dockerd[1826]: time="2025-09-12T23:05:28.425378339Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 23:05:28.434666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:05:28.439667 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:05:28.439995 dockerd[1826]: time="2025-09-12T23:05:28.439911273Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 23:05:28.820915 kubelet[1849]: E0912 23:05:28.820756 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:05:28.828485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:05:28.828725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:05:28.829812 systemd[1]: kubelet.service: Consumed 562ms CPU time, 111M memory peak. Sep 12 23:05:28.844169 dockerd[1826]: time="2025-09-12T23:05:28.844112454Z" level=info msg="Loading containers: start." Sep 12 23:05:28.854887 kernel: Initializing XFRM netlink socket Sep 12 23:05:29.530447 systemd-networkd[1498]: docker0: Link UP Sep 12 23:05:29.535441 dockerd[1826]: time="2025-09-12T23:05:29.535381130Z" level=info msg="Loading containers: done." 
Sep 12 23:05:29.561542 dockerd[1826]: time="2025-09-12T23:05:29.561452914Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:05:29.561774 dockerd[1826]: time="2025-09-12T23:05:29.561601893Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 23:05:29.561774 dockerd[1826]: time="2025-09-12T23:05:29.561740183Z" level=info msg="Initializing buildkit" Sep 12 23:05:29.595629 dockerd[1826]: time="2025-09-12T23:05:29.595553445Z" level=info msg="Completed buildkit initialization" Sep 12 23:05:29.602152 dockerd[1826]: time="2025-09-12T23:05:29.602063864Z" level=info msg="Daemon has completed initialization" Sep 12 23:05:29.602317 dockerd[1826]: time="2025-09-12T23:05:29.602193758Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:05:29.602527 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:05:31.170810 containerd[1580]: time="2025-09-12T23:05:31.170632620Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 23:05:33.327141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1703321254.mount: Deactivated successfully. 
Sep 12 23:05:35.254467 containerd[1580]: time="2025-09-12T23:05:35.254341602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:35.255428 containerd[1580]: time="2025-09-12T23:05:35.255362467Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124"
Sep 12 23:05:35.257300 containerd[1580]: time="2025-09-12T23:05:35.257256068Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:35.262829 containerd[1580]: time="2025-09-12T23:05:35.262754159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:35.264243 containerd[1580]: time="2025-09-12T23:05:35.264195642Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 4.093437457s"
Sep 12 23:05:35.264337 containerd[1580]: time="2025-09-12T23:05:35.264256316Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 12 23:05:35.265477 containerd[1580]: time="2025-09-12T23:05:35.265393408Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 12 23:05:38.036552 containerd[1580]: time="2025-09-12T23:05:38.036392030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:38.038583 containerd[1580]: time="2025-09-12T23:05:38.038498771Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632"
Sep 12 23:05:38.045025 containerd[1580]: time="2025-09-12T23:05:38.044971670Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:38.052846 containerd[1580]: time="2025-09-12T23:05:38.052758493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:38.054087 containerd[1580]: time="2025-09-12T23:05:38.053949096Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 2.788475928s"
Sep 12 23:05:38.054087 containerd[1580]: time="2025-09-12T23:05:38.054014498Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 12 23:05:38.055045 containerd[1580]: time="2025-09-12T23:05:38.054662223Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 12 23:05:39.079473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 23:05:39.081529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:05:39.325393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:05:39.342200 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:05:39.381755 kubelet[2129]: E0912 23:05:39.381659 2129 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:05:39.386530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:05:39.386782 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:05:39.387318 systemd[1]: kubelet.service: Consumed 252ms CPU time, 110.8M memory peak.
Sep 12 23:05:42.809058 containerd[1580]: time="2025-09-12T23:05:42.808972682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:42.826529 containerd[1580]: time="2025-09-12T23:05:42.826453525Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698"
Sep 12 23:05:42.847819 containerd[1580]: time="2025-09-12T23:05:42.847757267Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:42.869359 containerd[1580]: time="2025-09-12T23:05:42.869285049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:42.870369 containerd[1580]: time="2025-09-12T23:05:42.870331752Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 4.815643581s"
Sep 12 23:05:42.870369 containerd[1580]: time="2025-09-12T23:05:42.870369473Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 12 23:05:42.870988 containerd[1580]: time="2025-09-12T23:05:42.870967935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 12 23:05:45.415905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890366645.mount: Deactivated successfully.
Sep 12 23:05:47.478105 containerd[1580]: time="2025-09-12T23:05:47.478011305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:47.502200 containerd[1580]: time="2025-09-12T23:05:47.502113809Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252"
Sep 12 23:05:47.547447 containerd[1580]: time="2025-09-12T23:05:47.547353623Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:47.589687 containerd[1580]: time="2025-09-12T23:05:47.589622737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:47.590449 containerd[1580]: time="2025-09-12T23:05:47.590350988Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 4.719352295s"
Sep 12 23:05:47.590449 containerd[1580]: time="2025-09-12T23:05:47.590401766Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 12 23:05:47.590987 containerd[1580]: time="2025-09-12T23:05:47.590944060Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 23:05:48.947140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount345466499.mount: Deactivated successfully.
Sep 12 23:05:49.637366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 23:05:49.639188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:05:49.889981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:05:49.910626 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:05:50.071982 kubelet[2194]: E0912 23:05:50.071919 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:05:50.076002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:05:50.076215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:05:50.076621 systemd[1]: kubelet.service: Consumed 389ms CPU time, 109.3M memory peak.
Sep 12 23:05:51.204732 containerd[1580]: time="2025-09-12T23:05:51.204664619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:51.205802 containerd[1580]: time="2025-09-12T23:05:51.205708406Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 23:05:51.207270 containerd[1580]: time="2025-09-12T23:05:51.207240818Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:51.210492 containerd[1580]: time="2025-09-12T23:05:51.210440359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:51.211550 containerd[1580]: time="2025-09-12T23:05:51.211513493Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.62052646s"
Sep 12 23:05:51.211550 containerd[1580]: time="2025-09-12T23:05:51.211546005Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 23:05:51.212530 containerd[1580]: time="2025-09-12T23:05:51.212469522Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 23:05:51.791150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217896543.mount: Deactivated successfully.
Sep 12 23:05:51.798439 containerd[1580]: time="2025-09-12T23:05:51.798375026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:05:51.799187 containerd[1580]: time="2025-09-12T23:05:51.799155700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 23:05:51.800594 containerd[1580]: time="2025-09-12T23:05:51.800565026Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:05:51.803052 containerd[1580]: time="2025-09-12T23:05:51.802991468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:05:51.803548 containerd[1580]: time="2025-09-12T23:05:51.803522014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 591.004559ms"
Sep 12 23:05:51.803590 containerd[1580]: time="2025-09-12T23:05:51.803548234Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 23:05:51.804070 containerd[1580]: time="2025-09-12T23:05:51.804032530Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 23:05:52.418562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1394896109.mount: Deactivated successfully.
Sep 12 23:05:54.513527 containerd[1580]: time="2025-09-12T23:05:54.513447895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:54.514515 containerd[1580]: time="2025-09-12T23:05:54.514489270Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 12 23:05:54.515934 containerd[1580]: time="2025-09-12T23:05:54.515891233Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:54.518585 containerd[1580]: time="2025-09-12T23:05:54.518557628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:05:54.520040 containerd[1580]: time="2025-09-12T23:05:54.519970351Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.715882205s"
Sep 12 23:05:54.520040 containerd[1580]: time="2025-09-12T23:05:54.520007221Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 12 23:05:57.134532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:05:57.134704 systemd[1]: kubelet.service: Consumed 389ms CPU time, 109.3M memory peak.
Sep 12 23:05:57.137118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:05:57.168326 systemd[1]: Reload requested from client PID 2307 ('systemctl') (unit session-9.scope)...
Sep 12 23:05:57.168355 systemd[1]: Reloading...
Sep 12 23:05:57.260897 zram_generator::config[2352]: No configuration found.
Sep 12 23:05:57.653467 systemd[1]: Reloading finished in 484 ms.
Sep 12 23:05:57.741151 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 23:05:57.741274 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 23:05:57.741699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:05:57.741771 systemd[1]: kubelet.service: Consumed 181ms CPU time, 98.3M memory peak.
Sep 12 23:05:57.743821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:05:58.010453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:05:58.012299 update_engine[1516]: I20250912 23:05:58.011894 1516 update_attempter.cc:509] Updating boot flags...
Sep 12 23:05:58.017601 (kubelet)[2397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 23:05:58.065170 kubelet[2397]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:05:58.065170 kubelet[2397]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 23:05:58.065170 kubelet[2397]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:05:58.065641 kubelet[2397]: I0912 23:05:58.065219 2397 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 23:05:59.073533 kubelet[2397]: I0912 23:05:59.073447 2397 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 23:05:59.073533 kubelet[2397]: I0912 23:05:59.073496 2397 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 23:05:59.074034 kubelet[2397]: I0912 23:05:59.073757 2397 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 23:05:59.103130 kubelet[2397]: E0912 23:05:59.103070 2397 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:05:59.149434 kubelet[2397]: I0912 23:05:59.149356 2397 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 23:05:59.158645 kubelet[2397]: I0912 23:05:59.158589 2397 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 23:05:59.166525 kubelet[2397]: I0912 23:05:59.166483 2397 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 23:05:59.167476 kubelet[2397]: I0912 23:05:59.167438 2397 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 23:05:59.167736 kubelet[2397]: I0912 23:05:59.167664 2397 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 23:05:59.168001 kubelet[2397]: I0912 23:05:59.167712 2397 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 23:05:59.168175 kubelet[2397]: I0912 23:05:59.168026 2397 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 23:05:59.168175 kubelet[2397]: I0912 23:05:59.168040 2397 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 23:05:59.168267 kubelet[2397]: I0912 23:05:59.168241 2397 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:05:59.171563 kubelet[2397]: I0912 23:05:59.171525 2397 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 23:05:59.171563 kubelet[2397]: I0912 23:05:59.171562 2397 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 23:05:59.171659 kubelet[2397]: I0912 23:05:59.171624 2397 kubelet.go:314] "Adding apiserver pod source"
Sep 12 23:05:59.171659 kubelet[2397]: I0912 23:05:59.171658 2397 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 23:05:59.175802 kubelet[2397]: I0912 23:05:59.175766 2397 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 23:05:59.176369 kubelet[2397]: I0912 23:05:59.176340 2397 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 23:05:59.176423 kubelet[2397]: W0912 23:05:59.176349 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Sep 12 23:05:59.176446 kubelet[2397]: E0912 23:05:59.176422 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:05:59.176608 kubelet[2397]: W0912 23:05:59.176557 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Sep 12 23:05:59.176677 kubelet[2397]: E0912 23:05:59.176610 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:05:59.177485 kubelet[2397]: W0912 23:05:59.177448 2397 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 23:05:59.180191 kubelet[2397]: I0912 23:05:59.180161 2397 server.go:1274] "Started kubelet"
Sep 12 23:05:59.180881 kubelet[2397]: I0912 23:05:59.180816 2397 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 23:05:59.181744 kubelet[2397]: I0912 23:05:59.181483 2397 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 23:05:59.181744 kubelet[2397]: I0912 23:05:59.181574 2397 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 23:05:59.182727 kubelet[2397]: I0912 23:05:59.182694 2397 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 23:05:59.182997 kubelet[2397]: I0912 23:05:59.182956 2397 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 23:05:59.185834 kubelet[2397]: I0912 23:05:59.185415 2397 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 23:05:59.187831 kubelet[2397]: I0912 23:05:59.187811 2397 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 23:05:59.187997 kubelet[2397]: I0912 23:05:59.187982 2397 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 23:05:59.188054 kubelet[2397]: I0912 23:05:59.188042 2397 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 23:05:59.189556 kubelet[2397]: W0912 23:05:59.188399 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Sep 12 23:05:59.189556 kubelet[2397]: E0912 23:05:59.188451 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:05:59.189556 kubelet[2397]: E0912 23:05:59.189342 2397 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 23:05:59.189987 kubelet[2397]: E0912 23:05:59.189953 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:05:59.190157 kubelet[2397]: E0912 23:05:59.190040 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="200ms"
Sep 12 23:05:59.190200 kubelet[2397]: I0912 23:05:59.190186 2397 factory.go:221] Registration of the containerd container factory successfully
Sep 12 23:05:59.190200 kubelet[2397]: I0912 23:05:59.190196 2397 factory.go:221] Registration of the systemd container factory successfully
Sep 12 23:05:59.190261 kubelet[2397]: I0912 23:05:59.190253 2397 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 23:05:59.251199 kubelet[2397]: E0912 23:05:59.248611 2397 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.126:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.126:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864ab88ee1d1c1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 23:05:59.180123163 +0000 UTC m=+1.156191539,LastTimestamp:2025-09-12 23:05:59.180123163 +0000 UTC m=+1.156191539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 23:05:59.263633 kubelet[2397]: I0912 23:05:59.263549 2397 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 23:05:59.264926 kubelet[2397]: I0912 23:05:59.264893 2397 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 12 23:05:59.264926 kubelet[2397]: I0912 23:05:59.264910 2397 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 12 23:05:59.264926 kubelet[2397]: I0912 23:05:59.264933 2397 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:05:59.265172 kubelet[2397]: I0912 23:05:59.265010 2397 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 23:05:59.265172 kubelet[2397]: I0912 23:05:59.265041 2397 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 12 23:05:59.265172 kubelet[2397]: I0912 23:05:59.265071 2397 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 12 23:05:59.265172 kubelet[2397]: E0912 23:05:59.265111 2397 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 23:05:59.290329 kubelet[2397]: E0912 23:05:59.290276 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:05:59.291291 kubelet[2397]: W0912 23:05:59.291216 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused
Sep 12 23:05:59.291352 kubelet[2397]: E0912 23:05:59.291297 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:05:59.365994 kubelet[2397]: E0912 23:05:59.365777 2397 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 23:05:59.390486 kubelet[2397]: E0912 23:05:59.390408 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:05:59.390840 kubelet[2397]: E0912 23:05:59.390767 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="400ms"
Sep 12 23:05:59.491215 kubelet[2397]: E0912 23:05:59.491144 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:05:59.566484 kubelet[2397]: E0912 23:05:59.566418 2397 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 23:05:59.591338 kubelet[2397]: E0912 23:05:59.591284 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:05:59.663343 kubelet[2397]: I0912 23:05:59.663127 2397 policy_none.go:49] "None policy: Start"
Sep 12 23:05:59.664826 kubelet[2397]: I0912 23:05:59.664780 2397 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 12 23:05:59.665465 kubelet[2397]: I0912 23:05:59.665428 2397 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 23:05:59.676943 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 23:05:59.691700 kubelet[2397]: E0912 23:05:59.691658 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:05:59.693101 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 23:05:59.697909 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 23:05:59.712667 kubelet[2397]: I0912 23:05:59.712615 2397 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 23:05:59.713107 kubelet[2397]: I0912 23:05:59.713031 2397 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 23:05:59.713896 kubelet[2397]: I0912 23:05:59.713071 2397 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 23:05:59.713896 kubelet[2397]: I0912 23:05:59.713825 2397 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 23:05:59.716505 kubelet[2397]: E0912 23:05:59.716457 2397 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 23:05:59.792585 kubelet[2397]: E0912 23:05:59.792510 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="800ms"
Sep 12 23:05:59.816030 kubelet[2397]: I0912 23:05:59.815955 2397 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 12 23:05:59.816505 kubelet[2397]: E0912 23:05:59.816444 2397 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost"
Sep 12 23:05:59.978904 systemd[1]: Created slice kubepods-burstable-podcad4274a07678cc181b7a6e957dd4e31.slice - libcontainer container kubepods-burstable-podcad4274a07678cc181b7a6e957dd4e31.slice.
Sep 12 23:05:59.991696 kubelet[2397]: I0912 23:05:59.991572 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cad4274a07678cc181b7a6e957dd4e31-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cad4274a07678cc181b7a6e957dd4e31\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:05:59.991696 kubelet[2397]: I0912 23:05:59.991634 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cad4274a07678cc181b7a6e957dd4e31-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cad4274a07678cc181b7a6e957dd4e31\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:05:59.991696 kubelet[2397]: I0912 23:05:59.991658 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cad4274a07678cc181b7a6e957dd4e31-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cad4274a07678cc181b7a6e957dd4e31\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:05:59.991696 kubelet[2397]: I0912 23:05:59.991673 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:05:59.992068 kubelet[2397]: I0912 23:05:59.991715 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:05:59.992068 kubelet[2397]: I0912 23:05:59.991766 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:05:59.992068 kubelet[2397]: I0912 23:05:59.991792 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:05:59.992068 kubelet[2397]: I0912 23:05:59.991812 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:05:59.992068 kubelet[2397]: I0912 23:05:59.991831 2397 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 23:06:00.001225 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice.
Sep 12 23:06:00.011259 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 12 23:06:00.018045 kubelet[2397]: I0912 23:06:00.017985 2397 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:06:00.018442 kubelet[2397]: E0912 23:06:00.018399 2397 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Sep 12 23:06:00.298956 kubelet[2397]: E0912 23:06:00.298744 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:00.299636 containerd[1580]: time="2025-09-12T23:06:00.299594701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cad4274a07678cc181b7a6e957dd4e31,Namespace:kube-system,Attempt:0,}" Sep 12 23:06:00.305415 kubelet[2397]: E0912 23:06:00.305384 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:00.306124 containerd[1580]: time="2025-09-12T23:06:00.306071334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 23:06:00.314371 kubelet[2397]: E0912 23:06:00.314342 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:00.314968 containerd[1580]: time="2025-09-12T23:06:00.314901941Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 23:06:00.384880 containerd[1580]: time="2025-09-12T23:06:00.384196334Z" level=info msg="connecting to shim 76b366c7795325dec781d9f5e324bd8a59bb459276f50f4be802cf134c9460b6" address="unix:///run/containerd/s/a8d325e15f4cbb2ab45f4936e9fc91fedfcaa04d2eb54121efef42c1f94d45d4" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:00.384880 containerd[1580]: time="2025-09-12T23:06:00.384321992Z" level=info msg="connecting to shim 068d62248736be128f0daa453f60aa4f6c867d23e20ef47cd9868220e06cf850" address="unix:///run/containerd/s/5c7ddd7c1fa677ed199c7e9896e3a6bbec9d60913ff8f3614361d05ea6e3e882" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:00.420576 kubelet[2397]: I0912 23:06:00.420543 2397 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:06:00.420927 kubelet[2397]: E0912 23:06:00.420905 2397 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Sep 12 23:06:00.421271 containerd[1580]: time="2025-09-12T23:06:00.421228398Z" level=info msg="connecting to shim 8a67e454bb44d382a32445116a454651bff47449ed1db5cf11a3741a27f16566" address="unix:///run/containerd/s/ba7c13b6063ee5487f1fc4d3c48343b92cf65a93c130b3472ac64a4aa5e449bf" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:00.482054 systemd[1]: Started cri-containerd-068d62248736be128f0daa453f60aa4f6c867d23e20ef47cd9868220e06cf850.scope - libcontainer container 068d62248736be128f0daa453f60aa4f6c867d23e20ef47cd9868220e06cf850. Sep 12 23:06:00.484744 systemd[1]: Started cri-containerd-76b366c7795325dec781d9f5e324bd8a59bb459276f50f4be802cf134c9460b6.scope - libcontainer container 76b366c7795325dec781d9f5e324bd8a59bb459276f50f4be802cf134c9460b6. 
Sep 12 23:06:00.487749 kubelet[2397]: W0912 23:06:00.487685 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Sep 12 23:06:00.487913 kubelet[2397]: E0912 23:06:00.487893 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:06:00.538661 kubelet[2397]: W0912 23:06:00.538567 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Sep 12 23:06:00.538984 kubelet[2397]: E0912 23:06:00.538960 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:06:00.555222 systemd[1]: Started cri-containerd-8a67e454bb44d382a32445116a454651bff47449ed1db5cf11a3741a27f16566.scope - libcontainer container 8a67e454bb44d382a32445116a454651bff47449ed1db5cf11a3741a27f16566. 
Sep 12 23:06:00.593305 kubelet[2397]: E0912 23:06:00.593256 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="1.6s" Sep 12 23:06:00.655914 kubelet[2397]: W0912 23:06:00.651225 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Sep 12 23:06:00.656470 kubelet[2397]: E0912 23:06:00.652172 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:06:00.712099 kubelet[2397]: W0912 23:06:00.712035 2397 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Sep 12 23:06:00.712224 kubelet[2397]: E0912 23:06:00.712105 2397 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:06:00.773412 containerd[1580]: time="2025-09-12T23:06:00.773342222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cad4274a07678cc181b7a6e957dd4e31,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"76b366c7795325dec781d9f5e324bd8a59bb459276f50f4be802cf134c9460b6\"" Sep 12 23:06:00.775237 kubelet[2397]: E0912 23:06:00.775189 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:00.777602 containerd[1580]: time="2025-09-12T23:06:00.777567806Z" level=info msg="CreateContainer within sandbox \"76b366c7795325dec781d9f5e324bd8a59bb459276f50f4be802cf134c9460b6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:06:00.779699 containerd[1580]: time="2025-09-12T23:06:00.779653933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"068d62248736be128f0daa453f60aa4f6c867d23e20ef47cd9868220e06cf850\"" Sep 12 23:06:00.780255 kubelet[2397]: E0912 23:06:00.780220 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:00.781812 containerd[1580]: time="2025-09-12T23:06:00.781769023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a67e454bb44d382a32445116a454651bff47449ed1db5cf11a3741a27f16566\"" Sep 12 23:06:00.782697 containerd[1580]: time="2025-09-12T23:06:00.782656777Z" level=info msg="CreateContainer within sandbox \"068d62248736be128f0daa453f60aa4f6c867d23e20ef47cd9868220e06cf850\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:06:00.791586 containerd[1580]: time="2025-09-12T23:06:00.791521940Z" level=info msg="Container b0353aa6d17f3fa05daf9fbf852c40ab226ff1263e8210d5bcc53da029817b6c: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:00.794703 kubelet[2397]: E0912 
23:06:00.794677 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:00.796999 containerd[1580]: time="2025-09-12T23:06:00.796957500Z" level=info msg="CreateContainer within sandbox \"8a67e454bb44d382a32445116a454651bff47449ed1db5cf11a3741a27f16566\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:06:00.808730 containerd[1580]: time="2025-09-12T23:06:00.808597786Z" level=info msg="CreateContainer within sandbox \"76b366c7795325dec781d9f5e324bd8a59bb459276f50f4be802cf134c9460b6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b0353aa6d17f3fa05daf9fbf852c40ab226ff1263e8210d5bcc53da029817b6c\"" Sep 12 23:06:00.809654 containerd[1580]: time="2025-09-12T23:06:00.809604094Z" level=info msg="StartContainer for \"b0353aa6d17f3fa05daf9fbf852c40ab226ff1263e8210d5bcc53da029817b6c\"" Sep 12 23:06:00.809749 containerd[1580]: time="2025-09-12T23:06:00.809626547Z" level=info msg="Container 64612be51635ad5bebdcc5b1ec26cc809f299babf51f81271a9af73a118eb1d4: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:00.810954 containerd[1580]: time="2025-09-12T23:06:00.810924648Z" level=info msg="connecting to shim b0353aa6d17f3fa05daf9fbf852c40ab226ff1263e8210d5bcc53da029817b6c" address="unix:///run/containerd/s/a8d325e15f4cbb2ab45f4936e9fc91fedfcaa04d2eb54121efef42c1f94d45d4" protocol=ttrpc version=3 Sep 12 23:06:00.816584 containerd[1580]: time="2025-09-12T23:06:00.816548283Z" level=info msg="Container 8a3167bffb50da191073db5e5d787fc622af9563a5f4af7f29555507bf50bfb8: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:00.840028 systemd[1]: Started cri-containerd-b0353aa6d17f3fa05daf9fbf852c40ab226ff1263e8210d5bcc53da029817b6c.scope - libcontainer container b0353aa6d17f3fa05daf9fbf852c40ab226ff1263e8210d5bcc53da029817b6c. 
Sep 12 23:06:01.137657 containerd[1580]: time="2025-09-12T23:06:01.137231904Z" level=info msg="StartContainer for \"b0353aa6d17f3fa05daf9fbf852c40ab226ff1263e8210d5bcc53da029817b6c\" returns successfully" Sep 12 23:06:01.156561 containerd[1580]: time="2025-09-12T23:06:01.156486156Z" level=info msg="CreateContainer within sandbox \"068d62248736be128f0daa453f60aa4f6c867d23e20ef47cd9868220e06cf850\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"64612be51635ad5bebdcc5b1ec26cc809f299babf51f81271a9af73a118eb1d4\"" Sep 12 23:06:01.159184 containerd[1580]: time="2025-09-12T23:06:01.157375712Z" level=info msg="StartContainer for \"64612be51635ad5bebdcc5b1ec26cc809f299babf51f81271a9af73a118eb1d4\"" Sep 12 23:06:01.159184 containerd[1580]: time="2025-09-12T23:06:01.158121696Z" level=info msg="CreateContainer within sandbox \"8a67e454bb44d382a32445116a454651bff47449ed1db5cf11a3741a27f16566\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8a3167bffb50da191073db5e5d787fc622af9563a5f4af7f29555507bf50bfb8\"" Sep 12 23:06:01.159184 containerd[1580]: time="2025-09-12T23:06:01.158704130Z" level=info msg="connecting to shim 64612be51635ad5bebdcc5b1ec26cc809f299babf51f81271a9af73a118eb1d4" address="unix:///run/containerd/s/5c7ddd7c1fa677ed199c7e9896e3a6bbec9d60913ff8f3614361d05ea6e3e882" protocol=ttrpc version=3 Sep 12 23:06:01.159381 containerd[1580]: time="2025-09-12T23:06:01.159330467Z" level=info msg="StartContainer for \"8a3167bffb50da191073db5e5d787fc622af9563a5f4af7f29555507bf50bfb8\"" Sep 12 23:06:01.161874 containerd[1580]: time="2025-09-12T23:06:01.161140337Z" level=info msg="connecting to shim 8a3167bffb50da191073db5e5d787fc622af9563a5f4af7f29555507bf50bfb8" address="unix:///run/containerd/s/ba7c13b6063ee5487f1fc4d3c48343b92cf65a93c130b3472ac64a4aa5e449bf" protocol=ttrpc version=3 Sep 12 23:06:01.193127 systemd[1]: Started 
cri-containerd-8a3167bffb50da191073db5e5d787fc622af9563a5f4af7f29555507bf50bfb8.scope - libcontainer container 8a3167bffb50da191073db5e5d787fc622af9563a5f4af7f29555507bf50bfb8. Sep 12 23:06:01.202222 systemd[1]: Started cri-containerd-64612be51635ad5bebdcc5b1ec26cc809f299babf51f81271a9af73a118eb1d4.scope - libcontainer container 64612be51635ad5bebdcc5b1ec26cc809f299babf51f81271a9af73a118eb1d4. Sep 12 23:06:01.278011 kubelet[2397]: I0912 23:06:01.277957 2397 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:06:01.295506 kubelet[2397]: E0912 23:06:01.295461 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:01.345719 containerd[1580]: time="2025-09-12T23:06:01.345660399Z" level=info msg="StartContainer for \"8a3167bffb50da191073db5e5d787fc622af9563a5f4af7f29555507bf50bfb8\" returns successfully" Sep 12 23:06:01.399375 containerd[1580]: time="2025-09-12T23:06:01.399141159Z" level=info msg="StartContainer for \"64612be51635ad5bebdcc5b1ec26cc809f299babf51f81271a9af73a118eb1d4\" returns successfully" Sep 12 23:06:02.331969 kubelet[2397]: E0912 23:06:02.331911 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:02.334517 kubelet[2397]: E0912 23:06:02.334486 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:02.334732 kubelet[2397]: E0912 23:06:02.334669 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:02.890191 kubelet[2397]: E0912 23:06:02.890138 2397 nodelease.go:49] "Failed to get node 
when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 23:06:03.002639 kubelet[2397]: I0912 23:06:03.002580 2397 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 23:06:03.003142 kubelet[2397]: E0912 23:06:03.003065 2397 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 23:06:03.015882 kubelet[2397]: E0912 23:06:03.015798 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:06:03.115943 kubelet[2397]: E0912 23:06:03.115894 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:06:03.216969 kubelet[2397]: E0912 23:06:03.216788 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:06:03.317975 kubelet[2397]: E0912 23:06:03.317907 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:06:03.337031 kubelet[2397]: E0912 23:06:03.336982 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:03.337485 kubelet[2397]: E0912 23:06:03.337119 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:03.418460 kubelet[2397]: E0912 23:06:03.418374 2397 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:06:04.174942 kubelet[2397]: I0912 23:06:04.174905 2397 apiserver.go:52] "Watching apiserver" Sep 12 23:06:04.188540 kubelet[2397]: I0912 23:06:04.188442 2397 desired_state_of_world_populator.go:155] 
"Finished populating initial desired state of world" Sep 12 23:06:04.765519 kubelet[2397]: E0912 23:06:04.765476 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:05.050021 systemd[1]: Reload requested from client PID 2689 ('systemctl') (unit session-9.scope)... Sep 12 23:06:05.050040 systemd[1]: Reloading... Sep 12 23:06:05.142885 zram_generator::config[2735]: No configuration found. Sep 12 23:06:05.339864 kubelet[2397]: E0912 23:06:05.339705 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:05.411576 systemd[1]: Reloading finished in 361 ms. Sep 12 23:06:05.439268 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:06:05.454596 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:06:05.454966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:06:05.455037 systemd[1]: kubelet.service: Consumed 1.824s CPU time, 131.4M memory peak. Sep 12 23:06:05.457963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:06:05.714392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:06:05.726379 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:06:05.789616 kubelet[2777]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:06:05.789616 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 12 23:06:05.789616 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:06:05.789616 kubelet[2777]: I0912 23:06:05.789480 2777 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:06:05.797222 kubelet[2777]: I0912 23:06:05.797182 2777 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 23:06:05.797222 kubelet[2777]: I0912 23:06:05.797210 2777 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:06:05.797455 kubelet[2777]: I0912 23:06:05.797431 2777 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 23:06:05.798794 kubelet[2777]: I0912 23:06:05.798764 2777 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 23:06:05.801025 kubelet[2777]: I0912 23:06:05.800973 2777 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:06:05.805368 kubelet[2777]: I0912 23:06:05.805345 2777 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 23:06:05.813061 kubelet[2777]: I0912 23:06:05.812984 2777 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:06:05.813209 kubelet[2777]: I0912 23:06:05.813186 2777 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 23:06:05.813424 kubelet[2777]: I0912 23:06:05.813364 2777 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:06:05.813624 kubelet[2777]: I0912 23:06:05.813409 2777 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 12 23:06:05.813748 kubelet[2777]: I0912 23:06:05.813627 2777 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:06:05.813748 kubelet[2777]: I0912 23:06:05.813640 2777 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 23:06:05.813748 kubelet[2777]: I0912 23:06:05.813681 2777 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:06:05.813893 kubelet[2777]: I0912 23:06:05.813834 2777 kubelet.go:408] "Attempting to sync node with API server" Sep 12 23:06:05.813893 kubelet[2777]: I0912 23:06:05.813867 2777 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:06:05.813967 kubelet[2777]: I0912 23:06:05.813911 2777 kubelet.go:314] "Adding apiserver pod source" Sep 12 23:06:05.813967 kubelet[2777]: I0912 23:06:05.813927 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:06:05.816882 kubelet[2777]: I0912 23:06:05.815247 2777 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 23:06:05.816882 kubelet[2777]: I0912 23:06:05.815816 2777 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:06:05.816882 kubelet[2777]: I0912 23:06:05.816414 2777 server.go:1274] "Started kubelet" Sep 12 23:06:05.817285 kubelet[2777]: I0912 23:06:05.817216 2777 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:06:05.818283 kubelet[2777]: I0912 23:06:05.818263 2777 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:06:05.820498 kubelet[2777]: I0912 23:06:05.820430 2777 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:06:05.820768 kubelet[2777]: I0912 23:06:05.820747 2777 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:06:05.821283 
kubelet[2777]: I0912 23:06:05.821247 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:06:05.827057 kubelet[2777]: I0912 23:06:05.827007 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:06:05.830543 kubelet[2777]: I0912 23:06:05.829795 2777 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 23:06:05.830543 kubelet[2777]: E0912 23:06:05.830092 2777 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:06:05.831828 kubelet[2777]: I0912 23:06:05.831702 2777 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:06:05.831953 kubelet[2777]: I0912 23:06:05.831927 2777 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:06:05.834575 kubelet[2777]: I0912 23:06:05.834378 2777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:06:05.835741 kubelet[2777]: I0912 23:06:05.835708 2777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 23:06:05.835741 kubelet[2777]: I0912 23:06:05.835742 2777 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:06:05.835826 kubelet[2777]: I0912 23:06:05.835762 2777 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:06:05.835826 kubelet[2777]: E0912 23:06:05.835813 2777 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:06:05.836102 kubelet[2777]: E0912 23:06:05.836048 2777 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:06:05.837303 kubelet[2777]: I0912 23:06:05.837279 2777 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:06:05.837303 kubelet[2777]: I0912 23:06:05.837297 2777 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:06:05.837394 kubelet[2777]: I0912 23:06:05.837381 2777 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:06:05.871293 kubelet[2777]: I0912 23:06:05.871257 2777 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:06:05.871293 kubelet[2777]: I0912 23:06:05.871275 2777 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:06:05.871293 kubelet[2777]: I0912 23:06:05.871293 2777 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:06:05.871526 kubelet[2777]: I0912 23:06:05.871439 2777 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 23:06:05.871526 kubelet[2777]: I0912 23:06:05.871449 2777 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 23:06:05.871526 kubelet[2777]: I0912 23:06:05.871472 2777 policy_none.go:49] "None policy: Start" Sep 12 23:06:05.872367 kubelet[2777]: I0912 23:06:05.872335 2777 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 23:06:05.872415 kubelet[2777]: I0912 23:06:05.872386 2777 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:06:05.872687 kubelet[2777]: I0912 23:06:05.872655 2777 state_mem.go:75] "Updated machine memory state" Sep 12 23:06:05.878551 kubelet[2777]: I0912 23:06:05.878497 2777 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:06:05.878816 kubelet[2777]: I0912 23:06:05.878746 2777 eviction_manager.go:189] "Eviction 
manager: starting control loop" Sep 12 23:06:05.878816 kubelet[2777]: I0912 23:06:05.878759 2777 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:06:05.879107 kubelet[2777]: I0912 23:06:05.879076 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:06:05.943821 kubelet[2777]: E0912 23:06:05.943750 2777 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 23:06:05.987368 kubelet[2777]: I0912 23:06:05.987326 2777 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:06:05.994588 kubelet[2777]: I0912 23:06:05.994544 2777 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 23:06:05.994786 kubelet[2777]: I0912 23:06:05.994647 2777 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 23:06:06.032322 kubelet[2777]: I0912 23:06:06.032255 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cad4274a07678cc181b7a6e957dd4e31-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cad4274a07678cc181b7a6e957dd4e31\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:06:06.032322 kubelet[2777]: I0912 23:06:06.032304 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:06:06.032526 kubelet[2777]: I0912 23:06:06.032342 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 23:06:06.032526 kubelet[2777]: I0912 23:06:06.032421 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:06:06.032526 kubelet[2777]: I0912 23:06:06.032471 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:06:06.032526 kubelet[2777]: I0912 23:06:06.032493 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cad4274a07678cc181b7a6e957dd4e31-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cad4274a07678cc181b7a6e957dd4e31\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:06:06.032526 kubelet[2777]: I0912 23:06:06.032507 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cad4274a07678cc181b7a6e957dd4e31-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cad4274a07678cc181b7a6e957dd4e31\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:06:06.032693 kubelet[2777]: I0912 23:06:06.032520 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:06:06.032693 kubelet[2777]: I0912 23:06:06.032534 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:06:06.244490 kubelet[2777]: E0912 23:06:06.243926 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:06.244490 kubelet[2777]: E0912 23:06:06.243957 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:06.244490 kubelet[2777]: E0912 23:06:06.244052 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:06.815683 kubelet[2777]: I0912 23:06:06.815444 2777 apiserver.go:52] "Watching apiserver" Sep 12 23:06:06.832843 kubelet[2777]: I0912 23:06:06.832808 2777 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 23:06:06.859846 kubelet[2777]: E0912 23:06:06.859803 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:06.860544 kubelet[2777]: E0912 23:06:06.860507 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:06.861177 kubelet[2777]: E0912 23:06:06.861145 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:06.938078 kubelet[2777]: I0912 23:06:06.937573 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9375449599999999 podStartE2EDuration="1.93754496s" podCreationTimestamp="2025-09-12 23:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:06:06.937271985 +0000 UTC m=+1.205302448" watchObservedRunningTime="2025-09-12 23:06:06.93754496 +0000 UTC m=+1.205575414" Sep 12 23:06:06.950064 kubelet[2777]: I0912 23:06:06.949989 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.949962256 podStartE2EDuration="1.949962256s" podCreationTimestamp="2025-09-12 23:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:06:06.949614128 +0000 UTC m=+1.217644591" watchObservedRunningTime="2025-09-12 23:06:06.949962256 +0000 UTC m=+1.217992709" Sep 12 23:06:06.962145 kubelet[2777]: I0912 23:06:06.962063 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.962030092 podStartE2EDuration="2.962030092s" podCreationTimestamp="2025-09-12 23:06:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:06:06.962015664 +0000 UTC m=+1.230046147" watchObservedRunningTime="2025-09-12 23:06:06.962030092 +0000 UTC m=+1.230060545" 
Sep 12 23:06:07.875166 kubelet[2777]: E0912 23:06:07.875107 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:09.420774 kubelet[2777]: E0912 23:06:09.420507 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:10.961159 kubelet[2777]: I0912 23:06:10.961111 2777 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:06:10.961813 kubelet[2777]: I0912 23:06:10.961789 2777 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:06:10.961894 containerd[1580]: time="2025-09-12T23:06:10.961583700Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 23:06:11.814323 kubelet[2777]: E0912 23:06:11.814265 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:11.881400 kubelet[2777]: E0912 23:06:11.881345 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:11.918211 systemd[1]: Created slice kubepods-besteffort-podba2674a1_29a1_4c70_994d_d30d323cfc17.slice - libcontainer container kubepods-besteffort-podba2674a1_29a1_4c70_994d_d30d323cfc17.slice. 
Sep 12 23:06:11.981746 kubelet[2777]: I0912 23:06:11.979840 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba2674a1-29a1-4c70-994d-d30d323cfc17-lib-modules\") pod \"kube-proxy-tggvr\" (UID: \"ba2674a1-29a1-4c70-994d-d30d323cfc17\") " pod="kube-system/kube-proxy-tggvr" Sep 12 23:06:11.981746 kubelet[2777]: I0912 23:06:11.979927 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba2674a1-29a1-4c70-994d-d30d323cfc17-kube-proxy\") pod \"kube-proxy-tggvr\" (UID: \"ba2674a1-29a1-4c70-994d-d30d323cfc17\") " pod="kube-system/kube-proxy-tggvr" Sep 12 23:06:11.988765 kubelet[2777]: I0912 23:06:11.979955 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba2674a1-29a1-4c70-994d-d30d323cfc17-xtables-lock\") pod \"kube-proxy-tggvr\" (UID: \"ba2674a1-29a1-4c70-994d-d30d323cfc17\") " pod="kube-system/kube-proxy-tggvr" Sep 12 23:06:11.988765 kubelet[2777]: I0912 23:06:11.983251 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f44fh\" (UniqueName: \"kubernetes.io/projected/ba2674a1-29a1-4c70-994d-d30d323cfc17-kube-api-access-f44fh\") pod \"kube-proxy-tggvr\" (UID: \"ba2674a1-29a1-4c70-994d-d30d323cfc17\") " pod="kube-system/kube-proxy-tggvr" Sep 12 23:06:12.196654 systemd[1]: Created slice kubepods-besteffort-pod642f13dc_1f69_4ae4_a38e_519447ef1758.slice - libcontainer container kubepods-besteffort-pod642f13dc_1f69_4ae4_a38e_519447ef1758.slice. 
Sep 12 23:06:12.253966 kubelet[2777]: E0912 23:06:12.253887 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:12.259399 containerd[1580]: time="2025-09-12T23:06:12.259003782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tggvr,Uid:ba2674a1-29a1-4c70-994d-d30d323cfc17,Namespace:kube-system,Attempt:0,}" Sep 12 23:06:12.286988 kubelet[2777]: I0912 23:06:12.286909 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/642f13dc-1f69-4ae4-a38e-519447ef1758-var-lib-calico\") pod \"tigera-operator-58fc44c59b-jvqzv\" (UID: \"642f13dc-1f69-4ae4-a38e-519447ef1758\") " pod="tigera-operator/tigera-operator-58fc44c59b-jvqzv" Sep 12 23:06:12.286988 kubelet[2777]: I0912 23:06:12.286967 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pff5r\" (UniqueName: \"kubernetes.io/projected/642f13dc-1f69-4ae4-a38e-519447ef1758-kube-api-access-pff5r\") pod \"tigera-operator-58fc44c59b-jvqzv\" (UID: \"642f13dc-1f69-4ae4-a38e-519447ef1758\") " pod="tigera-operator/tigera-operator-58fc44c59b-jvqzv" Sep 12 23:06:12.363384 containerd[1580]: time="2025-09-12T23:06:12.363326505Z" level=info msg="connecting to shim e6fbda5edaa7cfd8d439c986ee5fd1e0024ed138fbf8c9444ff9bd1d01733015" address="unix:///run/containerd/s/8fecffff35802d1539a37daf37ba8b87b594b7e19340602dc9c2c4f8ae00f35c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:12.453964 systemd[1]: Started cri-containerd-e6fbda5edaa7cfd8d439c986ee5fd1e0024ed138fbf8c9444ff9bd1d01733015.scope - libcontainer container e6fbda5edaa7cfd8d439c986ee5fd1e0024ed138fbf8c9444ff9bd1d01733015. 
Sep 12 23:06:12.507526 containerd[1580]: time="2025-09-12T23:06:12.507470547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-jvqzv,Uid:642f13dc-1f69-4ae4-a38e-519447ef1758,Namespace:tigera-operator,Attempt:0,}" Sep 12 23:06:12.573027 containerd[1580]: time="2025-09-12T23:06:12.572947299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tggvr,Uid:ba2674a1-29a1-4c70-994d-d30d323cfc17,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6fbda5edaa7cfd8d439c986ee5fd1e0024ed138fbf8c9444ff9bd1d01733015\"" Sep 12 23:06:12.583525 kubelet[2777]: E0912 23:06:12.582097 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:12.607815 containerd[1580]: time="2025-09-12T23:06:12.601042453Z" level=info msg="CreateContainer within sandbox \"e6fbda5edaa7cfd8d439c986ee5fd1e0024ed138fbf8c9444ff9bd1d01733015\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:06:12.667067 containerd[1580]: time="2025-09-12T23:06:12.666948725Z" level=info msg="connecting to shim a07c68acb23ddd827ae7a601d83f7321a38984d83556373fe33c12749881f267" address="unix:///run/containerd/s/7c24bbb4ef26e0537752a4f517001f487a91422e60efb94511ddc1213dc9ce22" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:12.681031 containerd[1580]: time="2025-09-12T23:06:12.680971290Z" level=info msg="Container 4af9745e47c243f96ca18de9bc8ca59d20dcc52281fb80d32166c67d99294abc: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:12.716337 containerd[1580]: time="2025-09-12T23:06:12.716187829Z" level=info msg="CreateContainer within sandbox \"e6fbda5edaa7cfd8d439c986ee5fd1e0024ed138fbf8c9444ff9bd1d01733015\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4af9745e47c243f96ca18de9bc8ca59d20dcc52281fb80d32166c67d99294abc\"" Sep 12 23:06:12.724244 containerd[1580]: 
time="2025-09-12T23:06:12.724126644Z" level=info msg="StartContainer for \"4af9745e47c243f96ca18de9bc8ca59d20dcc52281fb80d32166c67d99294abc\"" Sep 12 23:06:12.741638 containerd[1580]: time="2025-09-12T23:06:12.741509242Z" level=info msg="connecting to shim 4af9745e47c243f96ca18de9bc8ca59d20dcc52281fb80d32166c67d99294abc" address="unix:///run/containerd/s/8fecffff35802d1539a37daf37ba8b87b594b7e19340602dc9c2c4f8ae00f35c" protocol=ttrpc version=3 Sep 12 23:06:12.794131 systemd[1]: Started cri-containerd-a07c68acb23ddd827ae7a601d83f7321a38984d83556373fe33c12749881f267.scope - libcontainer container a07c68acb23ddd827ae7a601d83f7321a38984d83556373fe33c12749881f267. Sep 12 23:06:12.801980 systemd[1]: Started cri-containerd-4af9745e47c243f96ca18de9bc8ca59d20dcc52281fb80d32166c67d99294abc.scope - libcontainer container 4af9745e47c243f96ca18de9bc8ca59d20dcc52281fb80d32166c67d99294abc. Sep 12 23:06:12.929584 containerd[1580]: time="2025-09-12T23:06:12.929245010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-jvqzv,Uid:642f13dc-1f69-4ae4-a38e-519447ef1758,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a07c68acb23ddd827ae7a601d83f7321a38984d83556373fe33c12749881f267\"" Sep 12 23:06:12.930751 containerd[1580]: time="2025-09-12T23:06:12.930480048Z" level=info msg="StartContainer for \"4af9745e47c243f96ca18de9bc8ca59d20dcc52281fb80d32166c67d99294abc\" returns successfully" Sep 12 23:06:12.935407 containerd[1580]: time="2025-09-12T23:06:12.935000901Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 23:06:13.913841 kubelet[2777]: E0912 23:06:13.912758 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:14.833282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749252419.mount: Deactivated successfully. 
Sep 12 23:06:14.918875 kubelet[2777]: E0912 23:06:14.918435 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:15.488287 containerd[1580]: time="2025-09-12T23:06:15.488213367Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:15.489527 containerd[1580]: time="2025-09-12T23:06:15.489488910Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 23:06:15.490950 containerd[1580]: time="2025-09-12T23:06:15.490914585Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:15.494027 containerd[1580]: time="2025-09-12T23:06:15.493992312Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:15.494650 containerd[1580]: time="2025-09-12T23:06:15.494599736Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.559550124s" Sep 12 23:06:15.494688 containerd[1580]: time="2025-09-12T23:06:15.494647987Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 23:06:15.496927 containerd[1580]: time="2025-09-12T23:06:15.496844573Z" level=info msg="CreateContainer within sandbox 
\"a07c68acb23ddd827ae7a601d83f7321a38984d83556373fe33c12749881f267\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 23:06:15.510107 containerd[1580]: time="2025-09-12T23:06:15.510037111Z" level=info msg="Container 2a2506197114b9efdf4614a7d6ff0776aa8174eb98afd8d80dbc8d930a0fd400: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:15.519706 containerd[1580]: time="2025-09-12T23:06:15.519642521Z" level=info msg="CreateContainer within sandbox \"a07c68acb23ddd827ae7a601d83f7321a38984d83556373fe33c12749881f267\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2a2506197114b9efdf4614a7d6ff0776aa8174eb98afd8d80dbc8d930a0fd400\"" Sep 12 23:06:15.520277 containerd[1580]: time="2025-09-12T23:06:15.520257099Z" level=info msg="StartContainer for \"2a2506197114b9efdf4614a7d6ff0776aa8174eb98afd8d80dbc8d930a0fd400\"" Sep 12 23:06:15.521396 containerd[1580]: time="2025-09-12T23:06:15.521365647Z" level=info msg="connecting to shim 2a2506197114b9efdf4614a7d6ff0776aa8174eb98afd8d80dbc8d930a0fd400" address="unix:///run/containerd/s/7c24bbb4ef26e0537752a4f517001f487a91422e60efb94511ddc1213dc9ce22" protocol=ttrpc version=3 Sep 12 23:06:15.585257 systemd[1]: Started cri-containerd-2a2506197114b9efdf4614a7d6ff0776aa8174eb98afd8d80dbc8d930a0fd400.scope - libcontainer container 2a2506197114b9efdf4614a7d6ff0776aa8174eb98afd8d80dbc8d930a0fd400. 
Sep 12 23:06:15.726787 containerd[1580]: time="2025-09-12T23:06:15.726729152Z" level=info msg="StartContainer for \"2a2506197114b9efdf4614a7d6ff0776aa8174eb98afd8d80dbc8d930a0fd400\" returns successfully" Sep 12 23:06:15.935369 kubelet[2777]: I0912 23:06:15.935161 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tggvr" podStartSLOduration=4.935125659 podStartE2EDuration="4.935125659s" podCreationTimestamp="2025-09-12 23:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:06:13.956944292 +0000 UTC m=+8.224974745" watchObservedRunningTime="2025-09-12 23:06:15.935125659 +0000 UTC m=+10.203156112" Sep 12 23:06:15.935369 kubelet[2777]: I0912 23:06:15.935343 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-jvqzv" podStartSLOduration=2.373650218 podStartE2EDuration="4.935335555s" podCreationTimestamp="2025-09-12 23:06:11 +0000 UTC" firstStartedPulling="2025-09-12 23:06:12.933757206 +0000 UTC m=+7.201787669" lastFinishedPulling="2025-09-12 23:06:15.495442553 +0000 UTC m=+9.763473006" observedRunningTime="2025-09-12 23:06:15.934548864 +0000 UTC m=+10.202579317" watchObservedRunningTime="2025-09-12 23:06:15.935335555 +0000 UTC m=+10.203366018" Sep 12 23:06:16.102082 kubelet[2777]: E0912 23:06:16.102036 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:16.926124 kubelet[2777]: E0912 23:06:16.926079 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:17.927624 kubelet[2777]: E0912 23:06:17.927569 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:19.426598 kubelet[2777]: E0912 23:06:19.426543 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:22.226812 sudo[1802]: pam_unix(sudo:session): session closed for user root Sep 12 23:06:22.231197 sshd[1801]: Connection closed by 10.0.0.1 port 50088 Sep 12 23:06:22.232111 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Sep 12 23:06:22.238198 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. Sep 12 23:06:22.238720 systemd[1]: sshd@8-10.0.0.126:22-10.0.0.1:50088.service: Deactivated successfully. Sep 12 23:06:22.243152 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 23:06:22.243647 systemd[1]: session-9.scope: Consumed 6.578s CPU time, 225.5M memory peak. Sep 12 23:06:22.247326 systemd-logind[1515]: Removed session 9. Sep 12 23:06:24.783001 systemd[1]: Created slice kubepods-besteffort-pod5d841a48_bc98_4b46_bb75_a6c08aae1d2f.slice - libcontainer container kubepods-besteffort-pod5d841a48_bc98_4b46_bb75_a6c08aae1d2f.slice. 
Sep 12 23:06:24.833578 kubelet[2777]: I0912 23:06:24.833464 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d841a48-bc98-4b46-bb75-a6c08aae1d2f-tigera-ca-bundle\") pod \"calico-typha-75f758cdbf-j4hjd\" (UID: \"5d841a48-bc98-4b46-bb75-a6c08aae1d2f\") " pod="calico-system/calico-typha-75f758cdbf-j4hjd" Sep 12 23:06:24.834195 kubelet[2777]: I0912 23:06:24.834172 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5d841a48-bc98-4b46-bb75-a6c08aae1d2f-typha-certs\") pod \"calico-typha-75f758cdbf-j4hjd\" (UID: \"5d841a48-bc98-4b46-bb75-a6c08aae1d2f\") " pod="calico-system/calico-typha-75f758cdbf-j4hjd" Sep 12 23:06:24.834357 kubelet[2777]: I0912 23:06:24.834341 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6mv4\" (UniqueName: \"kubernetes.io/projected/5d841a48-bc98-4b46-bb75-a6c08aae1d2f-kube-api-access-g6mv4\") pod \"calico-typha-75f758cdbf-j4hjd\" (UID: \"5d841a48-bc98-4b46-bb75-a6c08aae1d2f\") " pod="calico-system/calico-typha-75f758cdbf-j4hjd" Sep 12 23:06:25.246177 systemd[1]: Created slice kubepods-besteffort-pod0f294c62_5d81_4058_8de3_171454204acc.slice - libcontainer container kubepods-besteffort-pod0f294c62_5d81_4058_8de3_171454204acc.slice. 
Sep 12 23:06:25.338250 kubelet[2777]: I0912 23:06:25.338157 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrnh\" (UniqueName: \"kubernetes.io/projected/0f294c62-5d81-4058-8de3-171454204acc-kube-api-access-2lrnh\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338250 kubelet[2777]: I0912 23:06:25.338229 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0f294c62-5d81-4058-8de3-171454204acc-policysync\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338250 kubelet[2777]: I0912 23:06:25.338251 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0f294c62-5d81-4058-8de3-171454204acc-var-run-calico\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338250 kubelet[2777]: I0912 23:06:25.338269 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0f294c62-5d81-4058-8de3-171454204acc-var-lib-calico\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338585 kubelet[2777]: I0912 23:06:25.338287 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0f294c62-5d81-4058-8de3-171454204acc-cni-bin-dir\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338585 kubelet[2777]: I0912 23:06:25.338303 2777 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0f294c62-5d81-4058-8de3-171454204acc-flexvol-driver-host\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338585 kubelet[2777]: I0912 23:06:25.338319 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f294c62-5d81-4058-8de3-171454204acc-lib-modules\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338585 kubelet[2777]: I0912 23:06:25.338336 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f294c62-5d81-4058-8de3-171454204acc-xtables-lock\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338585 kubelet[2777]: I0912 23:06:25.338351 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0f294c62-5d81-4058-8de3-171454204acc-cni-log-dir\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338754 kubelet[2777]: I0912 23:06:25.338373 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0f294c62-5d81-4058-8de3-171454204acc-node-certs\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338754 kubelet[2777]: I0912 23:06:25.338402 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0f294c62-5d81-4058-8de3-171454204acc-cni-net-dir\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.338754 kubelet[2777]: I0912 23:06:25.338422 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f294c62-5d81-4058-8de3-171454204acc-tigera-ca-bundle\") pod \"calico-node-hzlf2\" (UID: \"0f294c62-5d81-4058-8de3-171454204acc\") " pod="calico-system/calico-node-hzlf2" Sep 12 23:06:25.401064 kubelet[2777]: E0912 23:06:25.400561 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:25.402658 containerd[1580]: time="2025-09-12T23:06:25.401516557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75f758cdbf-j4hjd,Uid:5d841a48-bc98-4b46-bb75-a6c08aae1d2f,Namespace:calico-system,Attempt:0,}" Sep 12 23:06:25.448878 kubelet[2777]: E0912 23:06:25.448785 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.448878 kubelet[2777]: W0912 23:06:25.448820 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.449147 kubelet[2777]: E0912 23:06:25.449095 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.527503 kubelet[2777]: E0912 23:06:25.527320 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rn62" podUID="366b8825-9aaa-42c1-b70d-ae14ae3ca227" Sep 12 23:06:25.541826 kubelet[2777]: E0912 23:06:25.541775 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.541826 kubelet[2777]: W0912 23:06:25.541827 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.542071 kubelet[2777]: E0912 23:06:25.541888 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.544978 kubelet[2777]: E0912 23:06:25.544919 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.544978 kubelet[2777]: W0912 23:06:25.544951 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.544978 kubelet[2777]: E0912 23:06:25.544977 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.551990 containerd[1580]: time="2025-09-12T23:06:25.551716742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzlf2,Uid:0f294c62-5d81-4058-8de3-171454204acc,Namespace:calico-system,Attempt:0,}" Sep 12 23:06:25.614322 kubelet[2777]: E0912 23:06:25.614267 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.614322 kubelet[2777]: W0912 23:06:25.614304 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.614322 kubelet[2777]: E0912 23:06:25.614344 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.614682 kubelet[2777]: E0912 23:06:25.614662 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.614682 kubelet[2777]: W0912 23:06:25.614676 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.614682 kubelet[2777]: E0912 23:06:25.614687 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.614930 kubelet[2777]: E0912 23:06:25.614911 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.614930 kubelet[2777]: W0912 23:06:25.614925 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.614999 kubelet[2777]: E0912 23:06:25.614936 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.615240 kubelet[2777]: E0912 23:06:25.615131 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.615240 kubelet[2777]: W0912 23:06:25.615145 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.615240 kubelet[2777]: E0912 23:06:25.615155 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.615389 kubelet[2777]: E0912 23:06:25.615359 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.615389 kubelet[2777]: W0912 23:06:25.615383 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.615456 kubelet[2777]: E0912 23:06:25.615393 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.615598 kubelet[2777]: E0912 23:06:25.615580 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.615598 kubelet[2777]: W0912 23:06:25.615593 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.615678 kubelet[2777]: E0912 23:06:25.615604 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.615799 kubelet[2777]: E0912 23:06:25.615781 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.615799 kubelet[2777]: W0912 23:06:25.615794 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.615885 kubelet[2777]: E0912 23:06:25.615805 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.616040 kubelet[2777]: E0912 23:06:25.616017 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.616040 kubelet[2777]: W0912 23:06:25.616033 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.616134 kubelet[2777]: E0912 23:06:25.616046 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.616279 kubelet[2777]: E0912 23:06:25.616260 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.616279 kubelet[2777]: W0912 23:06:25.616273 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.616346 kubelet[2777]: E0912 23:06:25.616284 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.616491 kubelet[2777]: E0912 23:06:25.616472 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.616491 kubelet[2777]: W0912 23:06:25.616485 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.616569 kubelet[2777]: E0912 23:06:25.616495 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.623600 kubelet[2777]: E0912 23:06:25.621572 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.623600 kubelet[2777]: W0912 23:06:25.621622 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.623600 kubelet[2777]: E0912 23:06:25.621657 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.623600 kubelet[2777]: E0912 23:06:25.622076 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.623600 kubelet[2777]: W0912 23:06:25.622086 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.623600 kubelet[2777]: E0912 23:06:25.622098 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.623600 kubelet[2777]: E0912 23:06:25.622313 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.623600 kubelet[2777]: W0912 23:06:25.622322 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.623600 kubelet[2777]: E0912 23:06:25.622332 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.623600 kubelet[2777]: E0912 23:06:25.622528 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.624131 kubelet[2777]: W0912 23:06:25.622536 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.624131 kubelet[2777]: E0912 23:06:25.622546 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.626579 kubelet[2777]: E0912 23:06:25.626503 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.626579 kubelet[2777]: W0912 23:06:25.626545 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.626579 kubelet[2777]: E0912 23:06:25.626578 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.626579 kubelet[2777]: E0912 23:06:25.626949 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.626579 kubelet[2777]: W0912 23:06:25.626960 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.626579 kubelet[2777]: E0912 23:06:25.626973 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.627601 kubelet[2777]: E0912 23:06:25.627470 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.627601 kubelet[2777]: W0912 23:06:25.627481 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.627601 kubelet[2777]: E0912 23:06:25.627493 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.627699 kubelet[2777]: E0912 23:06:25.627675 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.627699 kubelet[2777]: W0912 23:06:25.627683 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.627699 kubelet[2777]: E0912 23:06:25.627693 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.627939 kubelet[2777]: E0912 23:06:25.627875 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.627939 kubelet[2777]: W0912 23:06:25.627884 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.627939 kubelet[2777]: E0912 23:06:25.627894 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.628802 kubelet[2777]: E0912 23:06:25.628090 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.628802 kubelet[2777]: W0912 23:06:25.628100 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.628802 kubelet[2777]: E0912 23:06:25.628112 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.652945 kubelet[2777]: E0912 23:06:25.652890 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.652945 kubelet[2777]: W0912 23:06:25.652926 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.653147 kubelet[2777]: E0912 23:06:25.652966 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.653147 kubelet[2777]: I0912 23:06:25.653022 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p658l\" (UniqueName: \"kubernetes.io/projected/366b8825-9aaa-42c1-b70d-ae14ae3ca227-kube-api-access-p658l\") pod \"csi-node-driver-2rn62\" (UID: \"366b8825-9aaa-42c1-b70d-ae14ae3ca227\") " pod="calico-system/csi-node-driver-2rn62" Sep 12 23:06:25.655598 kubelet[2777]: E0912 23:06:25.655531 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.655598 kubelet[2777]: W0912 23:06:25.655562 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.655598 kubelet[2777]: E0912 23:06:25.655607 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.655905 kubelet[2777]: E0912 23:06:25.655842 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.655905 kubelet[2777]: W0912 23:06:25.655892 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.655971 kubelet[2777]: E0912 23:06:25.655939 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.656331 kubelet[2777]: E0912 23:06:25.656255 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.656331 kubelet[2777]: W0912 23:06:25.656274 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.656331 kubelet[2777]: E0912 23:06:25.656288 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.656472 kubelet[2777]: I0912 23:06:25.656338 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/366b8825-9aaa-42c1-b70d-ae14ae3ca227-registration-dir\") pod \"csi-node-driver-2rn62\" (UID: \"366b8825-9aaa-42c1-b70d-ae14ae3ca227\") " pod="calico-system/csi-node-driver-2rn62" Sep 12 23:06:25.656666 kubelet[2777]: E0912 23:06:25.656625 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.656666 kubelet[2777]: W0912 23:06:25.656659 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.656785 kubelet[2777]: E0912 23:06:25.656691 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.656785 kubelet[2777]: I0912 23:06:25.656751 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/366b8825-9aaa-42c1-b70d-ae14ae3ca227-kubelet-dir\") pod \"csi-node-driver-2rn62\" (UID: \"366b8825-9aaa-42c1-b70d-ae14ae3ca227\") " pod="calico-system/csi-node-driver-2rn62" Sep 12 23:06:25.658218 kubelet[2777]: E0912 23:06:25.658158 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.658218 kubelet[2777]: W0912 23:06:25.658209 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.658308 kubelet[2777]: E0912 23:06:25.658263 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.658340 kubelet[2777]: I0912 23:06:25.658312 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/366b8825-9aaa-42c1-b70d-ae14ae3ca227-socket-dir\") pod \"csi-node-driver-2rn62\" (UID: \"366b8825-9aaa-42c1-b70d-ae14ae3ca227\") " pod="calico-system/csi-node-driver-2rn62" Sep 12 23:06:25.660982 kubelet[2777]: E0912 23:06:25.660609 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.660982 kubelet[2777]: W0912 23:06:25.660687 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.660982 kubelet[2777]: E0912 23:06:25.660769 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.660982 kubelet[2777]: I0912 23:06:25.660842 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/366b8825-9aaa-42c1-b70d-ae14ae3ca227-varrun\") pod \"csi-node-driver-2rn62\" (UID: \"366b8825-9aaa-42c1-b70d-ae14ae3ca227\") " pod="calico-system/csi-node-driver-2rn62" Sep 12 23:06:25.662101 kubelet[2777]: E0912 23:06:25.661247 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.662220 kubelet[2777]: W0912 23:06:25.662107 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.662220 kubelet[2777]: E0912 23:06:25.662179 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.663558 kubelet[2777]: E0912 23:06:25.663499 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.663558 kubelet[2777]: W0912 23:06:25.663515 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.663974 kubelet[2777]: E0912 23:06:25.663901 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.665648 kubelet[2777]: E0912 23:06:25.665600 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.665648 kubelet[2777]: W0912 23:06:25.665616 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.665818 kubelet[2777]: E0912 23:06:25.665744 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.671194 kubelet[2777]: E0912 23:06:25.670704 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.671194 kubelet[2777]: W0912 23:06:25.670736 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.671194 kubelet[2777]: E0912 23:06:25.671040 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.671989 kubelet[2777]: E0912 23:06:25.671636 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.671989 kubelet[2777]: W0912 23:06:25.671653 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.671989 kubelet[2777]: E0912 23:06:25.671667 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.674133 kubelet[2777]: E0912 23:06:25.673969 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.674133 kubelet[2777]: W0912 23:06:25.673987 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.674133 kubelet[2777]: E0912 23:06:25.673999 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.674895 kubelet[2777]: E0912 23:06:25.674530 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.674895 kubelet[2777]: W0912 23:06:25.674693 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.675207 kubelet[2777]: E0912 23:06:25.675069 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.678917 kubelet[2777]: E0912 23:06:25.678337 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.679010 kubelet[2777]: W0912 23:06:25.678918 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.679010 kubelet[2777]: E0912 23:06:25.678940 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.763625 kubelet[2777]: E0912 23:06:25.763560 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.763625 kubelet[2777]: W0912 23:06:25.763598 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.763625 kubelet[2777]: E0912 23:06:25.763626 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.763950 kubelet[2777]: E0912 23:06:25.763846 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.763950 kubelet[2777]: W0912 23:06:25.763904 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.763950 kubelet[2777]: E0912 23:06:25.763914 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.764143 kubelet[2777]: E0912 23:06:25.764120 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.764143 kubelet[2777]: W0912 23:06:25.764133 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.764143 kubelet[2777]: E0912 23:06:25.764142 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.764612 kubelet[2777]: E0912 23:06:25.764495 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.764612 kubelet[2777]: W0912 23:06:25.764534 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.764612 kubelet[2777]: E0912 23:06:25.764580 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.765076 kubelet[2777]: E0912 23:06:25.765040 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.765076 kubelet[2777]: W0912 23:06:25.765058 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.765227 kubelet[2777]: E0912 23:06:25.765153 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.765768 kubelet[2777]: E0912 23:06:25.765734 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.765768 kubelet[2777]: W0912 23:06:25.765749 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.765914 kubelet[2777]: E0912 23:06:25.765889 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.766615 kubelet[2777]: E0912 23:06:25.766447 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.766615 kubelet[2777]: W0912 23:06:25.766467 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.766615 kubelet[2777]: E0912 23:06:25.766678 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.768179 kubelet[2777]: E0912 23:06:25.767618 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.768179 kubelet[2777]: W0912 23:06:25.767640 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.768179 kubelet[2777]: E0912 23:06:25.767713 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.768179 kubelet[2777]: E0912 23:06:25.767890 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.768179 kubelet[2777]: W0912 23:06:25.767902 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.768179 kubelet[2777]: E0912 23:06:25.767936 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.768179 kubelet[2777]: E0912 23:06:25.768085 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.768179 kubelet[2777]: W0912 23:06:25.768094 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.768179 kubelet[2777]: E0912 23:06:25.768168 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:25.869667 kubelet[2777]: E0912 23:06:25.866600 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.869667 kubelet[2777]: W0912 23:06:25.868201 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.869667 kubelet[2777]: E0912 23:06:25.869626 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:25.973139 kubelet[2777]: E0912 23:06:25.973034 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:25.973139 kubelet[2777]: W0912 23:06:25.973064 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:25.973139 kubelet[2777]: E0912 23:06:25.973089 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:06:26.007100 kubelet[2777]: E0912 23:06:26.006718 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:06:26.007100 kubelet[2777]: W0912 23:06:26.006780 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:06:26.007100 kubelet[2777]: E0912 23:06:26.006818 2777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:06:26.120228 containerd[1580]: time="2025-09-12T23:06:26.117975973Z" level=info msg="connecting to shim 9d38509fcbe004419e8f73078bb4f851d46059b8dfa38f183aeddb4dcd6bffec" address="unix:///run/containerd/s/bee692022464fbc0ea60e22f259ba606b3252bdd94485050e2f9d453577633b0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:26.128486 containerd[1580]: time="2025-09-12T23:06:26.128039676Z" level=info msg="connecting to shim b61a5e16832d361e1cb01045fdd042bc10d038ee03b5c77f68e49499f90f8ff3" address="unix:///run/containerd/s/642964f69529a19217763ed4853d9621e23255ae301874c36ec5d56fdf8e25c8" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:26.173094 systemd[1]: Started cri-containerd-9d38509fcbe004419e8f73078bb4f851d46059b8dfa38f183aeddb4dcd6bffec.scope - libcontainer container 9d38509fcbe004419e8f73078bb4f851d46059b8dfa38f183aeddb4dcd6bffec. Sep 12 23:06:26.193842 systemd[1]: Started cri-containerd-b61a5e16832d361e1cb01045fdd042bc10d038ee03b5c77f68e49499f90f8ff3.scope - libcontainer container b61a5e16832d361e1cb01045fdd042bc10d038ee03b5c77f68e49499f90f8ff3. 
Sep 12 23:06:26.262822 containerd[1580]: time="2025-09-12T23:06:26.262752594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hzlf2,Uid:0f294c62-5d81-4058-8de3-171454204acc,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d38509fcbe004419e8f73078bb4f851d46059b8dfa38f183aeddb4dcd6bffec\"" Sep 12 23:06:26.265674 containerd[1580]: time="2025-09-12T23:06:26.265115575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 23:06:26.315184 containerd[1580]: time="2025-09-12T23:06:26.315120641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75f758cdbf-j4hjd,Uid:5d841a48-bc98-4b46-bb75-a6c08aae1d2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b61a5e16832d361e1cb01045fdd042bc10d038ee03b5c77f68e49499f90f8ff3\"" Sep 12 23:06:26.316110 kubelet[2777]: E0912 23:06:26.316063 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:26.837505 kubelet[2777]: E0912 23:06:26.836677 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rn62" podUID="366b8825-9aaa-42c1-b70d-ae14ae3ca227" Sep 12 23:06:27.827620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353087448.mount: Deactivated successfully. 
Sep 12 23:06:28.525295 containerd[1580]: time="2025-09-12T23:06:28.525221378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:28.526797 containerd[1580]: time="2025-09-12T23:06:28.526736485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 12 23:06:28.529990 containerd[1580]: time="2025-09-12T23:06:28.529961244Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:28.532914 containerd[1580]: time="2025-09-12T23:06:28.532869388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:28.533503 containerd[1580]: time="2025-09-12T23:06:28.533468904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.268308014s" Sep 12 23:06:28.533586 containerd[1580]: time="2025-09-12T23:06:28.533535459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 23:06:28.534508 containerd[1580]: time="2025-09-12T23:06:28.534481096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 23:06:28.536037 containerd[1580]: time="2025-09-12T23:06:28.536004458Z" level=info msg="CreateContainer within 
sandbox \"9d38509fcbe004419e8f73078bb4f851d46059b8dfa38f183aeddb4dcd6bffec\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 23:06:28.552547 containerd[1580]: time="2025-09-12T23:06:28.552488830Z" level=info msg="Container 3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:28.566126 containerd[1580]: time="2025-09-12T23:06:28.566075256Z" level=info msg="CreateContainer within sandbox \"9d38509fcbe004419e8f73078bb4f851d46059b8dfa38f183aeddb4dcd6bffec\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6\"" Sep 12 23:06:28.566476 containerd[1580]: time="2025-09-12T23:06:28.566451182Z" level=info msg="StartContainer for \"3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6\"" Sep 12 23:06:28.567988 containerd[1580]: time="2025-09-12T23:06:28.567950751Z" level=info msg="connecting to shim 3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6" address="unix:///run/containerd/s/bee692022464fbc0ea60e22f259ba606b3252bdd94485050e2f9d453577633b0" protocol=ttrpc version=3 Sep 12 23:06:28.590037 systemd[1]: Started cri-containerd-3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6.scope - libcontainer container 3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6. Sep 12 23:06:28.636878 containerd[1580]: time="2025-09-12T23:06:28.636501917Z" level=info msg="StartContainer for \"3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6\" returns successfully" Sep 12 23:06:28.646994 systemd[1]: cri-containerd-3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6.scope: Deactivated successfully. 
Sep 12 23:06:28.648929 containerd[1580]: time="2025-09-12T23:06:28.648897406Z" level=info msg="received exit event container_id:\"3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6\" id:\"3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6\" pid:3384 exited_at:{seconds:1757718388 nanos:648437020}" Sep 12 23:06:28.649016 containerd[1580]: time="2025-09-12T23:06:28.648922573Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6\" id:\"3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6\" pid:3384 exited_at:{seconds:1757718388 nanos:648437020}" Sep 12 23:06:28.799639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c0a1812d6b6c824f8fbfd70bc5da96c53ec799ba97970be073fc52e0d3bbdb6-rootfs.mount: Deactivated successfully. Sep 12 23:06:28.837035 kubelet[2777]: E0912 23:06:28.836960 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rn62" podUID="366b8825-9aaa-42c1-b70d-ae14ae3ca227" Sep 12 23:06:30.836313 kubelet[2777]: E0912 23:06:30.836206 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rn62" podUID="366b8825-9aaa-42c1-b70d-ae14ae3ca227" Sep 12 23:06:32.836353 kubelet[2777]: E0912 23:06:32.836298 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rn62" 
podUID="366b8825-9aaa-42c1-b70d-ae14ae3ca227" Sep 12 23:06:32.864659 containerd[1580]: time="2025-09-12T23:06:32.864587687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:32.865334 containerd[1580]: time="2025-09-12T23:06:32.865279697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 12 23:06:32.866359 containerd[1580]: time="2025-09-12T23:06:32.866329468Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:32.868375 containerd[1580]: time="2025-09-12T23:06:32.868343160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:32.868946 containerd[1580]: time="2025-09-12T23:06:32.868910626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.334398571s" Sep 12 23:06:32.868946 containerd[1580]: time="2025-09-12T23:06:32.868941284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 23:06:32.869879 containerd[1580]: time="2025-09-12T23:06:32.869841044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 23:06:32.879702 containerd[1580]: time="2025-09-12T23:06:32.879645066Z" level=info msg="CreateContainer within sandbox 
\"b61a5e16832d361e1cb01045fdd042bc10d038ee03b5c77f68e49499f90f8ff3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 23:06:32.889038 containerd[1580]: time="2025-09-12T23:06:32.889005886Z" level=info msg="Container dbc3f7c35d6b4748216bb519d2b24fc56d1225b6d639c84a28baf33c8839dfd1: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:32.899510 containerd[1580]: time="2025-09-12T23:06:32.899449629Z" level=info msg="CreateContainer within sandbox \"b61a5e16832d361e1cb01045fdd042bc10d038ee03b5c77f68e49499f90f8ff3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dbc3f7c35d6b4748216bb519d2b24fc56d1225b6d639c84a28baf33c8839dfd1\"" Sep 12 23:06:32.900022 containerd[1580]: time="2025-09-12T23:06:32.899996667Z" level=info msg="StartContainer for \"dbc3f7c35d6b4748216bb519d2b24fc56d1225b6d639c84a28baf33c8839dfd1\"" Sep 12 23:06:32.901108 containerd[1580]: time="2025-09-12T23:06:32.901083619Z" level=info msg="connecting to shim dbc3f7c35d6b4748216bb519d2b24fc56d1225b6d639c84a28baf33c8839dfd1" address="unix:///run/containerd/s/642964f69529a19217763ed4853d9621e23255ae301874c36ec5d56fdf8e25c8" protocol=ttrpc version=3 Sep 12 23:06:32.923008 systemd[1]: Started cri-containerd-dbc3f7c35d6b4748216bb519d2b24fc56d1225b6d639c84a28baf33c8839dfd1.scope - libcontainer container dbc3f7c35d6b4748216bb519d2b24fc56d1225b6d639c84a28baf33c8839dfd1. 
Sep 12 23:06:32.983640 containerd[1580]: time="2025-09-12T23:06:32.983594808Z" level=info msg="StartContainer for \"dbc3f7c35d6b4748216bb519d2b24fc56d1225b6d639c84a28baf33c8839dfd1\" returns successfully" Sep 12 23:06:33.977010 kubelet[2777]: E0912 23:06:33.976952 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:33.990353 kubelet[2777]: I0912 23:06:33.990239 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75f758cdbf-j4hjd" podStartSLOduration=3.437648508 podStartE2EDuration="9.990217467s" podCreationTimestamp="2025-09-12 23:06:24 +0000 UTC" firstStartedPulling="2025-09-12 23:06:26.317176925 +0000 UTC m=+20.585207378" lastFinishedPulling="2025-09-12 23:06:32.869745884 +0000 UTC m=+27.137776337" observedRunningTime="2025-09-12 23:06:33.990065643 +0000 UTC m=+28.258096106" watchObservedRunningTime="2025-09-12 23:06:33.990217467 +0000 UTC m=+28.258247920" Sep 12 23:06:34.836587 kubelet[2777]: E0912 23:06:34.836511 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rn62" podUID="366b8825-9aaa-42c1-b70d-ae14ae3ca227" Sep 12 23:06:34.980072 kubelet[2777]: E0912 23:06:34.979982 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:35.982166 kubelet[2777]: E0912 23:06:35.982124 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:37.138787 kubelet[2777]: E0912 23:06:37.138701 2777 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rn62" podUID="366b8825-9aaa-42c1-b70d-ae14ae3ca227" Sep 12 23:06:38.305781 containerd[1580]: time="2025-09-12T23:06:38.305164443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:38.307303 containerd[1580]: time="2025-09-12T23:06:38.307260638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 23:06:38.309153 containerd[1580]: time="2025-09-12T23:06:38.309123575Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:38.311724 containerd[1580]: time="2025-09-12T23:06:38.311668773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:38.312368 containerd[1580]: time="2025-09-12T23:06:38.312337628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.442449376s" Sep 12 23:06:38.312422 containerd[1580]: time="2025-09-12T23:06:38.312372554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 23:06:38.314800 containerd[1580]: 
time="2025-09-12T23:06:38.314752772Z" level=info msg="CreateContainer within sandbox \"9d38509fcbe004419e8f73078bb4f851d46059b8dfa38f183aeddb4dcd6bffec\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 23:06:38.326347 containerd[1580]: time="2025-09-12T23:06:38.326278188Z" level=info msg="Container 1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:38.339429 containerd[1580]: time="2025-09-12T23:06:38.339355947Z" level=info msg="CreateContainer within sandbox \"9d38509fcbe004419e8f73078bb4f851d46059b8dfa38f183aeddb4dcd6bffec\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e\"" Sep 12 23:06:38.340358 containerd[1580]: time="2025-09-12T23:06:38.340329844Z" level=info msg="StartContainer for \"1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e\"" Sep 12 23:06:38.341913 containerd[1580]: time="2025-09-12T23:06:38.341869605Z" level=info msg="connecting to shim 1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e" address="unix:///run/containerd/s/bee692022464fbc0ea60e22f259ba606b3252bdd94485050e2f9d453577633b0" protocol=ttrpc version=3 Sep 12 23:06:38.375228 systemd[1]: Started cri-containerd-1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e.scope - libcontainer container 1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e. 
Sep 12 23:06:38.423122 containerd[1580]: time="2025-09-12T23:06:38.423067519Z" level=info msg="StartContainer for \"1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e\" returns successfully" Sep 12 23:06:38.836365 kubelet[2777]: E0912 23:06:38.836291 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rn62" podUID="366b8825-9aaa-42c1-b70d-ae14ae3ca227" Sep 12 23:06:39.575953 systemd[1]: cri-containerd-1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e.scope: Deactivated successfully. Sep 12 23:06:39.576265 systemd[1]: cri-containerd-1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e.scope: Consumed 599ms CPU time, 181M memory peak, 3.6M read from disk, 171.3M written to disk. Sep 12 23:06:39.577232 containerd[1580]: time="2025-09-12T23:06:39.576894508Z" level=info msg="received exit event container_id:\"1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e\" id:\"1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e\" pid:3487 exited_at:{seconds:1757718399 nanos:576639630}" Sep 12 23:06:39.577990 containerd[1580]: time="2025-09-12T23:06:39.577964477Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e\" id:\"1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e\" pid:3487 exited_at:{seconds:1757718399 nanos:576639630}" Sep 12 23:06:39.599268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e910548d11405c47b302f64aabfcf8149ded81c5505065cf1e36c0ccb28c38e-rootfs.mount: Deactivated successfully. 
Sep 12 23:06:39.656896 kubelet[2777]: I0912 23:06:39.656846 2777 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 23:06:39.976176 systemd[1]: Created slice kubepods-burstable-podf8d1365e_c840_4eb2_b77c_dd3f9f92d921.slice - libcontainer container kubepods-burstable-podf8d1365e_c840_4eb2_b77c_dd3f9f92d921.slice. Sep 12 23:06:39.983839 systemd[1]: Created slice kubepods-besteffort-pod12d78903_318f_42a6_b332_db30891886a1.slice - libcontainer container kubepods-besteffort-pod12d78903_318f_42a6_b332_db30891886a1.slice. Sep 12 23:06:39.990422 systemd[1]: Created slice kubepods-besteffort-pod42a72dd3_c9f6_4f6d_9f67_e39fa8f8eadd.slice - libcontainer container kubepods-besteffort-pod42a72dd3_c9f6_4f6d_9f67_e39fa8f8eadd.slice. Sep 12 23:06:39.996533 systemd[1]: Created slice kubepods-besteffort-pod58402002_2af2_4dc9_a380_750dad2c8d3d.slice - libcontainer container kubepods-besteffort-pod58402002_2af2_4dc9_a380_750dad2c8d3d.slice. Sep 12 23:06:40.001989 systemd[1]: Created slice kubepods-burstable-podec887171_1133_4f74_8a61_d4663af982e5.slice - libcontainer container kubepods-burstable-podec887171_1133_4f74_8a61_d4663af982e5.slice. Sep 12 23:06:40.007877 systemd[1]: Created slice kubepods-besteffort-pod53c129ce_c8b6_4421_891f_027eaa23117b.slice - libcontainer container kubepods-besteffort-pod53c129ce_c8b6_4421_891f_027eaa23117b.slice. Sep 12 23:06:40.011813 systemd[1]: Created slice kubepods-besteffort-pod3280d612_04da_45ba_8f1d_ca150c949632.slice - libcontainer container kubepods-besteffort-pod3280d612_04da_45ba_8f1d_ca150c949632.slice. 
Sep 12 23:06:40.081891 kubelet[2777]: I0912 23:06:40.081821 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7smh\" (UniqueName: \"kubernetes.io/projected/ec887171-1133-4f74-8a61-d4663af982e5-kube-api-access-t7smh\") pod \"coredns-7c65d6cfc9-wrdr9\" (UID: \"ec887171-1133-4f74-8a61-d4663af982e5\") " pod="kube-system/coredns-7c65d6cfc9-wrdr9" Sep 12 23:06:40.082441 kubelet[2777]: I0912 23:06:40.082390 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58402002-2af2-4dc9-a380-750dad2c8d3d-whisker-ca-bundle\") pod \"whisker-56d9769795-5mff2\" (UID: \"58402002-2af2-4dc9-a380-750dad2c8d3d\") " pod="calico-system/whisker-56d9769795-5mff2" Sep 12 23:06:40.082441 kubelet[2777]: I0912 23:06:40.082416 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgklw\" (UniqueName: \"kubernetes.io/projected/f8d1365e-c840-4eb2-b77c-dd3f9f92d921-kube-api-access-jgklw\") pod \"coredns-7c65d6cfc9-7r256\" (UID: \"f8d1365e-c840-4eb2-b77c-dd3f9f92d921\") " pod="kube-system/coredns-7c65d6cfc9-7r256" Sep 12 23:06:40.082441 kubelet[2777]: I0912 23:06:40.082450 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd-config\") pod \"goldmane-7988f88666-2n8tf\" (UID: \"42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd\") " pod="calico-system/goldmane-7988f88666-2n8tf" Sep 12 23:06:40.082684 kubelet[2777]: I0912 23:06:40.082468 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5584\" (UniqueName: \"kubernetes.io/projected/12d78903-318f-42a6-b332-db30891886a1-kube-api-access-v5584\") pod \"calico-kube-controllers-5c5cf69d5-gn9vn\" (UID: 
\"12d78903-318f-42a6-b332-db30891886a1\") " pod="calico-system/calico-kube-controllers-5c5cf69d5-gn9vn" Sep 12 23:06:40.082684 kubelet[2777]: I0912 23:06:40.082491 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txnr\" (UniqueName: \"kubernetes.io/projected/58402002-2af2-4dc9-a380-750dad2c8d3d-kube-api-access-8txnr\") pod \"whisker-56d9769795-5mff2\" (UID: \"58402002-2af2-4dc9-a380-750dad2c8d3d\") " pod="calico-system/whisker-56d9769795-5mff2" Sep 12 23:06:40.082684 kubelet[2777]: I0912 23:06:40.082510 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec887171-1133-4f74-8a61-d4663af982e5-config-volume\") pod \"coredns-7c65d6cfc9-wrdr9\" (UID: \"ec887171-1133-4f74-8a61-d4663af982e5\") " pod="kube-system/coredns-7c65d6cfc9-wrdr9" Sep 12 23:06:40.082684 kubelet[2777]: I0912 23:06:40.082527 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58402002-2af2-4dc9-a380-750dad2c8d3d-whisker-backend-key-pair\") pod \"whisker-56d9769795-5mff2\" (UID: \"58402002-2af2-4dc9-a380-750dad2c8d3d\") " pod="calico-system/whisker-56d9769795-5mff2" Sep 12 23:06:40.082684 kubelet[2777]: I0912 23:06:40.082602 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12d78903-318f-42a6-b332-db30891886a1-tigera-ca-bundle\") pod \"calico-kube-controllers-5c5cf69d5-gn9vn\" (UID: \"12d78903-318f-42a6-b332-db30891886a1\") " pod="calico-system/calico-kube-controllers-5c5cf69d5-gn9vn" Sep 12 23:06:40.082810 kubelet[2777]: I0912 23:06:40.082677 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd-goldmane-ca-bundle\") pod \"goldmane-7988f88666-2n8tf\" (UID: \"42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd\") " pod="calico-system/goldmane-7988f88666-2n8tf" Sep 12 23:06:40.082810 kubelet[2777]: I0912 23:06:40.082698 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd-goldmane-key-pair\") pod \"goldmane-7988f88666-2n8tf\" (UID: \"42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd\") " pod="calico-system/goldmane-7988f88666-2n8tf" Sep 12 23:06:40.082810 kubelet[2777]: I0912 23:06:40.082719 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3280d612-04da-45ba-8f1d-ca150c949632-calico-apiserver-certs\") pod \"calico-apiserver-7dc657dddb-t9mck\" (UID: \"3280d612-04da-45ba-8f1d-ca150c949632\") " pod="calico-apiserver/calico-apiserver-7dc657dddb-t9mck" Sep 12 23:06:40.082810 kubelet[2777]: I0912 23:06:40.082758 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8d1365e-c840-4eb2-b77c-dd3f9f92d921-config-volume\") pod \"coredns-7c65d6cfc9-7r256\" (UID: \"f8d1365e-c840-4eb2-b77c-dd3f9f92d921\") " pod="kube-system/coredns-7c65d6cfc9-7r256" Sep 12 23:06:40.082810 kubelet[2777]: I0912 23:06:40.082790 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/53c129ce-c8b6-4421-891f-027eaa23117b-calico-apiserver-certs\") pod \"calico-apiserver-7dc657dddb-f7sh6\" (UID: \"53c129ce-c8b6-4421-891f-027eaa23117b\") " pod="calico-apiserver/calico-apiserver-7dc657dddb-f7sh6" Sep 12 23:06:40.082973 kubelet[2777]: I0912 23:06:40.082807 2777 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgpfg\" (UniqueName: \"kubernetes.io/projected/3280d612-04da-45ba-8f1d-ca150c949632-kube-api-access-jgpfg\") pod \"calico-apiserver-7dc657dddb-t9mck\" (UID: \"3280d612-04da-45ba-8f1d-ca150c949632\") " pod="calico-apiserver/calico-apiserver-7dc657dddb-t9mck" Sep 12 23:06:40.082973 kubelet[2777]: I0912 23:06:40.082840 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw4sh\" (UniqueName: \"kubernetes.io/projected/42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd-kube-api-access-tw4sh\") pod \"goldmane-7988f88666-2n8tf\" (UID: \"42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd\") " pod="calico-system/goldmane-7988f88666-2n8tf" Sep 12 23:06:40.082973 kubelet[2777]: I0912 23:06:40.082903 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd276\" (UniqueName: \"kubernetes.io/projected/53c129ce-c8b6-4421-891f-027eaa23117b-kube-api-access-dd276\") pod \"calico-apiserver-7dc657dddb-f7sh6\" (UID: \"53c129ce-c8b6-4421-891f-027eaa23117b\") " pod="calico-apiserver/calico-apiserver-7dc657dddb-f7sh6" Sep 12 23:06:40.279504 kubelet[2777]: E0912 23:06:40.279309 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:40.280151 containerd[1580]: time="2025-09-12T23:06:40.280066121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7r256,Uid:f8d1365e-c840-4eb2-b77c-dd3f9f92d921,Namespace:kube-system,Attempt:0,}" Sep 12 23:06:40.287844 containerd[1580]: time="2025-09-12T23:06:40.287791993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c5cf69d5-gn9vn,Uid:12d78903-318f-42a6-b332-db30891886a1,Namespace:calico-system,Attempt:0,}" Sep 12 23:06:40.296704 containerd[1580]: 
time="2025-09-12T23:06:40.296651632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2n8tf,Uid:42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd,Namespace:calico-system,Attempt:0,}" Sep 12 23:06:40.301543 containerd[1580]: time="2025-09-12T23:06:40.301509891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d9769795-5mff2,Uid:58402002-2af2-4dc9-a380-750dad2c8d3d,Namespace:calico-system,Attempt:0,}" Sep 12 23:06:40.305016 kubelet[2777]: E0912 23:06:40.304927 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:40.306226 containerd[1580]: time="2025-09-12T23:06:40.306130092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wrdr9,Uid:ec887171-1133-4f74-8a61-d4663af982e5,Namespace:kube-system,Attempt:0,}" Sep 12 23:06:40.311606 containerd[1580]: time="2025-09-12T23:06:40.311567308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc657dddb-f7sh6,Uid:53c129ce-c8b6-4421-891f-027eaa23117b,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:06:40.315772 containerd[1580]: time="2025-09-12T23:06:40.315598343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc657dddb-t9mck,Uid:3280d612-04da-45ba-8f1d-ca150c949632,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:06:40.425217 containerd[1580]: time="2025-09-12T23:06:40.425151601Z" level=error msg="Failed to destroy network for sandbox \"66ade9d3ce8c08c0b223f765f9c02a4db69250f20f3662db9505714b31c663a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.426836 containerd[1580]: time="2025-09-12T23:06:40.426783143Z" level=error msg="Failed to destroy network for sandbox 
\"de8dab9b45315050a719d9b2ba8a0c8a1046397a139800d98795c1fdcdfa860e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.444263 containerd[1580]: time="2025-09-12T23:06:40.443896645Z" level=error msg="Failed to destroy network for sandbox \"383fe982309e3d3bc3dceba5b11a444e7b67d3cb8c51754acb4cb6bd6839a250\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.444507 containerd[1580]: time="2025-09-12T23:06:40.443973189Z" level=error msg="Failed to destroy network for sandbox \"bd104a1aeaac573e90abab312daf43aa6c06e335f06a1a23c14ce53eb5e481c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.445108 containerd[1580]: time="2025-09-12T23:06:40.444921278Z" level=error msg="Failed to destroy network for sandbox \"cfe338fb76c3b0ef7429c3d5fcd569eb48338b60ced2025b9e26de2c833dc610\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.446233 containerd[1580]: time="2025-09-12T23:06:40.446202262Z" level=error msg="Failed to destroy network for sandbox \"5f79ba8b5d60cc27a39c3323addbfc144e0b017fc41870e91f0bd283d1b65bbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.450202 containerd[1580]: time="2025-09-12T23:06:40.450149281Z" level=error msg="Failed to destroy network for sandbox 
\"39c7f9e8fe4f3eb43d214cf6f6d131ad066028ee498068b6f76f11f5894bb34c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.485810 containerd[1580]: time="2025-09-12T23:06:40.485718393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc657dddb-f7sh6,Uid:53c129ce-c8b6-4421-891f-027eaa23117b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ade9d3ce8c08c0b223f765f9c02a4db69250f20f3662db9505714b31c663a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.494235 kubelet[2777]: E0912 23:06:40.494175 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ade9d3ce8c08c0b223f765f9c02a4db69250f20f3662db9505714b31c663a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.494323 kubelet[2777]: E0912 23:06:40.494277 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ade9d3ce8c08c0b223f765f9c02a4db69250f20f3662db9505714b31c663a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc657dddb-f7sh6" Sep 12 23:06:40.494323 kubelet[2777]: E0912 23:06:40.494304 2777 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"66ade9d3ce8c08c0b223f765f9c02a4db69250f20f3662db9505714b31c663a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc657dddb-f7sh6" Sep 12 23:06:40.494415 kubelet[2777]: E0912 23:06:40.494352 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc657dddb-f7sh6_calico-apiserver(53c129ce-c8b6-4421-891f-027eaa23117b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc657dddb-f7sh6_calico-apiserver(53c129ce-c8b6-4421-891f-027eaa23117b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66ade9d3ce8c08c0b223f765f9c02a4db69250f20f3662db9505714b31c663a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc657dddb-f7sh6" podUID="53c129ce-c8b6-4421-891f-027eaa23117b" Sep 12 23:06:40.500105 containerd[1580]: time="2025-09-12T23:06:40.500003806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc657dddb-t9mck,Uid:3280d612-04da-45ba-8f1d-ca150c949632,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de8dab9b45315050a719d9b2ba8a0c8a1046397a139800d98795c1fdcdfa860e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.500364 kubelet[2777]: E0912 23:06:40.500317 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de8dab9b45315050a719d9b2ba8a0c8a1046397a139800d98795c1fdcdfa860e\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.500421 kubelet[2777]: E0912 23:06:40.500399 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de8dab9b45315050a719d9b2ba8a0c8a1046397a139800d98795c1fdcdfa860e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc657dddb-t9mck" Sep 12 23:06:40.500473 kubelet[2777]: E0912 23:06:40.500427 2777 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de8dab9b45315050a719d9b2ba8a0c8a1046397a139800d98795c1fdcdfa860e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc657dddb-t9mck" Sep 12 23:06:40.500529 kubelet[2777]: E0912 23:06:40.500488 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc657dddb-t9mck_calico-apiserver(3280d612-04da-45ba-8f1d-ca150c949632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc657dddb-t9mck_calico-apiserver(3280d612-04da-45ba-8f1d-ca150c949632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de8dab9b45315050a719d9b2ba8a0c8a1046397a139800d98795c1fdcdfa860e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc657dddb-t9mck" 
podUID="3280d612-04da-45ba-8f1d-ca150c949632" Sep 12 23:06:40.519587 containerd[1580]: time="2025-09-12T23:06:40.519506181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7r256,Uid:f8d1365e-c840-4eb2-b77c-dd3f9f92d921,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"383fe982309e3d3bc3dceba5b11a444e7b67d3cb8c51754acb4cb6bd6839a250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.519916 kubelet[2777]: E0912 23:06:40.519839 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383fe982309e3d3bc3dceba5b11a444e7b67d3cb8c51754acb4cb6bd6839a250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.520017 kubelet[2777]: E0912 23:06:40.519944 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383fe982309e3d3bc3dceba5b11a444e7b67d3cb8c51754acb4cb6bd6839a250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7r256" Sep 12 23:06:40.520017 kubelet[2777]: E0912 23:06:40.519966 2777 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383fe982309e3d3bc3dceba5b11a444e7b67d3cb8c51754acb4cb6bd6839a250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-7r256" Sep 12 23:06:40.520142 kubelet[2777]: E0912 23:06:40.520025 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-7r256_kube-system(f8d1365e-c840-4eb2-b77c-dd3f9f92d921)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-7r256_kube-system(f8d1365e-c840-4eb2-b77c-dd3f9f92d921)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"383fe982309e3d3bc3dceba5b11a444e7b67d3cb8c51754acb4cb6bd6839a250\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-7r256" podUID="f8d1365e-c840-4eb2-b77c-dd3f9f92d921" Sep 12 23:06:40.536550 containerd[1580]: time="2025-09-12T23:06:40.536318206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d9769795-5mff2,Uid:58402002-2af2-4dc9-a380-750dad2c8d3d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd104a1aeaac573e90abab312daf43aa6c06e335f06a1a23c14ce53eb5e481c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.537044 kubelet[2777]: E0912 23:06:40.536969 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd104a1aeaac573e90abab312daf43aa6c06e335f06a1a23c14ce53eb5e481c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.537568 kubelet[2777]: E0912 23:06:40.537533 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd104a1aeaac573e90abab312daf43aa6c06e335f06a1a23c14ce53eb5e481c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56d9769795-5mff2" Sep 12 23:06:40.537665 kubelet[2777]: E0912 23:06:40.537578 2777 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd104a1aeaac573e90abab312daf43aa6c06e335f06a1a23c14ce53eb5e481c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56d9769795-5mff2" Sep 12 23:06:40.537665 kubelet[2777]: E0912 23:06:40.537628 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56d9769795-5mff2_calico-system(58402002-2af2-4dc9-a380-750dad2c8d3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56d9769795-5mff2_calico-system(58402002-2af2-4dc9-a380-750dad2c8d3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd104a1aeaac573e90abab312daf43aa6c06e335f06a1a23c14ce53eb5e481c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56d9769795-5mff2" podUID="58402002-2af2-4dc9-a380-750dad2c8d3d" Sep 12 23:06:40.546609 containerd[1580]: time="2025-09-12T23:06:40.546513914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2n8tf,Uid:42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"cfe338fb76c3b0ef7429c3d5fcd569eb48338b60ced2025b9e26de2c833dc610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.546831 kubelet[2777]: E0912 23:06:40.546792 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe338fb76c3b0ef7429c3d5fcd569eb48338b60ced2025b9e26de2c833dc610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.546914 kubelet[2777]: E0912 23:06:40.546835 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe338fb76c3b0ef7429c3d5fcd569eb48338b60ced2025b9e26de2c833dc610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-2n8tf" Sep 12 23:06:40.546914 kubelet[2777]: E0912 23:06:40.546879 2777 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe338fb76c3b0ef7429c3d5fcd569eb48338b60ced2025b9e26de2c833dc610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-2n8tf" Sep 12 23:06:40.546969 kubelet[2777]: E0912 23:06:40.546927 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-2n8tf_calico-system(42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-7988f88666-2n8tf_calico-system(42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfe338fb76c3b0ef7429c3d5fcd569eb48338b60ced2025b9e26de2c833dc610\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-2n8tf" podUID="42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd" Sep 12 23:06:40.548636 containerd[1580]: time="2025-09-12T23:06:40.548577687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c5cf69d5-gn9vn,Uid:12d78903-318f-42a6-b332-db30891886a1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f79ba8b5d60cc27a39c3323addbfc144e0b017fc41870e91f0bd283d1b65bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.548897 kubelet[2777]: E0912 23:06:40.548867 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f79ba8b5d60cc27a39c3323addbfc144e0b017fc41870e91f0bd283d1b65bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.548938 kubelet[2777]: E0912 23:06:40.548899 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f79ba8b5d60cc27a39c3323addbfc144e0b017fc41870e91f0bd283d1b65bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5c5cf69d5-gn9vn" Sep 12 23:06:40.548938 kubelet[2777]: E0912 23:06:40.548917 2777 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f79ba8b5d60cc27a39c3323addbfc144e0b017fc41870e91f0bd283d1b65bbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c5cf69d5-gn9vn" Sep 12 23:06:40.549000 kubelet[2777]: E0912 23:06:40.548942 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c5cf69d5-gn9vn_calico-system(12d78903-318f-42a6-b332-db30891886a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c5cf69d5-gn9vn_calico-system(12d78903-318f-42a6-b332-db30891886a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f79ba8b5d60cc27a39c3323addbfc144e0b017fc41870e91f0bd283d1b65bbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c5cf69d5-gn9vn" podUID="12d78903-318f-42a6-b332-db30891886a1" Sep 12 23:06:40.549712 containerd[1580]: time="2025-09-12T23:06:40.549659498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wrdr9,Uid:ec887171-1133-4f74-8a61-d4663af982e5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c7f9e8fe4f3eb43d214cf6f6d131ad066028ee498068b6f76f11f5894bb34c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 
12 23:06:40.550027 kubelet[2777]: E0912 23:06:40.549964 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c7f9e8fe4f3eb43d214cf6f6d131ad066028ee498068b6f76f11f5894bb34c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.550105 kubelet[2777]: E0912 23:06:40.550052 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c7f9e8fe4f3eb43d214cf6f6d131ad066028ee498068b6f76f11f5894bb34c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wrdr9" Sep 12 23:06:40.550131 kubelet[2777]: E0912 23:06:40.550075 2777 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c7f9e8fe4f3eb43d214cf6f6d131ad066028ee498068b6f76f11f5894bb34c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wrdr9" Sep 12 23:06:40.550195 kubelet[2777]: E0912 23:06:40.550167 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wrdr9_kube-system(ec887171-1133-4f74-8a61-d4663af982e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wrdr9_kube-system(ec887171-1133-4f74-8a61-d4663af982e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39c7f9e8fe4f3eb43d214cf6f6d131ad066028ee498068b6f76f11f5894bb34c\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wrdr9" podUID="ec887171-1133-4f74-8a61-d4663af982e5" Sep 12 23:06:40.842909 systemd[1]: Created slice kubepods-besteffort-pod366b8825_9aaa_42c1_b70d_ae14ae3ca227.slice - libcontainer container kubepods-besteffort-pod366b8825_9aaa_42c1_b70d_ae14ae3ca227.slice. Sep 12 23:06:40.845844 containerd[1580]: time="2025-09-12T23:06:40.845807526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rn62,Uid:366b8825-9aaa-42c1-b70d-ae14ae3ca227,Namespace:calico-system,Attempt:0,}" Sep 12 23:06:40.893553 containerd[1580]: time="2025-09-12T23:06:40.893498130Z" level=error msg="Failed to destroy network for sandbox \"3c2026dfbbf3e2ef36a6d1d7609a1b2b6d8a7253116e3ee9f6791abea38b530a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.894951 containerd[1580]: time="2025-09-12T23:06:40.894910140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rn62,Uid:366b8825-9aaa-42c1-b70d-ae14ae3ca227,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c2026dfbbf3e2ef36a6d1d7609a1b2b6d8a7253116e3ee9f6791abea38b530a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.895222 kubelet[2777]: E0912 23:06:40.895179 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c2026dfbbf3e2ef36a6d1d7609a1b2b6d8a7253116e3ee9f6791abea38b530a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:06:40.895296 kubelet[2777]: E0912 23:06:40.895237 2777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c2026dfbbf3e2ef36a6d1d7609a1b2b6d8a7253116e3ee9f6791abea38b530a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rn62" Sep 12 23:06:40.895296 kubelet[2777]: E0912 23:06:40.895257 2777 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c2026dfbbf3e2ef36a6d1d7609a1b2b6d8a7253116e3ee9f6791abea38b530a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rn62" Sep 12 23:06:40.895344 kubelet[2777]: E0912 23:06:40.895298 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2rn62_calico-system(366b8825-9aaa-42c1-b70d-ae14ae3ca227)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2rn62_calico-system(366b8825-9aaa-42c1-b70d-ae14ae3ca227)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c2026dfbbf3e2ef36a6d1d7609a1b2b6d8a7253116e3ee9f6791abea38b530a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rn62" podUID="366b8825-9aaa-42c1-b70d-ae14ae3ca227" Sep 12 23:06:40.896436 systemd[1]: run-netns-cni\x2df6222835\x2d0220\x2d6dbe\x2da1fa\x2d3e79cb3ed99e.mount: Deactivated successfully. 
Sep 12 23:06:41.002045 containerd[1580]: time="2025-09-12T23:06:41.001030194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 23:06:48.537416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502601910.mount: Deactivated successfully. Sep 12 23:06:49.766694 containerd[1580]: time="2025-09-12T23:06:49.766629529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:49.788788 containerd[1580]: time="2025-09-12T23:06:49.788716021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 23:06:49.807944 containerd[1580]: time="2025-09-12T23:06:49.807827541Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:49.830034 containerd[1580]: time="2025-09-12T23:06:49.829969107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:49.830549 containerd[1580]: time="2025-09-12T23:06:49.830501539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.829421352s" Sep 12 23:06:49.830620 containerd[1580]: time="2025-09-12T23:06:49.830552045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 23:06:49.843449 containerd[1580]: time="2025-09-12T23:06:49.843389195Z" level=info 
msg="CreateContainer within sandbox \"9d38509fcbe004419e8f73078bb4f851d46059b8dfa38f183aeddb4dcd6bffec\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 23:06:49.916482 containerd[1580]: time="2025-09-12T23:06:49.916423594Z" level=info msg="Container e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:49.929317 containerd[1580]: time="2025-09-12T23:06:49.929260915Z" level=info msg="CreateContainer within sandbox \"9d38509fcbe004419e8f73078bb4f851d46059b8dfa38f183aeddb4dcd6bffec\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707\"" Sep 12 23:06:49.929949 containerd[1580]: time="2025-09-12T23:06:49.929901110Z" level=info msg="StartContainer for \"e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707\"" Sep 12 23:06:49.931914 containerd[1580]: time="2025-09-12T23:06:49.931872204Z" level=info msg="connecting to shim e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707" address="unix:///run/containerd/s/bee692022464fbc0ea60e22f259ba606b3252bdd94485050e2f9d453577633b0" protocol=ttrpc version=3 Sep 12 23:06:49.966066 systemd[1]: Started cri-containerd-e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707.scope - libcontainer container e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707. Sep 12 23:06:50.060467 containerd[1580]: time="2025-09-12T23:06:50.060357412Z" level=info msg="StartContainer for \"e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707\" returns successfully" Sep 12 23:06:50.109479 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 23:06:50.110198 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 23:06:50.757512 kubelet[2777]: I0912 23:06:50.757431 2777 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8txnr\" (UniqueName: \"kubernetes.io/projected/58402002-2af2-4dc9-a380-750dad2c8d3d-kube-api-access-8txnr\") pod \"58402002-2af2-4dc9-a380-750dad2c8d3d\" (UID: \"58402002-2af2-4dc9-a380-750dad2c8d3d\") " Sep 12 23:06:50.757512 kubelet[2777]: I0912 23:06:50.757482 2777 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58402002-2af2-4dc9-a380-750dad2c8d3d-whisker-backend-key-pair\") pod \"58402002-2af2-4dc9-a380-750dad2c8d3d\" (UID: \"58402002-2af2-4dc9-a380-750dad2c8d3d\") " Sep 12 23:06:50.757512 kubelet[2777]: I0912 23:06:50.757518 2777 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58402002-2af2-4dc9-a380-750dad2c8d3d-whisker-ca-bundle\") pod \"58402002-2af2-4dc9-a380-750dad2c8d3d\" (UID: \"58402002-2af2-4dc9-a380-750dad2c8d3d\") " Sep 12 23:06:50.758208 kubelet[2777]: I0912 23:06:50.758115 2777 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58402002-2af2-4dc9-a380-750dad2c8d3d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "58402002-2af2-4dc9-a380-750dad2c8d3d" (UID: "58402002-2af2-4dc9-a380-750dad2c8d3d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 23:06:50.761274 kubelet[2777]: I0912 23:06:50.761227 2777 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58402002-2af2-4dc9-a380-750dad2c8d3d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "58402002-2af2-4dc9-a380-750dad2c8d3d" (UID: "58402002-2af2-4dc9-a380-750dad2c8d3d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 23:06:50.761413 kubelet[2777]: I0912 23:06:50.761266 2777 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58402002-2af2-4dc9-a380-750dad2c8d3d-kube-api-access-8txnr" (OuterVolumeSpecName: "kube-api-access-8txnr") pod "58402002-2af2-4dc9-a380-750dad2c8d3d" (UID: "58402002-2af2-4dc9-a380-750dad2c8d3d"). InnerVolumeSpecName "kube-api-access-8txnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 23:06:50.837015 systemd[1]: var-lib-kubelet-pods-58402002\x2d2af2\x2d4dc9\x2da380\x2d750dad2c8d3d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8txnr.mount: Deactivated successfully. Sep 12 23:06:50.837129 systemd[1]: var-lib-kubelet-pods-58402002\x2d2af2\x2d4dc9\x2da380\x2d750dad2c8d3d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 23:06:50.839607 containerd[1580]: time="2025-09-12T23:06:50.839554464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c5cf69d5-gn9vn,Uid:12d78903-318f-42a6-b332-db30891886a1,Namespace:calico-system,Attempt:0,}" Sep 12 23:06:50.858056 kubelet[2777]: I0912 23:06:50.858005 2777 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58402002-2af2-4dc9-a380-750dad2c8d3d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 23:06:50.858056 kubelet[2777]: I0912 23:06:50.858048 2777 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8txnr\" (UniqueName: \"kubernetes.io/projected/58402002-2af2-4dc9-a380-750dad2c8d3d-kube-api-access-8txnr\") on node \"localhost\" DevicePath \"\"" Sep 12 23:06:50.858056 kubelet[2777]: I0912 23:06:50.858061 2777 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58402002-2af2-4dc9-a380-750dad2c8d3d-whisker-backend-key-pair\") on node 
\"localhost\" DevicePath \"\"" Sep 12 23:06:51.026277 systemd-networkd[1498]: calid01abcb6dd8: Link UP Sep 12 23:06:51.026566 systemd-networkd[1498]: calid01abcb6dd8: Gained carrier Sep 12 23:06:51.046755 containerd[1580]: 2025-09-12 23:06:50.869 [INFO][3870] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:06:51.046755 containerd[1580]: 2025-09-12 23:06:50.903 [INFO][3870] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0 calico-kube-controllers-5c5cf69d5- calico-system 12d78903-318f-42a6-b332-db30891886a1 882 0 2025-09-12 23:06:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c5cf69d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5c5cf69d5-gn9vn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid01abcb6dd8 [] [] }} ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Namespace="calico-system" Pod="calico-kube-controllers-5c5cf69d5-gn9vn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-" Sep 12 23:06:51.046755 containerd[1580]: 2025-09-12 23:06:50.904 [INFO][3870] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Namespace="calico-system" Pod="calico-kube-controllers-5c5cf69d5-gn9vn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0" Sep 12 23:06:51.046755 containerd[1580]: 2025-09-12 23:06:50.973 [INFO][3883] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" 
HandleID="k8s-pod-network.141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Workload="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0" Sep 12 23:06:51.047169 containerd[1580]: 2025-09-12 23:06:50.974 [INFO][3883] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" HandleID="k8s-pod-network.141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Workload="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004356c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5c5cf69d5-gn9vn", "timestamp":"2025-09-12 23:06:50.973737318 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:06:51.047169 containerd[1580]: 2025-09-12 23:06:50.974 [INFO][3883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:06:51.047169 containerd[1580]: 2025-09-12 23:06:50.974 [INFO][3883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:06:51.047169 containerd[1580]: 2025-09-12 23:06:50.974 [INFO][3883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:06:51.047169 containerd[1580]: 2025-09-12 23:06:50.981 [INFO][3883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" host="localhost" Sep 12 23:06:51.047169 containerd[1580]: 2025-09-12 23:06:50.987 [INFO][3883] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:06:51.047169 containerd[1580]: 2025-09-12 23:06:50.991 [INFO][3883] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:06:51.047169 containerd[1580]: 2025-09-12 23:06:50.994 [INFO][3883] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:51.047169 containerd[1580]: 2025-09-12 23:06:50.996 [INFO][3883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:51.047169 containerd[1580]: 2025-09-12 23:06:50.996 [INFO][3883] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" host="localhost" Sep 12 23:06:51.047471 containerd[1580]: 2025-09-12 23:06:50.997 [INFO][3883] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f Sep 12 23:06:51.047471 containerd[1580]: 2025-09-12 23:06:51.001 [INFO][3883] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" host="localhost" Sep 12 23:06:51.047471 containerd[1580]: 2025-09-12 23:06:51.010 [INFO][3883] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" host="localhost" Sep 12 23:06:51.047471 containerd[1580]: 2025-09-12 23:06:51.010 [INFO][3883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" host="localhost" Sep 12 23:06:51.047471 containerd[1580]: 2025-09-12 23:06:51.010 [INFO][3883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:06:51.047471 containerd[1580]: 2025-09-12 23:06:51.011 [INFO][3883] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" HandleID="k8s-pod-network.141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Workload="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0" Sep 12 23:06:51.047645 containerd[1580]: 2025-09-12 23:06:51.014 [INFO][3870] cni-plugin/k8s.go 418: Populated endpoint ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Namespace="calico-system" Pod="calico-kube-controllers-5c5cf69d5-gn9vn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0", GenerateName:"calico-kube-controllers-5c5cf69d5-", Namespace:"calico-system", SelfLink:"", UID:"12d78903-318f-42a6-b332-db30891886a1", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c5cf69d5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5c5cf69d5-gn9vn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid01abcb6dd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:51.047729 containerd[1580]: 2025-09-12 23:06:51.015 [INFO][3870] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Namespace="calico-system" Pod="calico-kube-controllers-5c5cf69d5-gn9vn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0" Sep 12 23:06:51.047729 containerd[1580]: 2025-09-12 23:06:51.015 [INFO][3870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid01abcb6dd8 ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Namespace="calico-system" Pod="calico-kube-controllers-5c5cf69d5-gn9vn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0" Sep 12 23:06:51.047729 containerd[1580]: 2025-09-12 23:06:51.026 [INFO][3870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Namespace="calico-system" Pod="calico-kube-controllers-5c5cf69d5-gn9vn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0" Sep 12 23:06:51.047832 containerd[1580]: 2025-09-12 
23:06:51.028 [INFO][3870] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Namespace="calico-system" Pod="calico-kube-controllers-5c5cf69d5-gn9vn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0", GenerateName:"calico-kube-controllers-5c5cf69d5-", Namespace:"calico-system", SelfLink:"", UID:"12d78903-318f-42a6-b332-db30891886a1", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c5cf69d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f", Pod:"calico-kube-controllers-5c5cf69d5-gn9vn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid01abcb6dd8", MAC:"c2:47:cb:ad:bc:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:51.047931 containerd[1580]: 2025-09-12 
23:06:51.042 [INFO][3870] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" Namespace="calico-system" Pod="calico-kube-controllers-5c5cf69d5-gn9vn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c5cf69d5--gn9vn-eth0" Sep 12 23:06:51.083787 systemd[1]: Removed slice kubepods-besteffort-pod58402002_2af2_4dc9_a380_750dad2c8d3d.slice - libcontainer container kubepods-besteffort-pod58402002_2af2_4dc9_a380_750dad2c8d3d.slice. Sep 12 23:06:51.282677 kubelet[2777]: I0912 23:06:51.282317 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hzlf2" podStartSLOduration=2.715592312 podStartE2EDuration="26.282289033s" podCreationTimestamp="2025-09-12 23:06:25 +0000 UTC" firstStartedPulling="2025-09-12 23:06:26.264610556 +0000 UTC m=+20.532641009" lastFinishedPulling="2025-09-12 23:06:49.831307277 +0000 UTC m=+44.099337730" observedRunningTime="2025-09-12 23:06:51.281830583 +0000 UTC m=+45.549861026" watchObservedRunningTime="2025-09-12 23:06:51.282289033 +0000 UTC m=+45.550319486" Sep 12 23:06:51.835044 systemd[1]: Created slice kubepods-besteffort-poda1b13b31_d82b_4b42_9e09_0d5ccf0e63d2.slice - libcontainer container kubepods-besteffort-poda1b13b31_d82b_4b42_9e09_0d5ccf0e63d2.slice. 
Sep 12 23:06:51.847267 kubelet[2777]: I0912 23:06:51.846971 2777 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58402002-2af2-4dc9-a380-750dad2c8d3d" path="/var/lib/kubelet/pods/58402002-2af2-4dc9-a380-750dad2c8d3d/volumes" Sep 12 23:06:51.866149 kubelet[2777]: I0912 23:06:51.865892 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2-whisker-backend-key-pair\") pod \"whisker-6cb96cb9c4-s2xwd\" (UID: \"a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2\") " pod="calico-system/whisker-6cb96cb9c4-s2xwd" Sep 12 23:06:51.866321 kubelet[2777]: I0912 23:06:51.866263 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2-whisker-ca-bundle\") pod \"whisker-6cb96cb9c4-s2xwd\" (UID: \"a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2\") " pod="calico-system/whisker-6cb96cb9c4-s2xwd" Sep 12 23:06:51.867037 kubelet[2777]: I0912 23:06:51.866999 2777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz28x\" (UniqueName: \"kubernetes.io/projected/a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2-kube-api-access-lz28x\") pod \"whisker-6cb96cb9c4-s2xwd\" (UID: \"a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2\") " pod="calico-system/whisker-6cb96cb9c4-s2xwd" Sep 12 23:06:52.076185 containerd[1580]: time="2025-09-12T23:06:52.076127907Z" level=info msg="connecting to shim 141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f" address="unix:///run/containerd/s/fd2b248cc9d0340d67305ca2339590b547772f3fe11a6497952c36d6b849d725" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:52.141035 systemd[1]: Started cri-containerd-141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f.scope - libcontainer container 
141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f. Sep 12 23:06:52.143048 containerd[1580]: time="2025-09-12T23:06:52.142992020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cb96cb9c4-s2xwd,Uid:a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2,Namespace:calico-system,Attempt:0,}" Sep 12 23:06:52.157463 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:06:52.229939 containerd[1580]: time="2025-09-12T23:06:52.229758003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c5cf69d5-gn9vn,Uid:12d78903-318f-42a6-b332-db30891886a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f\"" Sep 12 23:06:52.237167 containerd[1580]: time="2025-09-12T23:06:52.237062911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 23:06:52.357149 systemd-networkd[1498]: calid01abcb6dd8: Gained IPv6LL Sep 12 23:06:52.410522 systemd-networkd[1498]: cali1f54c412d79: Link UP Sep 12 23:06:52.413053 systemd-networkd[1498]: cali1f54c412d79: Gained carrier Sep 12 23:06:52.433600 containerd[1580]: 2025-09-12 23:06:52.221 [INFO][4069] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0 whisker-6cb96cb9c4- calico-system a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2 958 0 2025-09-12 23:06:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cb96cb9c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6cb96cb9c4-s2xwd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1f54c412d79 [] [] }} ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Namespace="calico-system" Pod="whisker-6cb96cb9c4-s2xwd" 
WorkloadEndpoint="localhost-k8s-whisker--6cb96cb9c4--s2xwd-" Sep 12 23:06:52.433600 containerd[1580]: 2025-09-12 23:06:52.221 [INFO][4069] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Namespace="calico-system" Pod="whisker-6cb96cb9c4-s2xwd" WorkloadEndpoint="localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0" Sep 12 23:06:52.433600 containerd[1580]: 2025-09-12 23:06:52.259 [INFO][4089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" HandleID="k8s-pod-network.146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Workload="localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0" Sep 12 23:06:52.434097 containerd[1580]: 2025-09-12 23:06:52.261 [INFO][4089] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" HandleID="k8s-pod-network.146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Workload="localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000120500), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6cb96cb9c4-s2xwd", "timestamp":"2025-09-12 23:06:52.259922572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:06:52.434097 containerd[1580]: 2025-09-12 23:06:52.261 [INFO][4089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:06:52.434097 containerd[1580]: 2025-09-12 23:06:52.261 [INFO][4089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:06:52.434097 containerd[1580]: 2025-09-12 23:06:52.261 [INFO][4089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:06:52.434097 containerd[1580]: 2025-09-12 23:06:52.358 [INFO][4089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" host="localhost" Sep 12 23:06:52.434097 containerd[1580]: 2025-09-12 23:06:52.367 [INFO][4089] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:06:52.434097 containerd[1580]: 2025-09-12 23:06:52.372 [INFO][4089] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:06:52.434097 containerd[1580]: 2025-09-12 23:06:52.375 [INFO][4089] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:52.434097 containerd[1580]: 2025-09-12 23:06:52.378 [INFO][4089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:52.434097 containerd[1580]: 2025-09-12 23:06:52.378 [INFO][4089] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" host="localhost" Sep 12 23:06:52.434338 containerd[1580]: 2025-09-12 23:06:52.389 [INFO][4089] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb Sep 12 23:06:52.434338 containerd[1580]: 2025-09-12 23:06:52.395 [INFO][4089] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" host="localhost" Sep 12 23:06:52.434338 containerd[1580]: 2025-09-12 23:06:52.402 [INFO][4089] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" host="localhost" Sep 12 23:06:52.434338 containerd[1580]: 2025-09-12 23:06:52.402 [INFO][4089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" host="localhost" Sep 12 23:06:52.434338 containerd[1580]: 2025-09-12 23:06:52.402 [INFO][4089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:06:52.434338 containerd[1580]: 2025-09-12 23:06:52.402 [INFO][4089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" HandleID="k8s-pod-network.146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Workload="localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0" Sep 12 23:06:52.434467 containerd[1580]: 2025-09-12 23:06:52.408 [INFO][4069] cni-plugin/k8s.go 418: Populated endpoint ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Namespace="calico-system" Pod="whisker-6cb96cb9c4-s2xwd" WorkloadEndpoint="localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0", GenerateName:"whisker-6cb96cb9c4-", Namespace:"calico-system", SelfLink:"", UID:"a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cb96cb9c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6cb96cb9c4-s2xwd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1f54c412d79", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:52.434467 containerd[1580]: 2025-09-12 23:06:52.408 [INFO][4069] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Namespace="calico-system" Pod="whisker-6cb96cb9c4-s2xwd" WorkloadEndpoint="localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0" Sep 12 23:06:52.434540 containerd[1580]: 2025-09-12 23:06:52.408 [INFO][4069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f54c412d79 ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Namespace="calico-system" Pod="whisker-6cb96cb9c4-s2xwd" WorkloadEndpoint="localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0" Sep 12 23:06:52.434540 containerd[1580]: 2025-09-12 23:06:52.411 [INFO][4069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Namespace="calico-system" Pod="whisker-6cb96cb9c4-s2xwd" WorkloadEndpoint="localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0" Sep 12 23:06:52.434582 containerd[1580]: 2025-09-12 23:06:52.412 [INFO][4069] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Namespace="calico-system" Pod="whisker-6cb96cb9c4-s2xwd" 
WorkloadEndpoint="localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0", GenerateName:"whisker-6cb96cb9c4-", Namespace:"calico-system", SelfLink:"", UID:"a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cb96cb9c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb", Pod:"whisker-6cb96cb9c4-s2xwd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1f54c412d79", MAC:"2a:89:85:80:37:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:52.434632 containerd[1580]: 2025-09-12 23:06:52.426 [INFO][4069] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" Namespace="calico-system" Pod="whisker-6cb96cb9c4-s2xwd" WorkloadEndpoint="localhost-k8s-whisker--6cb96cb9c4--s2xwd-eth0" Sep 12 23:06:52.447721 systemd-networkd[1498]: vxlan.calico: Link UP Sep 12 23:06:52.447874 systemd-networkd[1498]: 
vxlan.calico: Gained carrier Sep 12 23:06:52.467042 containerd[1580]: time="2025-09-12T23:06:52.466772088Z" level=info msg="connecting to shim 146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb" address="unix:///run/containerd/s/329c2df5b3c1c4de40c040a5dbb9739bb8959f161032cd052d2683c29b215ec2" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:52.499267 systemd[1]: Started cri-containerd-146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb.scope - libcontainer container 146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb. Sep 12 23:06:52.518180 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:06:52.676226 containerd[1580]: time="2025-09-12T23:06:52.676066011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cb96cb9c4-s2xwd,Uid:a1b13b31-d82b-4b42-9e09-0d5ccf0e63d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb\"" Sep 12 23:06:53.837203 containerd[1580]: time="2025-09-12T23:06:53.837129152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc657dddb-f7sh6,Uid:53c129ce-c8b6-4421-891f-027eaa23117b,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:06:53.952196 systemd-networkd[1498]: calidb119cc944f: Link UP Sep 12 23:06:53.952811 systemd-networkd[1498]: calidb119cc944f: Gained carrier Sep 12 23:06:53.970560 containerd[1580]: 2025-09-12 23:06:53.879 [INFO][4224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0 calico-apiserver-7dc657dddb- calico-apiserver 53c129ce-c8b6-4421-891f-027eaa23117b 884 0 2025-09-12 23:06:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dc657dddb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7dc657dddb-f7sh6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidb119cc944f [] [] }} ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-f7sh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-" Sep 12 23:06:53.970560 containerd[1580]: 2025-09-12 23:06:53.879 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-f7sh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0" Sep 12 23:06:53.970560 containerd[1580]: 2025-09-12 23:06:53.905 [INFO][4242] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" HandleID="k8s-pod-network.9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Workload="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0" Sep 12 23:06:53.971083 containerd[1580]: 2025-09-12 23:06:53.905 [INFO][4242] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" HandleID="k8s-pod-network.9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Workload="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7dc657dddb-f7sh6", "timestamp":"2025-09-12 23:06:53.905183775 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:06:53.971083 containerd[1580]: 2025-09-12 23:06:53.905 [INFO][4242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:06:53.971083 containerd[1580]: 2025-09-12 23:06:53.905 [INFO][4242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:06:53.971083 containerd[1580]: 2025-09-12 23:06:53.905 [INFO][4242] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:06:53.971083 containerd[1580]: 2025-09-12 23:06:53.913 [INFO][4242] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" host="localhost" Sep 12 23:06:53.971083 containerd[1580]: 2025-09-12 23:06:53.918 [INFO][4242] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:06:53.971083 containerd[1580]: 2025-09-12 23:06:53.924 [INFO][4242] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:06:53.971083 containerd[1580]: 2025-09-12 23:06:53.927 [INFO][4242] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:53.971083 containerd[1580]: 2025-09-12 23:06:53.930 [INFO][4242] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:53.971083 containerd[1580]: 2025-09-12 23:06:53.930 [INFO][4242] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" host="localhost" Sep 12 23:06:53.971399 containerd[1580]: 2025-09-12 23:06:53.933 [INFO][4242] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed Sep 12 23:06:53.971399 containerd[1580]: 2025-09-12 23:06:53.938 [INFO][4242] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" host="localhost" Sep 12 23:06:53.971399 containerd[1580]: 2025-09-12 23:06:53.946 [INFO][4242] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" host="localhost" Sep 12 23:06:53.971399 containerd[1580]: 2025-09-12 23:06:53.946 [INFO][4242] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" host="localhost" Sep 12 23:06:53.971399 containerd[1580]: 2025-09-12 23:06:53.946 [INFO][4242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:06:53.971399 containerd[1580]: 2025-09-12 23:06:53.946 [INFO][4242] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" HandleID="k8s-pod-network.9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Workload="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0" Sep 12 23:06:53.971584 containerd[1580]: 2025-09-12 23:06:53.949 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-f7sh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0", GenerateName:"calico-apiserver-7dc657dddb-", Namespace:"calico-apiserver", SelfLink:"", UID:"53c129ce-c8b6-4421-891f-027eaa23117b", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 22, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc657dddb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7dc657dddb-f7sh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb119cc944f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:53.971689 containerd[1580]: 2025-09-12 23:06:53.950 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-f7sh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0" Sep 12 23:06:53.971689 containerd[1580]: 2025-09-12 23:06:53.950 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb119cc944f ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-f7sh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0" Sep 12 23:06:53.971689 containerd[1580]: 2025-09-12 23:06:53.953 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-f7sh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0" Sep 12 23:06:53.971793 containerd[1580]: 2025-09-12 23:06:53.954 [INFO][4224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-f7sh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0", GenerateName:"calico-apiserver-7dc657dddb-", Namespace:"calico-apiserver", SelfLink:"", UID:"53c129ce-c8b6-4421-891f-027eaa23117b", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc657dddb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed", Pod:"calico-apiserver-7dc657dddb-f7sh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidb119cc944f", MAC:"aa:33:87:23:d4:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:53.971937 containerd[1580]: 2025-09-12 23:06:53.966 [INFO][4224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-f7sh6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--f7sh6-eth0" Sep 12 23:06:54.008780 containerd[1580]: time="2025-09-12T23:06:54.008400960Z" level=info msg="connecting to shim 9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed" address="unix:///run/containerd/s/0de501250e66d1de58cd6927e5e1ef5dc7416439db2f5c7e124918fef500d93e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:54.046093 systemd[1]: Started cri-containerd-9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed.scope - libcontainer container 9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed. 
Sep 12 23:06:54.061917 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:06:54.084105 systemd-networkd[1498]: vxlan.calico: Gained IPv6LL Sep 12 23:06:54.099417 containerd[1580]: time="2025-09-12T23:06:54.099183809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc657dddb-f7sh6,Uid:53c129ce-c8b6-4421-891f-027eaa23117b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed\"" Sep 12 23:06:54.148132 systemd-networkd[1498]: cali1f54c412d79: Gained IPv6LL Sep 12 23:06:54.836544 kubelet[2777]: E0912 23:06:54.836481 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:54.837247 containerd[1580]: time="2025-09-12T23:06:54.836987868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7r256,Uid:f8d1365e-c840-4eb2-b77c-dd3f9f92d921,Namespace:kube-system,Attempt:0,}" Sep 12 23:06:55.223247 systemd-networkd[1498]: cali06c8f3df40c: Link UP Sep 12 23:06:55.224065 systemd-networkd[1498]: cali06c8f3df40c: Gained carrier Sep 12 23:06:55.242905 containerd[1580]: 2025-09-12 23:06:55.134 [INFO][4305] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--7r256-eth0 coredns-7c65d6cfc9- kube-system f8d1365e-c840-4eb2-b77c-dd3f9f92d921 873 0 2025-09-12 23:06:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-7r256 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali06c8f3df40c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7r256" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7r256-" Sep 12 23:06:55.242905 containerd[1580]: 2025-09-12 23:06:55.134 [INFO][4305] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7r256" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7r256-eth0" Sep 12 23:06:55.242905 containerd[1580]: 2025-09-12 23:06:55.166 [INFO][4324] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" HandleID="k8s-pod-network.75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Workload="localhost-k8s-coredns--7c65d6cfc9--7r256-eth0" Sep 12 23:06:55.243235 containerd[1580]: 2025-09-12 23:06:55.166 [INFO][4324] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" HandleID="k8s-pod-network.75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Workload="localhost-k8s-coredns--7c65d6cfc9--7r256-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a2ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-7r256", "timestamp":"2025-09-12 23:06:55.166323633 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:06:55.243235 containerd[1580]: 2025-09-12 23:06:55.166 [INFO][4324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:06:55.243235 containerd[1580]: 2025-09-12 23:06:55.166 [INFO][4324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:06:55.243235 containerd[1580]: 2025-09-12 23:06:55.166 [INFO][4324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:06:55.243235 containerd[1580]: 2025-09-12 23:06:55.175 [INFO][4324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" host="localhost" Sep 12 23:06:55.243235 containerd[1580]: 2025-09-12 23:06:55.182 [INFO][4324] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:06:55.243235 containerd[1580]: 2025-09-12 23:06:55.188 [INFO][4324] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:06:55.243235 containerd[1580]: 2025-09-12 23:06:55.190 [INFO][4324] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:55.243235 containerd[1580]: 2025-09-12 23:06:55.194 [INFO][4324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:55.243235 containerd[1580]: 2025-09-12 23:06:55.194 [INFO][4324] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" host="localhost" Sep 12 23:06:55.243538 containerd[1580]: 2025-09-12 23:06:55.198 [INFO][4324] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115 Sep 12 23:06:55.243538 containerd[1580]: 2025-09-12 23:06:55.207 [INFO][4324] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" host="localhost" Sep 12 23:06:55.243538 containerd[1580]: 2025-09-12 23:06:55.215 [INFO][4324] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" host="localhost" Sep 12 23:06:55.243538 containerd[1580]: 2025-09-12 23:06:55.215 [INFO][4324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" host="localhost" Sep 12 23:06:55.243538 containerd[1580]: 2025-09-12 23:06:55.215 [INFO][4324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:06:55.243538 containerd[1580]: 2025-09-12 23:06:55.215 [INFO][4324] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" HandleID="k8s-pod-network.75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Workload="localhost-k8s-coredns--7c65d6cfc9--7r256-eth0" Sep 12 23:06:55.243776 containerd[1580]: 2025-09-12 23:06:55.220 [INFO][4305] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7r256" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7r256-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7r256-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f8d1365e-c840-4eb2-b77c-dd3f9f92d921", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-7r256", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06c8f3df40c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:55.243904 containerd[1580]: 2025-09-12 23:06:55.220 [INFO][4305] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7r256" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7r256-eth0" Sep 12 23:06:55.243904 containerd[1580]: 2025-09-12 23:06:55.220 [INFO][4305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06c8f3df40c ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7r256" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7r256-eth0" Sep 12 23:06:55.243904 containerd[1580]: 2025-09-12 23:06:55.224 [INFO][4305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7r256" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7r256-eth0" Sep 12 23:06:55.244001 containerd[1580]: 2025-09-12 23:06:55.224 [INFO][4305] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7r256" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7r256-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--7r256-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f8d1365e-c840-4eb2-b77c-dd3f9f92d921", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115", Pod:"coredns-7c65d6cfc9-7r256", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06c8f3df40c", MAC:"12:cd:fb:54:de:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:55.244001 containerd[1580]: 2025-09-12 23:06:55.238 [INFO][4305] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" Namespace="kube-system" Pod="coredns-7c65d6cfc9-7r256" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--7r256-eth0" Sep 12 23:06:55.280647 containerd[1580]: time="2025-09-12T23:06:55.280521416Z" level=info msg="connecting to shim 75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115" address="unix:///run/containerd/s/5be77ea8bb59a25eee9845342b84fbc95600b70fa23e4b24ea00e256d3856cfb" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:55.349397 systemd[1]: Started cri-containerd-75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115.scope - libcontainer container 75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115. 
Sep 12 23:06:55.377568 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:06:55.415220 containerd[1580]: time="2025-09-12T23:06:55.415161432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7r256,Uid:f8d1365e-c840-4eb2-b77c-dd3f9f92d921,Namespace:kube-system,Attempt:0,} returns sandbox id \"75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115\"" Sep 12 23:06:55.416221 kubelet[2777]: E0912 23:06:55.416030 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:55.427493 containerd[1580]: time="2025-09-12T23:06:55.427443173Z" level=info msg="CreateContainer within sandbox \"75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:06:55.581196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104401734.mount: Deactivated successfully. 
Sep 12 23:06:55.596886 containerd[1580]: time="2025-09-12T23:06:55.596624117Z" level=info msg="Container 458c5ef5666114ce7402612ddcd3aa4fb6f42db146a7f181e351bd3dc24a5724: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:55.837908 kubelet[2777]: E0912 23:06:55.837011 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:55.838486 containerd[1580]: time="2025-09-12T23:06:55.837617503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rn62,Uid:366b8825-9aaa-42c1-b70d-ae14ae3ca227,Namespace:calico-system,Attempt:0,}" Sep 12 23:06:55.838486 containerd[1580]: time="2025-09-12T23:06:55.837789044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc657dddb-t9mck,Uid:3280d612-04da-45ba-8f1d-ca150c949632,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:06:55.838486 containerd[1580]: time="2025-09-12T23:06:55.837892434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wrdr9,Uid:ec887171-1133-4f74-8a61-d4663af982e5,Namespace:kube-system,Attempt:0,}" Sep 12 23:06:55.838486 containerd[1580]: time="2025-09-12T23:06:55.837927211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2n8tf,Uid:42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd,Namespace:calico-system,Attempt:0,}" Sep 12 23:06:55.876098 systemd-networkd[1498]: calidb119cc944f: Gained IPv6LL Sep 12 23:06:56.025883 containerd[1580]: time="2025-09-12T23:06:56.025763015Z" level=info msg="CreateContainer within sandbox \"75f0bd8d94f194468ab499f14ba081c6f2e5d2b33b4c78eb4f02269657ea9115\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"458c5ef5666114ce7402612ddcd3aa4fb6f42db146a7f181e351bd3dc24a5724\"" Sep 12 23:06:56.026546 containerd[1580]: time="2025-09-12T23:06:56.026483217Z" level=info msg="StartContainer for 
\"458c5ef5666114ce7402612ddcd3aa4fb6f42db146a7f181e351bd3dc24a5724\"" Sep 12 23:06:56.027808 containerd[1580]: time="2025-09-12T23:06:56.027780112Z" level=info msg="connecting to shim 458c5ef5666114ce7402612ddcd3aa4fb6f42db146a7f181e351bd3dc24a5724" address="unix:///run/containerd/s/5be77ea8bb59a25eee9845342b84fbc95600b70fa23e4b24ea00e256d3856cfb" protocol=ttrpc version=3 Sep 12 23:06:56.035814 containerd[1580]: time="2025-09-12T23:06:56.035056996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:56.037536 containerd[1580]: time="2025-09-12T23:06:56.037474605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 23:06:56.041118 containerd[1580]: time="2025-09-12T23:06:56.041086542Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:56.044173 containerd[1580]: time="2025-09-12T23:06:56.044145801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:56.045207 containerd[1580]: time="2025-09-12T23:06:56.044845261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.807402303s" Sep 12 23:06:56.045562 containerd[1580]: time="2025-09-12T23:06:56.045288798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns 
image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 23:06:56.047906 containerd[1580]: time="2025-09-12T23:06:56.047884061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 23:06:56.057282 containerd[1580]: time="2025-09-12T23:06:56.057238888Z" level=info msg="CreateContainer within sandbox \"141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 23:06:56.075197 systemd[1]: Started cri-containerd-458c5ef5666114ce7402612ddcd3aa4fb6f42db146a7f181e351bd3dc24a5724.scope - libcontainer container 458c5ef5666114ce7402612ddcd3aa4fb6f42db146a7f181e351bd3dc24a5724. Sep 12 23:06:56.081545 containerd[1580]: time="2025-09-12T23:06:56.081483726Z" level=info msg="Container 545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:56.096877 containerd[1580]: time="2025-09-12T23:06:56.096647773Z" level=info msg="CreateContainer within sandbox \"141c09651024cd8f571032f2398128b0cf9920f49e433034f3216742a56cfb3f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13\"" Sep 12 23:06:56.097807 containerd[1580]: time="2025-09-12T23:06:56.097776113Z" level=info msg="StartContainer for \"545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13\"" Sep 12 23:06:56.099195 containerd[1580]: time="2025-09-12T23:06:56.099159043Z" level=info msg="connecting to shim 545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13" address="unix:///run/containerd/s/fd2b248cc9d0340d67305ca2339590b547772f3fe11a6497952c36d6b849d725" protocol=ttrpc version=3 Sep 12 23:06:56.137365 systemd[1]: Started cri-containerd-545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13.scope - libcontainer container 
545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13. Sep 12 23:06:56.193841 containerd[1580]: time="2025-09-12T23:06:56.193749447Z" level=info msg="StartContainer for \"458c5ef5666114ce7402612ddcd3aa4fb6f42db146a7f181e351bd3dc24a5724\" returns successfully" Sep 12 23:06:56.260334 containerd[1580]: time="2025-09-12T23:06:56.260068957Z" level=info msg="StartContainer for \"545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13\" returns successfully" Sep 12 23:06:56.276673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757450513.mount: Deactivated successfully. Sep 12 23:06:56.309265 systemd-networkd[1498]: calibee2d79ce8e: Link UP Sep 12 23:06:56.315366 systemd-networkd[1498]: calibee2d79ce8e: Gained carrier Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.080 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2rn62-eth0 csi-node-driver- calico-system 366b8825-9aaa-42c1-b70d-ae14ae3ca227 751 0 2025-09-12 23:06:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2rn62 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibee2d79ce8e [] [] }} ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" Namespace="calico-system" Pod="csi-node-driver-2rn62" WorkloadEndpoint="localhost-k8s-csi--node--driver--2rn62-" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.081 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" Namespace="calico-system" Pod="csi-node-driver-2rn62" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2rn62-eth0" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.186 [INFO][4472] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" HandleID="k8s-pod-network.35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" Workload="localhost-k8s-csi--node--driver--2rn62-eth0" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.187 [INFO][4472] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" HandleID="k8s-pod-network.35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" Workload="localhost-k8s-csi--node--driver--2rn62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035fb90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2rn62", "timestamp":"2025-09-12 23:06:56.186113328 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.187 [INFO][4472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.187 [INFO][4472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.187 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.222 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" host="localhost" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.234 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.248 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.253 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.258 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.258 [INFO][4472] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" host="localhost" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.262 [INFO][4472] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431 Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.281 [INFO][4472] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" host="localhost" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.294 [INFO][4472] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" host="localhost" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.294 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" host="localhost" Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.294 [INFO][4472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:06:56.344675 containerd[1580]: 2025-09-12 23:06:56.294 [INFO][4472] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" HandleID="k8s-pod-network.35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" Workload="localhost-k8s-csi--node--driver--2rn62-eth0" Sep 12 23:06:56.346194 containerd[1580]: 2025-09-12 23:06:56.298 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" Namespace="calico-system" Pod="csi-node-driver-2rn62" WorkloadEndpoint="localhost-k8s-csi--node--driver--2rn62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2rn62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"366b8825-9aaa-42c1-b70d-ae14ae3ca227", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2rn62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibee2d79ce8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:56.346194 containerd[1580]: 2025-09-12 23:06:56.298 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" Namespace="calico-system" Pod="csi-node-driver-2rn62" WorkloadEndpoint="localhost-k8s-csi--node--driver--2rn62-eth0" Sep 12 23:06:56.346194 containerd[1580]: 2025-09-12 23:06:56.298 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibee2d79ce8e ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" Namespace="calico-system" Pod="csi-node-driver-2rn62" WorkloadEndpoint="localhost-k8s-csi--node--driver--2rn62-eth0" Sep 12 23:06:56.346194 containerd[1580]: 2025-09-12 23:06:56.318 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" Namespace="calico-system" Pod="csi-node-driver-2rn62" WorkloadEndpoint="localhost-k8s-csi--node--driver--2rn62-eth0" Sep 12 23:06:56.346194 containerd[1580]: 2025-09-12 23:06:56.320 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" 
Namespace="calico-system" Pod="csi-node-driver-2rn62" WorkloadEndpoint="localhost-k8s-csi--node--driver--2rn62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2rn62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"366b8825-9aaa-42c1-b70d-ae14ae3ca227", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431", Pod:"csi-node-driver-2rn62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibee2d79ce8e", MAC:"4e:d6:d5:7b:0b:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:56.346194 containerd[1580]: 2025-09-12 23:06:56.338 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" Namespace="calico-system" Pod="csi-node-driver-2rn62" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2rn62-eth0" Sep 12 23:06:56.388116 systemd-networkd[1498]: cali06c8f3df40c: Gained IPv6LL Sep 12 23:06:56.391988 containerd[1580]: time="2025-09-12T23:06:56.391874117Z" level=info msg="connecting to shim 35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431" address="unix:///run/containerd/s/f9c5d5246f4012bc22d304f16f1f1944b10bda6f0e387c73ee501762dbff851b" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:56.409001 systemd-networkd[1498]: cali8e3e7d8a1ef: Link UP Sep 12 23:06:56.410236 systemd-networkd[1498]: cali8e3e7d8a1ef: Gained carrier Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.109 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0 calico-apiserver-7dc657dddb- calico-apiserver 3280d612-04da-45ba-8f1d-ca150c949632 878 0 2025-09-12 23:06:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dc657dddb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7dc657dddb-t9mck eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8e3e7d8a1ef [] [] }} ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-t9mck" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.109 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-t9mck" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0" Sep 12 23:06:56.440728 
containerd[1580]: 2025-09-12 23:06:56.236 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" HandleID="k8s-pod-network.dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Workload="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.236 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" HandleID="k8s-pod-network.dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Workload="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7dc657dddb-t9mck", "timestamp":"2025-09-12 23:06:56.236093531 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.236 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.294 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.294 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.322 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" host="localhost" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.332 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.353 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.358 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.366 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.366 [INFO][4510] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" host="localhost" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.369 [INFO][4510] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.376 [INFO][4510] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" host="localhost" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.389 [INFO][4510] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" host="localhost" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.389 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" host="localhost" Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.390 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:06:56.440728 containerd[1580]: 2025-09-12 23:06:56.390 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" HandleID="k8s-pod-network.dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Workload="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0" Sep 12 23:06:56.441667 containerd[1580]: 2025-09-12 23:06:56.402 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-t9mck" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0", GenerateName:"calico-apiserver-7dc657dddb-", Namespace:"calico-apiserver", SelfLink:"", UID:"3280d612-04da-45ba-8f1d-ca150c949632", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc657dddb", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7dc657dddb-t9mck", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e3e7d8a1ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:56.441667 containerd[1580]: 2025-09-12 23:06:56.403 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-t9mck" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0" Sep 12 23:06:56.441667 containerd[1580]: 2025-09-12 23:06:56.404 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e3e7d8a1ef ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-t9mck" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0" Sep 12 23:06:56.441667 containerd[1580]: 2025-09-12 23:06:56.411 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-t9mck" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0" Sep 12 23:06:56.441667 containerd[1580]: 2025-09-12 23:06:56.411 [INFO][4403] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-t9mck" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0", GenerateName:"calico-apiserver-7dc657dddb-", Namespace:"calico-apiserver", SelfLink:"", UID:"3280d612-04da-45ba-8f1d-ca150c949632", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc657dddb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d", Pod:"calico-apiserver-7dc657dddb-t9mck", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8e3e7d8a1ef", MAC:"b6:fe:2f:ff:7f:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:56.441667 containerd[1580]: 2025-09-12 23:06:56.429 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc657dddb-t9mck" WorkloadEndpoint="localhost-k8s-calico--apiserver--7dc657dddb--t9mck-eth0" Sep 12 23:06:56.445035 systemd[1]: Started cri-containerd-35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431.scope - libcontainer container 35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431. Sep 12 23:06:56.461764 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:06:56.482374 containerd[1580]: time="2025-09-12T23:06:56.482312972Z" level=info msg="connecting to shim dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d" address="unix:///run/containerd/s/7ec99951c825cfa245bccc1b2d34a506b69d1c6e97e6c31880549e1d4456216a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:56.488542 containerd[1580]: time="2025-09-12T23:06:56.488430788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rn62,Uid:366b8825-9aaa-42c1-b70d-ae14ae3ca227,Namespace:calico-system,Attempt:0,} returns sandbox id \"35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431\"" Sep 12 23:06:56.503386 systemd-networkd[1498]: cali186defea178: Link UP Sep 12 23:06:56.504049 systemd-networkd[1498]: cali186defea178: Gained carrier Sep 12 23:06:56.521107 systemd[1]: Started cri-containerd-dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d.scope - libcontainer container dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d. 
Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.147 [INFO][4418] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--2n8tf-eth0 goldmane-7988f88666- calico-system 42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd 883 0 2025-09-12 23:06:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-2n8tf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali186defea178 [] [] }} ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Namespace="calico-system" Pod="goldmane-7988f88666-2n8tf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--2n8tf-" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.147 [INFO][4418] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Namespace="calico-system" Pod="goldmane-7988f88666-2n8tf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--2n8tf-eth0" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.246 [INFO][4521] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" HandleID="k8s-pod-network.3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Workload="localhost-k8s-goldmane--7988f88666--2n8tf-eth0" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.246 [INFO][4521] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" HandleID="k8s-pod-network.3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Workload="localhost-k8s-goldmane--7988f88666--2n8tf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003548b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-2n8tf", "timestamp":"2025-09-12 23:06:56.246498146 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.251 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.393 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.393 [INFO][4521] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.422 [INFO][4521] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" host="localhost" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.434 [INFO][4521] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.450 [INFO][4521] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.463 [INFO][4521] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.470 [INFO][4521] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.470 [INFO][4521] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" 
host="localhost" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.471 [INFO][4521] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1 Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.477 [INFO][4521] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" host="localhost" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.488 [INFO][4521] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" host="localhost" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.488 [INFO][4521] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" host="localhost" Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.488 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:06:56.528977 containerd[1580]: 2025-09-12 23:06:56.488 [INFO][4521] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" HandleID="k8s-pod-network.3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Workload="localhost-k8s-goldmane--7988f88666--2n8tf-eth0" Sep 12 23:06:56.530299 containerd[1580]: 2025-09-12 23:06:56.494 [INFO][4418] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Namespace="calico-system" Pod="goldmane-7988f88666-2n8tf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--2n8tf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--2n8tf-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-2n8tf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali186defea178", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:56.530299 containerd[1580]: 2025-09-12 23:06:56.494 [INFO][4418] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Namespace="calico-system" Pod="goldmane-7988f88666-2n8tf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--2n8tf-eth0" Sep 12 23:06:56.530299 containerd[1580]: 2025-09-12 23:06:56.494 [INFO][4418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali186defea178 ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Namespace="calico-system" Pod="goldmane-7988f88666-2n8tf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--2n8tf-eth0" Sep 12 23:06:56.530299 containerd[1580]: 2025-09-12 23:06:56.508 [INFO][4418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Namespace="calico-system" Pod="goldmane-7988f88666-2n8tf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--2n8tf-eth0" Sep 12 23:06:56.530299 containerd[1580]: 2025-09-12 23:06:56.509 [INFO][4418] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Namespace="calico-system" Pod="goldmane-7988f88666-2n8tf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--2n8tf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--2n8tf-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 24, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1", Pod:"goldmane-7988f88666-2n8tf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali186defea178", MAC:"16:63:db:f0:e4:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:56.530299 containerd[1580]: 2025-09-12 23:06:56.520 [INFO][4418] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" Namespace="calico-system" Pod="goldmane-7988f88666-2n8tf" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--2n8tf-eth0" Sep 12 23:06:56.552953 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:06:56.577359 systemd-networkd[1498]: califd900f76763: Link UP Sep 12 23:06:56.579223 systemd-networkd[1498]: califd900f76763: Gained carrier Sep 12 23:06:56.582843 containerd[1580]: time="2025-09-12T23:06:56.582792420Z" level=info msg="connecting to shim 3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1" address="unix:///run/containerd/s/ce911b74ebd3fb2ed56b6ba527a1b3f68c24bf11260f1594199a09f5f482a4dd" namespace=k8s.io protocol=ttrpc version=3 Sep 12 
23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.157 [INFO][4420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0 coredns-7c65d6cfc9- kube-system ec887171-1133-4f74-8a61-d4663af982e5 881 0 2025-09-12 23:06:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-wrdr9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd900f76763 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrdr9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrdr9-" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.158 [INFO][4420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrdr9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.279 [INFO][4520] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" HandleID="k8s-pod-network.57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Workload="localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.281 [INFO][4520] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" HandleID="k8s-pod-network.57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Workload="localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00004e980), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-wrdr9", "timestamp":"2025-09-12 23:06:56.27961049 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.281 [INFO][4520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.488 [INFO][4520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.488 [INFO][4520] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.523 [INFO][4520] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" host="localhost" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.538 [INFO][4520] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.546 [INFO][4520] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.548 [INFO][4520] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.552 [INFO][4520] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.552 [INFO][4520] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" 
host="localhost" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.554 [INFO][4520] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.558 [INFO][4520] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" host="localhost" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.565 [INFO][4520] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" host="localhost" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.565 [INFO][4520] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" host="localhost" Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.565 [INFO][4520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:06:56.596357 containerd[1580]: 2025-09-12 23:06:56.565 [INFO][4520] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" HandleID="k8s-pod-network.57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Workload="localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0" Sep 12 23:06:56.597424 containerd[1580]: 2025-09-12 23:06:56.571 [INFO][4420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrdr9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ec887171-1133-4f74-8a61-d4663af982e5", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-wrdr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd900f76763", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:56.597424 containerd[1580]: 2025-09-12 23:06:56.571 [INFO][4420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrdr9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0" Sep 12 23:06:56.597424 containerd[1580]: 2025-09-12 23:06:56.571 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd900f76763 ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrdr9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0" Sep 12 23:06:56.597424 containerd[1580]: 2025-09-12 23:06:56.580 [INFO][4420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrdr9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0" Sep 12 23:06:56.597424 containerd[1580]: 2025-09-12 23:06:56.581 [INFO][4420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrdr9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ec887171-1133-4f74-8a61-d4663af982e5", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 6, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c", Pod:"coredns-7c65d6cfc9-wrdr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd900f76763", MAC:"de:1d:64:26:5d:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:06:56.597424 containerd[1580]: 2025-09-12 23:06:56.592 [INFO][4420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wrdr9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wrdr9-eth0" Sep 12 23:06:56.604488 containerd[1580]: time="2025-09-12T23:06:56.604427335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc657dddb-t9mck,Uid:3280d612-04da-45ba-8f1d-ca150c949632,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d\"" Sep 12 23:06:56.622031 systemd[1]: Started cri-containerd-3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1.scope - libcontainer container 3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1. Sep 12 23:06:56.628125 containerd[1580]: time="2025-09-12T23:06:56.628078736Z" level=info msg="connecting to shim 57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c" address="unix:///run/containerd/s/ce4d7fefeb94d2bfdc98e6fa0f1c9e4be00c4068742d39dc16b3a475c13caeb4" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:06:56.644561 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:06:56.667223 systemd[1]: Started cri-containerd-57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c.scope - libcontainer container 57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c. 
Sep 12 23:06:56.680905 containerd[1580]: time="2025-09-12T23:06:56.680836833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-2n8tf,Uid:42a72dd3-c9f6-4f6d-9f67-e39fa8f8eadd,Namespace:calico-system,Attempt:0,} returns sandbox id \"3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1\"" Sep 12 23:06:56.684474 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:06:56.718131 containerd[1580]: time="2025-09-12T23:06:56.718079103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wrdr9,Uid:ec887171-1133-4f74-8a61-d4663af982e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c\"" Sep 12 23:06:56.719140 kubelet[2777]: E0912 23:06:56.719103 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:56.721511 containerd[1580]: time="2025-09-12T23:06:56.721479030Z" level=info msg="CreateContainer within sandbox \"57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:06:56.733411 containerd[1580]: time="2025-09-12T23:06:56.733359456Z" level=info msg="Container b4157362be88821fc626941ff63c063672fcaef71b30155a573e0144d9cbfa70: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:56.742088 containerd[1580]: time="2025-09-12T23:06:56.742044260Z" level=info msg="CreateContainer within sandbox \"57cebc7b4c7b983db4e369ef7bd74f0b040078f426c3b27728cb8ce08fdfe27c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4157362be88821fc626941ff63c063672fcaef71b30155a573e0144d9cbfa70\"" Sep 12 23:06:56.742982 containerd[1580]: time="2025-09-12T23:06:56.742606376Z" level=info msg="StartContainer for 
\"b4157362be88821fc626941ff63c063672fcaef71b30155a573e0144d9cbfa70\"" Sep 12 23:06:56.743611 containerd[1580]: time="2025-09-12T23:06:56.743589555Z" level=info msg="connecting to shim b4157362be88821fc626941ff63c063672fcaef71b30155a573e0144d9cbfa70" address="unix:///run/containerd/s/ce4d7fefeb94d2bfdc98e6fa0f1c9e4be00c4068742d39dc16b3a475c13caeb4" protocol=ttrpc version=3 Sep 12 23:06:56.772383 systemd[1]: Started cri-containerd-b4157362be88821fc626941ff63c063672fcaef71b30155a573e0144d9cbfa70.scope - libcontainer container b4157362be88821fc626941ff63c063672fcaef71b30155a573e0144d9cbfa70. Sep 12 23:06:56.814252 containerd[1580]: time="2025-09-12T23:06:56.814203720Z" level=info msg="StartContainer for \"b4157362be88821fc626941ff63c063672fcaef71b30155a573e0144d9cbfa70\" returns successfully" Sep 12 23:06:57.112461 kubelet[2777]: E0912 23:06:57.112423 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:57.121431 kubelet[2777]: E0912 23:06:57.121406 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:57.279016 containerd[1580]: time="2025-09-12T23:06:57.278969762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13\" id:\"4dbef15956d0657ea6339790b6135fafce4c64329e96ef7b6bef30a7733118a1\" pid:4842 exited_at:{seconds:1757718417 nanos:278208133}" Sep 12 23:06:57.302877 kubelet[2777]: I0912 23:06:57.302659 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7r256" podStartSLOduration=45.302634753 podStartE2EDuration="45.302634753s" podCreationTimestamp="2025-09-12 23:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:06:57.276099534 +0000 UTC m=+51.544130067" watchObservedRunningTime="2025-09-12 23:06:57.302634753 +0000 UTC m=+51.570665206" Sep 12 23:06:57.303424 kubelet[2777]: I0912 23:06:57.303347 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wrdr9" podStartSLOduration=46.303337059 podStartE2EDuration="46.303337059s" podCreationTimestamp="2025-09-12 23:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:06:57.303158404 +0000 UTC m=+51.571188867" watchObservedRunningTime="2025-09-12 23:06:57.303337059 +0000 UTC m=+51.571367512" Sep 12 23:06:57.460984 kubelet[2777]: I0912 23:06:57.460740 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c5cf69d5-gn9vn" podStartSLOduration=28.650124982 podStartE2EDuration="32.460718116s" podCreationTimestamp="2025-09-12 23:06:25 +0000 UTC" firstStartedPulling="2025-09-12 23:06:52.236563452 +0000 UTC m=+46.504593905" lastFinishedPulling="2025-09-12 23:06:56.047156576 +0000 UTC m=+50.315187039" observedRunningTime="2025-09-12 23:06:57.423315644 +0000 UTC m=+51.691346097" watchObservedRunningTime="2025-09-12 23:06:57.460718116 +0000 UTC m=+51.728748569" Sep 12 23:06:57.604610 systemd-networkd[1498]: cali8e3e7d8a1ef: Gained IPv6LL Sep 12 23:06:57.732163 systemd-networkd[1498]: calibee2d79ce8e: Gained IPv6LL Sep 12 23:06:58.137651 kubelet[2777]: E0912 23:06:58.137499 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:58.137651 kubelet[2777]: E0912 23:06:58.137581 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 12 23:06:58.180188 systemd-networkd[1498]: califd900f76763: Gained IPv6LL Sep 12 23:06:58.436070 systemd-networkd[1498]: cali186defea178: Gained IPv6LL Sep 12 23:06:59.140832 kubelet[2777]: E0912 23:06:59.140606 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:59.141369 kubelet[2777]: E0912 23:06:59.141180 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:06:59.171179 containerd[1580]: time="2025-09-12T23:06:59.171122375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:59.173434 containerd[1580]: time="2025-09-12T23:06:59.173376249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 23:06:59.207835 containerd[1580]: time="2025-09-12T23:06:59.207779483Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:59.215494 containerd[1580]: time="2025-09-12T23:06:59.215094890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:06:59.216282 containerd[1580]: time="2025-09-12T23:06:59.216254966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.168221947s" Sep 12 23:06:59.216366 containerd[1580]: time="2025-09-12T23:06:59.216286426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 23:06:59.218171 containerd[1580]: time="2025-09-12T23:06:59.217991683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:06:59.219906 containerd[1580]: time="2025-09-12T23:06:59.219841536Z" level=info msg="CreateContainer within sandbox \"146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 23:06:59.230672 containerd[1580]: time="2025-09-12T23:06:59.230583196Z" level=info msg="Container 050036610c7dba0dc47ecf4bcce9b552ec2fc03407a9c83118a496a610183300: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:06:59.243295 containerd[1580]: time="2025-09-12T23:06:59.243247710Z" level=info msg="CreateContainer within sandbox \"146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"050036610c7dba0dc47ecf4bcce9b552ec2fc03407a9c83118a496a610183300\"" Sep 12 23:06:59.243836 containerd[1580]: time="2025-09-12T23:06:59.243805836Z" level=info msg="StartContainer for \"050036610c7dba0dc47ecf4bcce9b552ec2fc03407a9c83118a496a610183300\"" Sep 12 23:06:59.244956 containerd[1580]: time="2025-09-12T23:06:59.244932056Z" level=info msg="connecting to shim 050036610c7dba0dc47ecf4bcce9b552ec2fc03407a9c83118a496a610183300" address="unix:///run/containerd/s/329c2df5b3c1c4de40c040a5dbb9739bb8959f161032cd052d2683c29b215ec2" protocol=ttrpc version=3 Sep 12 23:06:59.267125 systemd[1]: Started cri-containerd-050036610c7dba0dc47ecf4bcce9b552ec2fc03407a9c83118a496a610183300.scope - libcontainer container 
050036610c7dba0dc47ecf4bcce9b552ec2fc03407a9c83118a496a610183300. Sep 12 23:06:59.428992 containerd[1580]: time="2025-09-12T23:06:59.428781642Z" level=info msg="StartContainer for \"050036610c7dba0dc47ecf4bcce9b552ec2fc03407a9c83118a496a610183300\" returns successfully" Sep 12 23:07:03.703340 containerd[1580]: time="2025-09-12T23:07:03.703279353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:03.704372 containerd[1580]: time="2025-09-12T23:07:03.704332788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 23:07:03.708602 containerd[1580]: time="2025-09-12T23:07:03.708552977Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:03.732220 containerd[1580]: time="2025-09-12T23:07:03.732183338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:03.732983 containerd[1580]: time="2025-09-12T23:07:03.732952697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.51490742s" Sep 12 23:07:03.733060 containerd[1580]: time="2025-09-12T23:07:03.732988184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 23:07:03.734270 containerd[1580]: 
time="2025-09-12T23:07:03.734227867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 23:07:03.735219 containerd[1580]: time="2025-09-12T23:07:03.735172252Z" level=info msg="CreateContainer within sandbox \"9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:07:03.747462 containerd[1580]: time="2025-09-12T23:07:03.747411963Z" level=info msg="Container 8a8c3243424f16d36b755f892202e74b31d5e9df7aba08b2d27f5a49bee0fd07: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:03.761726 containerd[1580]: time="2025-09-12T23:07:03.761664552Z" level=info msg="CreateContainer within sandbox \"9b5f177ed7f31e63543242b4e91f7740119336baeb6e0e56f4ead927cc9e00ed\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8a8c3243424f16d36b755f892202e74b31d5e9df7aba08b2d27f5a49bee0fd07\"" Sep 12 23:07:03.766028 containerd[1580]: time="2025-09-12T23:07:03.765966349Z" level=info msg="StartContainer for \"8a8c3243424f16d36b755f892202e74b31d5e9df7aba08b2d27f5a49bee0fd07\"" Sep 12 23:07:03.767087 containerd[1580]: time="2025-09-12T23:07:03.767058288Z" level=info msg="connecting to shim 8a8c3243424f16d36b755f892202e74b31d5e9df7aba08b2d27f5a49bee0fd07" address="unix:///run/containerd/s/0de501250e66d1de58cd6927e5e1ef5dc7416439db2f5c7e124918fef500d93e" protocol=ttrpc version=3 Sep 12 23:07:03.797193 systemd[1]: Started cri-containerd-8a8c3243424f16d36b755f892202e74b31d5e9df7aba08b2d27f5a49bee0fd07.scope - libcontainer container 8a8c3243424f16d36b755f892202e74b31d5e9df7aba08b2d27f5a49bee0fd07. Sep 12 23:07:03.851460 containerd[1580]: time="2025-09-12T23:07:03.851406330Z" level=info msg="StartContainer for \"8a8c3243424f16d36b755f892202e74b31d5e9df7aba08b2d27f5a49bee0fd07\" returns successfully" Sep 12 23:07:04.304224 systemd[1]: Started sshd@9-10.0.0.126:22-10.0.0.1:57376.service - OpenSSH per-connection server daemon (10.0.0.1:57376). 
Sep 12 23:07:04.407190 kubelet[2777]: I0912 23:07:04.407082 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dc657dddb-f7sh6" podStartSLOduration=32.77449427 podStartE2EDuration="42.407057054s" podCreationTimestamp="2025-09-12 23:06:22 +0000 UTC" firstStartedPulling="2025-09-12 23:06:54.101215719 +0000 UTC m=+48.369246172" lastFinishedPulling="2025-09-12 23:07:03.733778503 +0000 UTC m=+58.001808956" observedRunningTime="2025-09-12 23:07:04.401567211 +0000 UTC m=+58.669597664" watchObservedRunningTime="2025-09-12 23:07:04.407057054 +0000 UTC m=+58.675087507" Sep 12 23:07:04.546730 sshd[4954]: Accepted publickey for core from 10.0.0.1 port 57376 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:07:04.548628 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:07:04.565365 systemd-logind[1515]: New session 10 of user core. Sep 12 23:07:04.575071 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 23:07:04.787747 sshd[4959]: Connection closed by 10.0.0.1 port 57376 Sep 12 23:07:04.788558 sshd-session[4954]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:04.795387 systemd[1]: sshd@9-10.0.0.126:22-10.0.0.1:57376.service: Deactivated successfully. Sep 12 23:07:04.800084 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 23:07:04.803324 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. Sep 12 23:07:04.804555 systemd-logind[1515]: Removed session 10. 
Sep 12 23:07:05.163571 kubelet[2777]: I0912 23:07:05.163506 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:07:06.593700 containerd[1580]: time="2025-09-12T23:07:06.593632706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:06.594508 containerd[1580]: time="2025-09-12T23:07:06.594473289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 23:07:06.595818 containerd[1580]: time="2025-09-12T23:07:06.595787239Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:06.598248 containerd[1580]: time="2025-09-12T23:07:06.598216419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:06.598816 containerd[1580]: time="2025-09-12T23:07:06.598746685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.864467961s" Sep 12 23:07:06.598816 containerd[1580]: time="2025-09-12T23:07:06.598804076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 23:07:06.600019 containerd[1580]: time="2025-09-12T23:07:06.599968930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:07:06.601111 containerd[1580]: time="2025-09-12T23:07:06.601073439Z" 
level=info msg="CreateContainer within sandbox \"35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 23:07:06.614944 containerd[1580]: time="2025-09-12T23:07:06.614883979Z" level=info msg="Container 046d24e40734178d449f1fbb789c4e02e399f6a816cebdd7b86f0d273ecae4ad: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:06.631301 containerd[1580]: time="2025-09-12T23:07:06.631254692Z" level=info msg="CreateContainer within sandbox \"35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"046d24e40734178d449f1fbb789c4e02e399f6a816cebdd7b86f0d273ecae4ad\"" Sep 12 23:07:06.631928 containerd[1580]: time="2025-09-12T23:07:06.631899688Z" level=info msg="StartContainer for \"046d24e40734178d449f1fbb789c4e02e399f6a816cebdd7b86f0d273ecae4ad\"" Sep 12 23:07:06.633649 containerd[1580]: time="2025-09-12T23:07:06.633617915Z" level=info msg="connecting to shim 046d24e40734178d449f1fbb789c4e02e399f6a816cebdd7b86f0d273ecae4ad" address="unix:///run/containerd/s/f9c5d5246f4012bc22d304f16f1f1944b10bda6f0e387c73ee501762dbff851b" protocol=ttrpc version=3 Sep 12 23:07:06.661104 systemd[1]: Started cri-containerd-046d24e40734178d449f1fbb789c4e02e399f6a816cebdd7b86f0d273ecae4ad.scope - libcontainer container 046d24e40734178d449f1fbb789c4e02e399f6a816cebdd7b86f0d273ecae4ad. 
Sep 12 23:07:06.708574 containerd[1580]: time="2025-09-12T23:07:06.708535088Z" level=info msg="StartContainer for \"046d24e40734178d449f1fbb789c4e02e399f6a816cebdd7b86f0d273ecae4ad\" returns successfully" Sep 12 23:07:07.010958 containerd[1580]: time="2025-09-12T23:07:07.010784192Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:07.012310 containerd[1580]: time="2025-09-12T23:07:07.012137376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 23:07:07.014827 containerd[1580]: time="2025-09-12T23:07:07.014577695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 414.565772ms" Sep 12 23:07:07.014827 containerd[1580]: time="2025-09-12T23:07:07.014641767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 23:07:07.016172 containerd[1580]: time="2025-09-12T23:07:07.016110433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 23:07:07.017615 containerd[1580]: time="2025-09-12T23:07:07.017570612Z" level=info msg="CreateContainer within sandbox \"dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:07:07.039158 containerd[1580]: time="2025-09-12T23:07:07.039059848Z" level=info msg="Container 5c8b40da42627f565a1ad10057756b37ba27be41461036f2e0c46528bc79783f: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:07.060325 containerd[1580]: 
time="2025-09-12T23:07:07.060239140Z" level=info msg="CreateContainer within sandbox \"dbba2e2dd8471a1c3b579ad748d9ae8f23b9742ddaff69390944105a337f1b2d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5c8b40da42627f565a1ad10057756b37ba27be41461036f2e0c46528bc79783f\"" Sep 12 23:07:07.061549 containerd[1580]: time="2025-09-12T23:07:07.061447226Z" level=info msg="StartContainer for \"5c8b40da42627f565a1ad10057756b37ba27be41461036f2e0c46528bc79783f\"" Sep 12 23:07:07.063177 containerd[1580]: time="2025-09-12T23:07:07.063134079Z" level=info msg="connecting to shim 5c8b40da42627f565a1ad10057756b37ba27be41461036f2e0c46528bc79783f" address="unix:///run/containerd/s/7ec99951c825cfa245bccc1b2d34a506b69d1c6e97e6c31880549e1d4456216a" protocol=ttrpc version=3 Sep 12 23:07:07.102503 systemd[1]: Started cri-containerd-5c8b40da42627f565a1ad10057756b37ba27be41461036f2e0c46528bc79783f.scope - libcontainer container 5c8b40da42627f565a1ad10057756b37ba27be41461036f2e0c46528bc79783f. Sep 12 23:07:07.229304 containerd[1580]: time="2025-09-12T23:07:07.228695120Z" level=info msg="StartContainer for \"5c8b40da42627f565a1ad10057756b37ba27be41461036f2e0c46528bc79783f\" returns successfully" Sep 12 23:07:08.202579 kubelet[2777]: I0912 23:07:08.202479 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dc657dddb-t9mck" podStartSLOduration=35.79309811 podStartE2EDuration="46.202455811s" podCreationTimestamp="2025-09-12 23:06:22 +0000 UTC" firstStartedPulling="2025-09-12 23:06:56.606445062 +0000 UTC m=+50.874475515" lastFinishedPulling="2025-09-12 23:07:07.015802763 +0000 UTC m=+61.283833216" observedRunningTime="2025-09-12 23:07:08.202074962 +0000 UTC m=+62.470105405" watchObservedRunningTime="2025-09-12 23:07:08.202455811 +0000 UTC m=+62.470486264" Sep 12 23:07:09.803452 systemd[1]: Started sshd@10-10.0.0.126:22-10.0.0.1:57378.service - OpenSSH per-connection server daemon (10.0.0.1:57378). 
Sep 12 23:07:10.062447 sshd[5068]: Accepted publickey for core from 10.0.0.1 port 57378 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:07:10.065579 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:07:10.077636 systemd-logind[1515]: New session 11 of user core. Sep 12 23:07:10.086331 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 23:07:10.399018 containerd[1580]: time="2025-09-12T23:07:10.398814011Z" level=info msg="TaskExit event in podsandbox handler container_id:\"545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13\" id:\"f23ddc6f8153801311c8536180ec88905e2d5ea9445721483bd6f83681af8ee8\" pid:5097 exited_at:{seconds:1757718430 nanos:398307741}" Sep 12 23:07:10.400535 sshd[5075]: Connection closed by 10.0.0.1 port 57378 Sep 12 23:07:10.400278 sshd-session[5068]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:10.410462 systemd[1]: sshd@10-10.0.0.126:22-10.0.0.1:57378.service: Deactivated successfully. Sep 12 23:07:10.412311 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. Sep 12 23:07:10.415589 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 23:07:10.420443 systemd-logind[1515]: Removed session 11. Sep 12 23:07:10.801587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088003348.mount: Deactivated successfully. 
Sep 12 23:07:12.942191 containerd[1580]: time="2025-09-12T23:07:12.942103073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:12.943361 containerd[1580]: time="2025-09-12T23:07:12.943310821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 23:07:12.945130 containerd[1580]: time="2025-09-12T23:07:12.945048915Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:12.947954 containerd[1580]: time="2025-09-12T23:07:12.947904415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:12.949015 containerd[1580]: time="2025-09-12T23:07:12.948973819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.932794786s" Sep 12 23:07:12.949015 containerd[1580]: time="2025-09-12T23:07:12.949009958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 23:07:12.950412 containerd[1580]: time="2025-09-12T23:07:12.950358667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 23:07:12.958153 containerd[1580]: time="2025-09-12T23:07:12.958105519Z" level=info msg="CreateContainer within sandbox 
\"3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 23:07:12.969498 containerd[1580]: time="2025-09-12T23:07:12.969432706Z" level=info msg="Container 97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:12.985733 containerd[1580]: time="2025-09-12T23:07:12.985646588Z" level=info msg="CreateContainer within sandbox \"3762105b565441297ddf5c6be95a47486e7bb51a6be554f0af25326432fa4fa1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a\"" Sep 12 23:07:12.986581 containerd[1580]: time="2025-09-12T23:07:12.986541610Z" level=info msg="StartContainer for \"97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a\"" Sep 12 23:07:12.987888 containerd[1580]: time="2025-09-12T23:07:12.987823801Z" level=info msg="connecting to shim 97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a" address="unix:///run/containerd/s/ce911b74ebd3fb2ed56b6ba527a1b3f68c24bf11260f1594199a09f5f482a4dd" protocol=ttrpc version=3 Sep 12 23:07:13.021149 systemd[1]: Started cri-containerd-97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a.scope - libcontainer container 97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a. 
Sep 12 23:07:13.082235 containerd[1580]: time="2025-09-12T23:07:13.082159418Z" level=info msg="StartContainer for \"97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a\" returns successfully" Sep 12 23:07:13.320097 containerd[1580]: time="2025-09-12T23:07:13.320032519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a\" id:\"663af3c9b1fc325357f1a89671bf4e2cdb074db2b7a72f27d811729426979d2d\" pid:5175 exit_status:1 exited_at:{seconds:1757718433 nanos:319161755}" Sep 12 23:07:14.325268 containerd[1580]: time="2025-09-12T23:07:14.325191915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a\" id:\"50801b11ac44320daa675828f647b3666e9422a3973abbe8bafeafc111d22f08\" pid:5203 exit_status:1 exited_at:{seconds:1757718434 nanos:324750351}" Sep 12 23:07:14.889921 containerd[1580]: time="2025-09-12T23:07:14.889838543Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707\" id:\"ead610dc26b88385cafe0eaa577c66223f333849f23d5c87785bd1df2f9c0eb1\" pid:5228 exited_at:{seconds:1757718434 nanos:889447436}" Sep 12 23:07:14.934876 kubelet[2777]: I0912 23:07:14.934705 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-2n8tf" podStartSLOduration=34.667716183 podStartE2EDuration="50.934681172s" podCreationTimestamp="2025-09-12 23:06:24 +0000 UTC" firstStartedPulling="2025-09-12 23:06:56.68316489 +0000 UTC m=+50.951195343" lastFinishedPulling="2025-09-12 23:07:12.950129869 +0000 UTC m=+67.218160332" observedRunningTime="2025-09-12 23:07:13.252724335 +0000 UTC m=+67.520754788" watchObservedRunningTime="2025-09-12 23:07:14.934681172 +0000 UTC m=+69.202711625" Sep 12 23:07:14.994949 containerd[1580]: time="2025-09-12T23:07:14.994831242Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707\" id:\"52314e75c6b25ecd9cd4d3b6153b8a172d719f38d57348629a651840b35f0afc\" pid:5252 exited_at:{seconds:1757718434 nanos:994428582}" Sep 12 23:07:15.426069 systemd[1]: Started sshd@11-10.0.0.126:22-10.0.0.1:45982.service - OpenSSH per-connection server daemon (10.0.0.1:45982). Sep 12 23:07:15.529740 sshd[5266]: Accepted publickey for core from 10.0.0.1 port 45982 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:07:15.531909 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:07:15.537390 systemd-logind[1515]: New session 12 of user core. Sep 12 23:07:15.547105 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 23:07:16.037733 sshd[5270]: Connection closed by 10.0.0.1 port 45982 Sep 12 23:07:16.038970 sshd-session[5266]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:16.044688 systemd[1]: sshd@11-10.0.0.126:22-10.0.0.1:45982.service: Deactivated successfully. Sep 12 23:07:16.047000 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 23:07:16.048400 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. Sep 12 23:07:16.050066 systemd-logind[1515]: Removed session 12. Sep 12 23:07:16.226042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3815243929.mount: Deactivated successfully. 
Sep 12 23:07:18.341938 containerd[1580]: time="2025-09-12T23:07:18.341796137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:18.343996 containerd[1580]: time="2025-09-12T23:07:18.343956407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 23:07:18.346377 containerd[1580]: time="2025-09-12T23:07:18.346266151Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:18.349672 containerd[1580]: time="2025-09-12T23:07:18.349609918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:18.350642 containerd[1580]: time="2025-09-12T23:07:18.350574847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.400178228s" Sep 12 23:07:18.350642 containerd[1580]: time="2025-09-12T23:07:18.350631776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 23:07:18.353999 containerd[1580]: time="2025-09-12T23:07:18.352579701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 23:07:18.353999 containerd[1580]: time="2025-09-12T23:07:18.353616338Z" level=info msg="CreateContainer within sandbox 
\"146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 23:07:18.368146 containerd[1580]: time="2025-09-12T23:07:18.367798421Z" level=info msg="Container 48f5c34b28f3a10c3a85c7c5c26140c1617e8d571ca55fcbc1495f31e60240b1: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:18.385900 containerd[1580]: time="2025-09-12T23:07:18.385822333Z" level=info msg="CreateContainer within sandbox \"146cd33f9a389ff6267276aea33fef416fcfb5d458a5965154ec888c99843bcb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"48f5c34b28f3a10c3a85c7c5c26140c1617e8d571ca55fcbc1495f31e60240b1\"" Sep 12 23:07:18.386550 containerd[1580]: time="2025-09-12T23:07:18.386497360Z" level=info msg="StartContainer for \"48f5c34b28f3a10c3a85c7c5c26140c1617e8d571ca55fcbc1495f31e60240b1\"" Sep 12 23:07:18.387951 containerd[1580]: time="2025-09-12T23:07:18.387907689Z" level=info msg="connecting to shim 48f5c34b28f3a10c3a85c7c5c26140c1617e8d571ca55fcbc1495f31e60240b1" address="unix:///run/containerd/s/329c2df5b3c1c4de40c040a5dbb9739bb8959f161032cd052d2683c29b215ec2" protocol=ttrpc version=3 Sep 12 23:07:18.413026 systemd[1]: Started cri-containerd-48f5c34b28f3a10c3a85c7c5c26140c1617e8d571ca55fcbc1495f31e60240b1.scope - libcontainer container 48f5c34b28f3a10c3a85c7c5c26140c1617e8d571ca55fcbc1495f31e60240b1. 
Sep 12 23:07:18.473477 containerd[1580]: time="2025-09-12T23:07:18.473337619Z" level=info msg="StartContainer for \"48f5c34b28f3a10c3a85c7c5c26140c1617e8d571ca55fcbc1495f31e60240b1\" returns successfully" Sep 12 23:07:19.516351 kubelet[2777]: I0912 23:07:19.516046 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6cb96cb9c4-s2xwd" podStartSLOduration=2.842691273 podStartE2EDuration="28.516022105s" podCreationTimestamp="2025-09-12 23:06:51 +0000 UTC" firstStartedPulling="2025-09-12 23:06:52.678254252 +0000 UTC m=+46.946284705" lastFinishedPulling="2025-09-12 23:07:18.351585074 +0000 UTC m=+72.619615537" observedRunningTime="2025-09-12 23:07:19.514837197 +0000 UTC m=+73.782867650" watchObservedRunningTime="2025-09-12 23:07:19.516022105 +0000 UTC m=+73.784052558" Sep 12 23:07:19.837753 kubelet[2777]: E0912 23:07:19.837567 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:21.056360 systemd[1]: Started sshd@12-10.0.0.126:22-10.0.0.1:47712.service - OpenSSH per-connection server daemon (10.0.0.1:47712). Sep 12 23:07:21.148239 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 47712 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:07:21.150599 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:07:21.156721 systemd-logind[1515]: New session 13 of user core. Sep 12 23:07:21.164046 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 23:07:21.668289 sshd[5339]: Connection closed by 10.0.0.1 port 47712 Sep 12 23:07:21.668696 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:21.680609 systemd[1]: sshd@12-10.0.0.126:22-10.0.0.1:47712.service: Deactivated successfully. Sep 12 23:07:21.682505 systemd[1]: session-13.scope: Deactivated successfully. 
Sep 12 23:07:21.683458 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. Sep 12 23:07:21.686384 systemd[1]: Started sshd@13-10.0.0.126:22-10.0.0.1:47722.service - OpenSSH per-connection server daemon (10.0.0.1:47722). Sep 12 23:07:21.687378 systemd-logind[1515]: Removed session 13. Sep 12 23:07:21.755136 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 47722 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:07:21.756871 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:07:21.761644 systemd-logind[1515]: New session 14 of user core. Sep 12 23:07:21.779034 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 23:07:21.836708 kubelet[2777]: E0912 23:07:21.836640 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:21.978654 sshd[5357]: Connection closed by 10.0.0.1 port 47722 Sep 12 23:07:21.979139 sshd-session[5354]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:21.990133 systemd[1]: sshd@13-10.0.0.126:22-10.0.0.1:47722.service: Deactivated successfully. Sep 12 23:07:21.992363 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 23:07:21.993322 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. Sep 12 23:07:21.997401 systemd[1]: Started sshd@14-10.0.0.126:22-10.0.0.1:47736.service - OpenSSH per-connection server daemon (10.0.0.1:47736). Sep 12 23:07:21.998483 systemd-logind[1515]: Removed session 14. Sep 12 23:07:22.054648 sshd[5369]: Accepted publickey for core from 10.0.0.1 port 47736 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:07:22.056403 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:07:22.062219 systemd-logind[1515]: New session 15 of user core. 
Sep 12 23:07:22.069022 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 23:07:22.074834 containerd[1580]: time="2025-09-12T23:07:22.074792953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:22.129833 containerd[1580]: time="2025-09-12T23:07:22.129442178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 23:07:22.163624 containerd[1580]: time="2025-09-12T23:07:22.163551344Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:22.221954 sshd[5372]: Connection closed by 10.0.0.1 port 47736 Sep 12 23:07:22.222320 sshd-session[5369]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:22.225327 systemd[1]: sshd@14-10.0.0.126:22-10.0.0.1:47736.service: Deactivated successfully. Sep 12 23:07:22.227336 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 23:07:22.229607 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. Sep 12 23:07:22.230477 systemd-logind[1515]: Removed session 15. 
Sep 12 23:07:22.264174 containerd[1580]: time="2025-09-12T23:07:22.264112537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:07:22.265003 containerd[1580]: time="2025-09-12T23:07:22.264894015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.912277814s" Sep 12 23:07:22.265003 containerd[1580]: time="2025-09-12T23:07:22.264934683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 23:07:22.277031 containerd[1580]: time="2025-09-12T23:07:22.276961953Z" level=info msg="CreateContainer within sandbox \"35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 23:07:22.315324 containerd[1580]: time="2025-09-12T23:07:22.315270755Z" level=info msg="Container ef6df85a9d5b553eb8973cbe7deeab9eac6982bc610c84d85acbeca9582b8e29: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:07:22.323194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819452117.mount: Deactivated successfully. 
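Each containerd `Pulled image` entry above pairs an uncompressed size with a wall-clock pull duration, so average throughput falls out directly. A small helper (my own, not part of containerd) applied to the node-driver-registrar pull logged here:

```python
def pull_rate_mib_s(size_bytes: int, seconds: float) -> float:
    """Average pull throughput in MiB/s from a containerd PullImage entry."""
    return size_bytes / seconds / (1024 * 1024)

# Figures from the entry above: size "16191197" bytes in 3.912277814s.
rate = pull_rate_mib_s(16191197, 3.912277814)
print(f"{rate:.2f} MiB/s")  # 3.95 MiB/s
```

Note the logged `size` is the repo-digest size, which can differ from the `bytes read` counter in the preceding `stop pulling image` entries when layers are already cached.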
Sep 12 23:07:22.377258 containerd[1580]: time="2025-09-12T23:07:22.377180915Z" level=info msg="CreateContainer within sandbox \"35f8d45443c68ce671cf7b7d52f48019292c3ea8f0fbae189264381c8acab431\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ef6df85a9d5b553eb8973cbe7deeab9eac6982bc610c84d85acbeca9582b8e29\"" Sep 12 23:07:22.378094 containerd[1580]: time="2025-09-12T23:07:22.378009273Z" level=info msg="StartContainer for \"ef6df85a9d5b553eb8973cbe7deeab9eac6982bc610c84d85acbeca9582b8e29\"" Sep 12 23:07:22.379779 containerd[1580]: time="2025-09-12T23:07:22.379642241Z" level=info msg="connecting to shim ef6df85a9d5b553eb8973cbe7deeab9eac6982bc610c84d85acbeca9582b8e29" address="unix:///run/containerd/s/f9c5d5246f4012bc22d304f16f1f1944b10bda6f0e387c73ee501762dbff851b" protocol=ttrpc version=3 Sep 12 23:07:22.417147 systemd[1]: Started cri-containerd-ef6df85a9d5b553eb8973cbe7deeab9eac6982bc610c84d85acbeca9582b8e29.scope - libcontainer container ef6df85a9d5b553eb8973cbe7deeab9eac6982bc610c84d85acbeca9582b8e29. 
Sep 12 23:07:22.474751 containerd[1580]: time="2025-09-12T23:07:22.474698654Z" level=info msg="StartContainer for \"ef6df85a9d5b553eb8973cbe7deeab9eac6982bc610c84d85acbeca9582b8e29\" returns successfully" Sep 12 23:07:22.933002 kubelet[2777]: I0912 23:07:22.932916 2777 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 23:07:22.933646 kubelet[2777]: I0912 23:07:22.932999 2777 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 23:07:23.272519 kubelet[2777]: I0912 23:07:23.272432 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2rn62" podStartSLOduration=32.49658939 podStartE2EDuration="58.272402803s" podCreationTimestamp="2025-09-12 23:06:25 +0000 UTC" firstStartedPulling="2025-09-12 23:06:56.490555631 +0000 UTC m=+50.758586084" lastFinishedPulling="2025-09-12 23:07:22.266369033 +0000 UTC m=+76.534399497" observedRunningTime="2025-09-12 23:07:23.272220576 +0000 UTC m=+77.540251059" watchObservedRunningTime="2025-09-12 23:07:23.272402803 +0000 UTC m=+77.540433276" Sep 12 23:07:27.244444 systemd[1]: Started sshd@15-10.0.0.126:22-10.0.0.1:47746.service - OpenSSH per-connection server daemon (10.0.0.1:47746). Sep 12 23:07:27.331191 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 47746 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:07:27.332889 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:07:27.342357 systemd-logind[1515]: New session 16 of user core. Sep 12 23:07:27.347073 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 12 23:07:27.627478 sshd[5425]: Connection closed by 10.0.0.1 port 47746 Sep 12 23:07:27.628592 sshd-session[5422]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:27.636100 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit. Sep 12 23:07:27.636346 systemd[1]: sshd@15-10.0.0.126:22-10.0.0.1:47746.service: Deactivated successfully. Sep 12 23:07:27.639207 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 23:07:27.641703 systemd-logind[1515]: Removed session 16. Sep 12 23:07:29.837120 kubelet[2777]: E0912 23:07:29.837058 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:31.624625 kubelet[2777]: I0912 23:07:31.624560 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:07:32.640992 systemd[1]: Started sshd@16-10.0.0.126:22-10.0.0.1:43334.service - OpenSSH per-connection server daemon (10.0.0.1:43334). Sep 12 23:07:32.797815 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 43334 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:07:32.799368 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:07:32.804418 systemd-logind[1515]: New session 17 of user core. Sep 12 23:07:32.807986 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 23:07:32.999274 sshd[5449]: Connection closed by 10.0.0.1 port 43334 Sep 12 23:07:32.999767 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:33.003989 systemd[1]: sshd@16-10.0.0.126:22-10.0.0.1:43334.service: Deactivated successfully. Sep 12 23:07:33.006411 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 23:07:33.009511 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit. Sep 12 23:07:33.010479 systemd-logind[1515]: Removed session 17. 
Sep 12 23:07:37.836974 kubelet[2777]: E0912 23:07:37.836908 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:38.011994 systemd[1]: Started sshd@17-10.0.0.126:22-10.0.0.1:43346.service - OpenSSH per-connection server daemon (10.0.0.1:43346). Sep 12 23:07:38.077577 sshd[5468]: Accepted publickey for core from 10.0.0.1 port 43346 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:07:38.079541 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:07:38.084769 systemd-logind[1515]: New session 18 of user core. Sep 12 23:07:38.091992 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 23:07:38.230110 sshd[5471]: Connection closed by 10.0.0.1 port 43346 Sep 12 23:07:38.230480 sshd-session[5468]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:38.235180 systemd[1]: sshd@17-10.0.0.126:22-10.0.0.1:43346.service: Deactivated successfully. Sep 12 23:07:38.237295 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 23:07:38.238174 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit. Sep 12 23:07:38.239703 systemd-logind[1515]: Removed session 18. 
Sep 12 23:07:40.427666 containerd[1580]: time="2025-09-12T23:07:40.427612443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13\" id:\"01703072967ca0c46f11d5ea6a7ac4bbf119965943ae7a812ccc2b57281031c3\" pid:5501 exited_at:{seconds:1757718460 nanos:427020150}" Sep 12 23:07:40.448479 containerd[1580]: time="2025-09-12T23:07:40.448403517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a\" id:\"54c6cf5bf899efd97e7517d8b77e04b6e7da43175f392f01f993ba03ecc687b2\" pid:5513 exited_at:{seconds:1757718460 nanos:447886006}" Sep 12 23:07:41.838805 kubelet[2777]: E0912 23:07:41.837914 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:07:43.245733 systemd[1]: Started sshd@18-10.0.0.126:22-10.0.0.1:41644.service - OpenSSH per-connection server daemon (10.0.0.1:41644). Sep 12 23:07:43.402956 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 41644 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc Sep 12 23:07:43.405069 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:07:43.410378 systemd-logind[1515]: New session 19 of user core. Sep 12 23:07:43.419153 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 23:07:43.662647 sshd[5538]: Connection closed by 10.0.0.1 port 41644 Sep 12 23:07:43.663071 sshd-session[5533]: pam_unix(sshd:session): session closed for user core Sep 12 23:07:43.668998 systemd[1]: sshd@18-10.0.0.126:22-10.0.0.1:41644.service: Deactivated successfully. Sep 12 23:07:43.671253 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 23:07:43.672305 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit. 
Sep 12 23:07:43.673984 systemd-logind[1515]: Removed session 19.
Sep 12 23:07:44.899687 containerd[1580]: time="2025-09-12T23:07:44.899623845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707\" id:\"77ca50b525d89ad147631ab8174cf2f13910a4bd351c040ed053d6cd4de40e1e\" pid:5562 exited_at:{seconds:1757718464 nanos:899243815}"
Sep 12 23:07:48.692777 systemd[1]: Started sshd@19-10.0.0.126:22-10.0.0.1:41648.service - OpenSSH per-connection server daemon (10.0.0.1:41648).
Sep 12 23:07:48.815712 sshd[5576]: Accepted publickey for core from 10.0.0.1 port 41648 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:07:48.819153 sshd-session[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:07:48.829063 systemd-logind[1515]: New session 20 of user core.
Sep 12 23:07:48.836260 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 23:07:49.311568 sshd[5579]: Connection closed by 10.0.0.1 port 41648
Sep 12 23:07:49.312359 sshd-session[5576]: pam_unix(sshd:session): session closed for user core
Sep 12 23:07:49.324760 systemd[1]: sshd@19-10.0.0.126:22-10.0.0.1:41648.service: Deactivated successfully.
Sep 12 23:07:49.329911 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 23:07:49.335655 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit.
Sep 12 23:07:49.341289 systemd[1]: Started sshd@20-10.0.0.126:22-10.0.0.1:41652.service - OpenSSH per-connection server daemon (10.0.0.1:41652).
Sep 12 23:07:49.345447 systemd-logind[1515]: Removed session 20.
Sep 12 23:07:49.428883 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 41652 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:07:49.429440 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:07:49.434293 systemd-logind[1515]: New session 21 of user core.
Sep 12 23:07:49.443002 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 23:07:50.368955 sshd[5596]: Connection closed by 10.0.0.1 port 41652
Sep 12 23:07:50.370997 sshd-session[5593]: pam_unix(sshd:session): session closed for user core
Sep 12 23:07:50.383065 systemd[1]: Started sshd@21-10.0.0.126:22-10.0.0.1:36366.service - OpenSSH per-connection server daemon (10.0.0.1:36366).
Sep 12 23:07:50.383697 systemd[1]: sshd@20-10.0.0.126:22-10.0.0.1:41652.service: Deactivated successfully.
Sep 12 23:07:50.392355 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 23:07:50.393494 systemd-logind[1515]: Session 21 logged out. Waiting for processes to exit.
Sep 12 23:07:50.396439 systemd-logind[1515]: Removed session 21.
Sep 12 23:07:50.467506 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 36366 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:07:50.469796 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:07:50.477949 systemd-logind[1515]: New session 22 of user core.
Sep 12 23:07:50.487216 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 23:07:52.033983 containerd[1580]: time="2025-09-12T23:07:52.033930032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13\" id:\"b58f0b5fe7dbaf6d3b9402e474ccf74ce1893205ab8b4d3908d39b857c945268\" pid:5649 exited_at:{seconds:1757718472 nanos:33087388}"
Sep 12 23:07:52.688411 sshd[5611]: Connection closed by 10.0.0.1 port 36366
Sep 12 23:07:52.693184 sshd-session[5605]: pam_unix(sshd:session): session closed for user core
Sep 12 23:07:52.700395 systemd[1]: sshd@21-10.0.0.126:22-10.0.0.1:36366.service: Deactivated successfully.
Sep 12 23:07:52.703332 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 23:07:52.703756 systemd[1]: session-22.scope: Consumed 699ms CPU time, 72.9M memory peak.
Sep 12 23:07:52.705026 systemd-logind[1515]: Session 22 logged out. Waiting for processes to exit.
Sep 12 23:07:52.712148 systemd[1]: Started sshd@22-10.0.0.126:22-10.0.0.1:36382.service - OpenSSH per-connection server daemon (10.0.0.1:36382).
Sep 12 23:07:52.713647 systemd-logind[1515]: Removed session 22.
Sep 12 23:07:52.790934 sshd[5667]: Accepted publickey for core from 10.0.0.1 port 36382 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:07:52.791734 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:07:52.804001 systemd-logind[1515]: New session 23 of user core.
Sep 12 23:07:52.811144 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 23:07:53.609234 sshd[5670]: Connection closed by 10.0.0.1 port 36382
Sep 12 23:07:53.610265 sshd-session[5667]: pam_unix(sshd:session): session closed for user core
Sep 12 23:07:53.626703 systemd[1]: sshd@22-10.0.0.126:22-10.0.0.1:36382.service: Deactivated successfully.
Sep 12 23:07:53.631400 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 23:07:53.632996 systemd-logind[1515]: Session 23 logged out. Waiting for processes to exit.
Sep 12 23:07:53.638771 systemd[1]: Started sshd@23-10.0.0.126:22-10.0.0.1:36388.service - OpenSSH per-connection server daemon (10.0.0.1:36388).
Sep 12 23:07:53.641632 systemd-logind[1515]: Removed session 23.
Sep 12 23:07:53.754879 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 36388 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:07:53.757367 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:07:53.763569 systemd-logind[1515]: New session 24 of user core.
Sep 12 23:07:53.775200 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 23:07:53.921790 sshd[5685]: Connection closed by 10.0.0.1 port 36388
Sep 12 23:07:53.923147 sshd-session[5682]: pam_unix(sshd:session): session closed for user core
Sep 12 23:07:53.929201 systemd[1]: sshd@23-10.0.0.126:22-10.0.0.1:36388.service: Deactivated successfully.
Sep 12 23:07:53.931906 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 23:07:53.934028 systemd-logind[1515]: Session 24 logged out. Waiting for processes to exit.
Sep 12 23:07:53.935602 systemd-logind[1515]: Removed session 24.
Sep 12 23:07:58.937759 systemd[1]: Started sshd@24-10.0.0.126:22-10.0.0.1:36394.service - OpenSSH per-connection server daemon (10.0.0.1:36394).
Sep 12 23:07:59.002318 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 36394 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:07:59.004231 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:07:59.009795 systemd-logind[1515]: New session 25 of user core.
Sep 12 23:07:59.016049 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 23:07:59.137206 sshd[5701]: Connection closed by 10.0.0.1 port 36394
Sep 12 23:07:59.137573 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
Sep 12 23:07:59.141956 systemd-logind[1515]: Session 25 logged out. Waiting for processes to exit.
Sep 12 23:07:59.142068 systemd[1]: sshd@24-10.0.0.126:22-10.0.0.1:36394.service: Deactivated successfully.
Sep 12 23:07:59.144707 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 23:07:59.149178 systemd-logind[1515]: Removed session 25.
Sep 12 23:08:04.151429 systemd[1]: Started sshd@25-10.0.0.126:22-10.0.0.1:55406.service - OpenSSH per-connection server daemon (10.0.0.1:55406).
Sep 12 23:08:04.206493 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 55406 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:04.208559 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:04.215364 systemd-logind[1515]: New session 26 of user core.
Sep 12 23:08:04.230118 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 23:08:04.347079 sshd[5720]: Connection closed by 10.0.0.1 port 55406
Sep 12 23:08:04.347464 sshd-session[5717]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:04.352320 systemd[1]: sshd@25-10.0.0.126:22-10.0.0.1:55406.service: Deactivated successfully.
Sep 12 23:08:04.354246 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 23:08:04.355146 systemd-logind[1515]: Session 26 logged out. Waiting for processes to exit.
Sep 12 23:08:04.356373 systemd-logind[1515]: Removed session 26.
Sep 12 23:08:05.179278 containerd[1580]: time="2025-09-12T23:08:05.179202414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a\" id:\"51c9d8cab691a6cbfc94fd852e4cc57f5edf15962686d70ad78641e775e35f92\" pid:5744 exited_at:{seconds:1757718485 nanos:178048716}"
Sep 12 23:08:05.837507 kubelet[2777]: E0912 23:08:05.837407 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:06.836453 kubelet[2777]: E0912 23:08:06.836381 2777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:08:09.365655 systemd[1]: Started sshd@26-10.0.0.126:22-10.0.0.1:55412.service - OpenSSH per-connection server daemon (10.0.0.1:55412).
Sep 12 23:08:09.420388 sshd[5758]: Accepted publickey for core from 10.0.0.1 port 55412 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:09.422475 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:09.428289 systemd-logind[1515]: New session 27 of user core.
Sep 12 23:08:09.441044 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 23:08:09.574749 sshd[5761]: Connection closed by 10.0.0.1 port 55412
Sep 12 23:08:09.575137 sshd-session[5758]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:09.581421 systemd[1]: sshd@26-10.0.0.126:22-10.0.0.1:55412.service: Deactivated successfully.
Sep 12 23:08:09.584428 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 23:08:09.585525 systemd-logind[1515]: Session 27 logged out. Waiting for processes to exit.
Sep 12 23:08:09.587608 systemd-logind[1515]: Removed session 27.
Sep 12 23:08:10.336007 containerd[1580]: time="2025-09-12T23:08:10.335959156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"545e3032a6b4cd7f9d1e252a25925670d3a701f78680a80103e9dc1e2ed04c13\" id:\"840c814db68dbb004fe7a8c5d66a17dc20ec32189ca69cf5eabd37e95e4f712c\" pid:5792 exited_at:{seconds:1757718490 nanos:335467568}"
Sep 12 23:08:10.390969 containerd[1580]: time="2025-09-12T23:08:10.390909748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"97df162f2abb434de31c8c58ab65b9c4e2905f309c0347a592ca99b3b4594a5a\" id:\"ce481497ebe13a655b8a12f54ec0f1282a792b096d1e6a9009108b57fc15cfd1\" pid:5804 exited_at:{seconds:1757718490 nanos:390575467}"
Sep 12 23:08:14.593665 systemd[1]: Started sshd@27-10.0.0.126:22-10.0.0.1:50658.service - OpenSSH per-connection server daemon (10.0.0.1:50658).
Sep 12 23:08:14.686535 sshd[5829]: Accepted publickey for core from 10.0.0.1 port 50658 ssh2: RSA SHA256:AJXFPvfa6P0uoKREGLBBCMsQReZl0x2RPvoaq8XPvvc
Sep 12 23:08:14.689231 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:14.695190 systemd-logind[1515]: New session 28 of user core.
Sep 12 23:08:14.699044 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 23:08:14.851130 sshd[5834]: Connection closed by 10.0.0.1 port 50658
Sep 12 23:08:14.851698 sshd-session[5829]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:14.857459 systemd[1]: sshd@27-10.0.0.126:22-10.0.0.1:50658.service: Deactivated successfully.
Sep 12 23:08:14.860833 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 23:08:14.862546 systemd-logind[1515]: Session 28 logged out. Waiting for processes to exit.
Sep 12 23:08:14.864780 systemd-logind[1515]: Removed session 28.
Sep 12 23:08:14.931629 containerd[1580]: time="2025-09-12T23:08:14.931555687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9fdb489710cb025094ccc9603657a8d185cea8e0d9064dd98906e06f8aea707\" id:\"192fe4784b3cde902500bc1209218998632804341a9e1c6ae7f5734bc85617ed\" pid:5857 exited_at:{seconds:1757718494 nanos:930885131}"