Sep 9 05:29:15.028737 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025
Sep 9 05:29:15.028763 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:29:15.028775 kernel: BIOS-provided physical RAM map:
Sep 9 05:29:15.028782 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 9 05:29:15.028789 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 9 05:29:15.028795 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 9 05:29:15.028803 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 9 05:29:15.028810 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 9 05:29:15.028819 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 9 05:29:15.028826 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 9 05:29:15.028833 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 9 05:29:15.028842 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 9 05:29:15.028848 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 9 05:29:15.028855 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 9 05:29:15.028864 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 9 05:29:15.028871 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 9 05:29:15.028883 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 05:29:15.028890 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 05:29:15.028898 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 05:29:15.028905 kernel: NX (Execute Disable) protection: active
Sep 9 05:29:15.028912 kernel: APIC: Static calls initialized
Sep 9 05:29:15.028919 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Sep 9 05:29:15.028927 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Sep 9 05:29:15.028934 kernel: extended physical RAM map:
Sep 9 05:29:15.028941 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 9 05:29:15.028949 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 9 05:29:15.028956 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 9 05:29:15.028966 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 9 05:29:15.028973 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Sep 9 05:29:15.028980 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Sep 9 05:29:15.028987 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Sep 9 05:29:15.028995 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Sep 9 05:29:15.029002 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Sep 9 05:29:15.029009 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 9 05:29:15.029016 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 9 05:29:15.029033 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 9 05:29:15.029040 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 9 05:29:15.029048 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 9 05:29:15.029057 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 9 05:29:15.029065 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 9 05:29:15.029076 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 9 05:29:15.029083 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 05:29:15.029091 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 05:29:15.029099 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 05:29:15.029108 kernel: efi: EFI v2.7 by EDK II
Sep 9 05:29:15.029116 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 9 05:29:15.029124 kernel: random: crng init done
Sep 9 05:29:15.029131 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 9 05:29:15.029139 kernel: secureboot: Secure boot enabled
Sep 9 05:29:15.029146 kernel: SMBIOS 2.8 present.
Sep 9 05:29:15.029153 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 9 05:29:15.029161 kernel: DMI: Memory slots populated: 1/1
Sep 9 05:29:15.029168 kernel: Hypervisor detected: KVM
Sep 9 05:29:15.029176 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 05:29:15.029183 kernel: kvm-clock: using sched offset of 7179588385 cycles
Sep 9 05:29:15.029194 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 05:29:15.029202 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 05:29:15.029210 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 05:29:15.029217 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 05:29:15.029225 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 9 05:29:15.029233 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 9 05:29:15.029243 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 05:29:15.029253 kernel: Using GB pages for direct mapping
Sep 9 05:29:15.029263 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:29:15.029273 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 9 05:29:15.029281 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 05:29:15.029289 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:29:15.029297 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:29:15.029304 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 9 05:29:15.029312 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:29:15.029320 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:29:15.029328 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:29:15.029336 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:29:15.029345 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 05:29:15.029353 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 9 05:29:15.029361 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 9 05:29:15.029369 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 9 05:29:15.029376 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 9 05:29:15.029384 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 9 05:29:15.029392 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 9 05:29:15.029399 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 9 05:29:15.029409 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 9 05:29:15.029417 kernel: No NUMA configuration found
Sep 9 05:29:15.029425 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 9 05:29:15.029432 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 9 05:29:15.029440 kernel: Zone ranges:
Sep 9 05:29:15.029448 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 05:29:15.029456 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 9 05:29:15.029463 kernel: Normal empty
Sep 9 05:29:15.029471 kernel: Device empty
Sep 9 05:29:15.029479 kernel: Movable zone start for each node
Sep 9 05:29:15.029488 kernel: Early memory node ranges
Sep 9 05:29:15.029496 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 9 05:29:15.029504 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 9 05:29:15.029511 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 9 05:29:15.029519 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 9 05:29:15.029527 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 9 05:29:15.029535 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 9 05:29:15.029568 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 05:29:15.029576 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 9 05:29:15.029587 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 05:29:15.029595 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 05:29:15.029603 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 9 05:29:15.029611 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 9 05:29:15.029618 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 05:29:15.029626 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 05:29:15.029634 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 05:29:15.029641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 05:29:15.029650 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 05:29:15.029666 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 05:29:15.029676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 05:29:15.029686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 05:29:15.029695 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 05:29:15.029705 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 05:29:15.029715 kernel: TSC deadline timer available
Sep 9 05:29:15.029724 kernel: CPU topo: Max. logical packages: 1
Sep 9 05:29:15.029732 kernel: CPU topo: Max. logical dies: 1
Sep 9 05:29:15.029742 kernel: CPU topo: Max. dies per package: 1
Sep 9 05:29:15.029757 kernel: CPU topo: Max. threads per core: 1
Sep 9 05:29:15.029764 kernel: CPU topo: Num. cores per package: 4
Sep 9 05:29:15.029772 kernel: CPU topo: Num. threads per package: 4
Sep 9 05:29:15.029782 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 05:29:15.029793 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 05:29:15.029801 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 05:29:15.029809 kernel: kvm-guest: setup PV sched yield
Sep 9 05:29:15.029817 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 9 05:29:15.029827 kernel: Booting paravirtualized kernel on KVM
Sep 9 05:29:15.029835 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 05:29:15.029843 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 05:29:15.029851 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 05:29:15.029859 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 05:29:15.029867 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 05:29:15.029877 kernel: kvm-guest: PV spinlocks enabled
Sep 9 05:29:15.029887 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 05:29:15.029899 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:29:15.029913 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:29:15.029923 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 05:29:15.029933 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 05:29:15.029944 kernel: Fallback order for Node 0: 0
Sep 9 05:29:15.029953 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 9 05:29:15.029962 kernel: Policy zone: DMA32
Sep 9 05:29:15.029970 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:29:15.029978 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 05:29:15.029988 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 05:29:15.029996 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 05:29:15.030004 kernel: Dynamic Preempt: voluntary
Sep 9 05:29:15.030012 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:29:15.030029 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:29:15.030037 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 05:29:15.030046 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:29:15.030055 kernel: Rude variant of Tasks RCU enabled.
Sep 9 05:29:15.030062 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:29:15.030070 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:29:15.030081 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 05:29:15.030089 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:29:15.030097 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:29:15.030108 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:29:15.030116 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 05:29:15.030124 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:29:15.030132 kernel: Console: colour dummy device 80x25
Sep 9 05:29:15.030140 kernel: printk: legacy console [ttyS0] enabled
Sep 9 05:29:15.030153 kernel: ACPI: Core revision 20240827
Sep 9 05:29:15.030163 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 05:29:15.030174 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 05:29:15.030185 kernel: x2apic enabled
Sep 9 05:29:15.030195 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 05:29:15.030205 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 05:29:15.030216 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 05:29:15.030226 kernel: kvm-guest: setup PV IPIs
Sep 9 05:29:15.030235 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 05:29:15.030245 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 05:29:15.030259 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 05:29:15.030269 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 05:29:15.030279 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 05:29:15.030289 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 05:29:15.030303 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 05:29:15.030313 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 05:29:15.030323 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 05:29:15.030334 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 05:29:15.030348 kernel: active return thunk: retbleed_return_thunk
Sep 9 05:29:15.030357 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 05:29:15.030365 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 05:29:15.030373 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 05:29:15.030381 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 05:29:15.030390 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 05:29:15.030398 kernel: active return thunk: srso_return_thunk
Sep 9 05:29:15.030405 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 05:29:15.030413 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 05:29:15.030424 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 05:29:15.030432 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 05:29:15.030442 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 05:29:15.030453 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 05:29:15.030463 kernel: Freeing SMP alternatives memory: 32K
Sep 9 05:29:15.030474 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:29:15.030483 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:29:15.030491 kernel: landlock: Up and running.
Sep 9 05:29:15.030499 kernel: SELinux: Initializing.
Sep 9 05:29:15.030510 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:29:15.030518 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:29:15.030526 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 05:29:15.030534 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 05:29:15.030561 kernel: ... version: 0
Sep 9 05:29:15.030573 kernel: ... bit width: 48
Sep 9 05:29:15.030584 kernel: ... generic registers: 6
Sep 9 05:29:15.030594 kernel: ... value mask: 0000ffffffffffff
Sep 9 05:29:15.030604 kernel: ... max period: 00007fffffffffff
Sep 9 05:29:15.030618 kernel: ... fixed-purpose events: 0
Sep 9 05:29:15.030628 kernel: ... event mask: 000000000000003f
Sep 9 05:29:15.030638 kernel: signal: max sigframe size: 1776
Sep 9 05:29:15.030649 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:29:15.030659 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:29:15.030669 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:29:15.030679 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:29:15.030689 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 05:29:15.030699 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 05:29:15.030712 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 05:29:15.030722 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 05:29:15.030732 kernel: Memory: 2409220K/2552216K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 137068K reserved, 0K cma-reserved)
Sep 9 05:29:15.030740 kernel: devtmpfs: initialized
Sep 9 05:29:15.030748 kernel: x86/mm: Memory block size: 128MB
Sep 9 05:29:15.030756 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 9 05:29:15.030764 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 9 05:29:15.030772 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:29:15.030780 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 05:29:15.030791 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:29:15.030799 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:29:15.030807 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:29:15.030814 kernel: audit: type=2000 audit(1757395750.753:1): state=initialized audit_enabled=0 res=1
Sep 9 05:29:15.030823 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:29:15.030831 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 05:29:15.030838 kernel: cpuidle: using governor menu
Sep 9 05:29:15.030846 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:29:15.030856 kernel: dca service started, version 1.12.1
Sep 9 05:29:15.030864 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 9 05:29:15.030876 kernel: PCI: Using configuration type 1 for base access
Sep 9 05:29:15.030884 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 05:29:15.030892 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 05:29:15.030899 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 05:29:15.030907 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:29:15.030915 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:29:15.030923 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:29:15.030934 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:29:15.030942 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:29:15.030949 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 05:29:15.030957 kernel: ACPI: Interpreter enabled
Sep 9 05:29:15.030965 kernel: ACPI: PM: (supports S0 S5)
Sep 9 05:29:15.030973 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 05:29:15.030981 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 05:29:15.030989 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 05:29:15.030997 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 05:29:15.031005 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 05:29:15.031444 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:29:15.031634 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 05:29:15.031803 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 05:29:15.031819 kernel: PCI host bridge to bus 0000:00
Sep 9 05:29:15.031979 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 05:29:15.032111 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 05:29:15.032262 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 05:29:15.032390 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 9 05:29:15.032528 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 9 05:29:15.032719 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 9 05:29:15.032891 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 05:29:15.033159 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:29:15.033315 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 05:29:15.033439 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 9 05:29:15.033594 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 9 05:29:15.033740 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 9 05:29:15.033865 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 05:29:15.034007 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 05:29:15.034148 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 9 05:29:15.034278 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 9 05:29:15.034399 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 9 05:29:15.034556 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 05:29:15.034725 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 9 05:29:15.034915 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 9 05:29:15.035059 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 9 05:29:15.035197 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 05:29:15.035332 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 9 05:29:15.035453 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 9 05:29:15.035620 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 9 05:29:15.035771 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 9 05:29:15.035987 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 05:29:15.036152 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 05:29:15.036346 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 05:29:15.036496 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 9 05:29:15.036716 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 9 05:29:15.036861 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 05:29:15.036984 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 9 05:29:15.036995 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 05:29:15.037004 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 05:29:15.037012 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 05:29:15.037036 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 05:29:15.037045 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 05:29:15.037053 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 05:29:15.037061 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 05:29:15.037069 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 05:29:15.037077 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 05:29:15.037085 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 05:29:15.037093 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 05:29:15.037101 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 05:29:15.037112 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 05:29:15.037120 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 05:29:15.037128 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 05:29:15.037136 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 05:29:15.037145 kernel: iommu: Default domain type: Translated
Sep 9 05:29:15.037155 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 05:29:15.037167 kernel: efivars: Registered efivars operations
Sep 9 05:29:15.037178 kernel: PCI: Using ACPI for IRQ routing
Sep 9 05:29:15.037189 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 05:29:15.037204 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 9 05:29:15.037215 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff]
Sep 9 05:29:15.037226 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff]
Sep 9 05:29:15.037236 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 9 05:29:15.037246 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 9 05:29:15.037408 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 05:29:15.037556 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 05:29:15.037680 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 05:29:15.037696 kernel: vgaarb: loaded
Sep 9 05:29:15.037704 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 05:29:15.037712 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 05:29:15.037720 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 05:29:15.037729 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:29:15.037737 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:29:15.037745 kernel: pnp: PnP ACPI init
Sep 9 05:29:15.037891 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 9 05:29:15.037903 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 05:29:15.037915 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 05:29:15.037923 kernel: NET: Registered PF_INET protocol family
Sep 9 05:29:15.037932 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 05:29:15.037940 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 05:29:15.037953 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:29:15.037961 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 05:29:15.037969 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 05:29:15.037977 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 05:29:15.037989 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:29:15.037997 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:29:15.038005 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:29:15.038013 kernel: NET: Registered PF_XDP protocol family
Sep 9 05:29:15.038153 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 9 05:29:15.038299 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 9 05:29:15.038454 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 05:29:15.038622 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 05:29:15.038743 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 05:29:15.038879 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 9 05:29:15.039010 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 9 05:29:15.039149 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 9 05:29:15.039162 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:29:15.039172 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 05:29:15.039181 kernel: Initialise system trusted keyrings
Sep 9 05:29:15.039191 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 05:29:15.039200 kernel: Key type asymmetric registered
Sep 9 05:29:15.039215 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:29:15.039242 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 05:29:15.039254 kernel: io scheduler mq-deadline registered
Sep 9 05:29:15.039264 kernel: io scheduler kyber registered
Sep 9 05:29:15.039274 kernel: io scheduler bfq registered
Sep 9 05:29:15.039284 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 05:29:15.039294 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 05:29:15.039304 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 05:29:15.039314 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 05:29:15.039326 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:29:15.039336 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 05:29:15.039346 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 05:29:15.039356 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 05:29:15.039365 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 05:29:15.039515 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 05:29:15.039664 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 05:29:15.039788 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T05:29:14 UTC (1757395754)
Sep 9 05:29:15.039919 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 9 05:29:15.039931 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 05:29:15.039940 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 05:29:15.039949 kernel: efifb: probing for efifb
Sep 9 05:29:15.039958 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 9 05:29:15.039966 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 9 05:29:15.039975 kernel: efifb: scrolling: redraw
Sep 9 05:29:15.039983 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 9 05:29:15.039992 kernel: Console: switching to colour frame buffer device 160x50
Sep 9 05:29:15.040004 kernel: fb0: EFI VGA frame buffer device
Sep 9 05:29:15.040015 kernel: pstore: Using crash dump compression: deflate
Sep 9 05:29:15.040037 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 9 05:29:15.040045 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:29:15.040054 kernel: Segment Routing with IPv6
Sep 9 05:29:15.040063 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:29:15.040074 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:29:15.040083 kernel: Key type dns_resolver registered
Sep 9 05:29:15.040091 kernel: IPI shorthand broadcast: enabled
Sep 9 05:29:15.040100 kernel: sched_clock: Marking stable (3922007549, 150385640)->(4101503087, -29109898)
Sep 9 05:29:15.040108 kernel: registered taskstats version 1
Sep 9 05:29:15.040117 kernel: Loading compiled-in X.509 certificates
Sep 9 05:29:15.040126 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 884b9ad6a330f59ae6e6488b20a5491e41ff24a3'
Sep 9 05:29:15.040134 kernel: Demotion targets for Node 0: null
Sep 9 05:29:15.040143 kernel: Key type .fscrypt registered
Sep 9 05:29:15.040154 kernel: Key type fscrypt-provisioning registered
Sep 9 05:29:15.040162 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 05:29:15.040173 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:29:15.040182 kernel: ima: No architecture policies found
Sep 9 05:29:15.040190 kernel: clk: Disabling unused clocks
Sep 9 05:29:15.040198 kernel: Warning: unable to open an initial console.
Sep 9 05:29:15.040207 kernel: Freeing unused kernel image (initmem) memory: 54076K
Sep 9 05:29:15.040216 kernel: Write protecting the kernel read-only data: 24576k
Sep 9 05:29:15.040227 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 9 05:29:15.040236 kernel: Run /init as init process
Sep 9 05:29:15.040244 kernel: with arguments:
Sep 9 05:29:15.040253 kernel: /init
Sep 9 05:29:15.040261 kernel: with environment:
Sep 9 05:29:15.040270 kernel: HOME=/
Sep 9 05:29:15.040278 kernel: TERM=linux
Sep 9 05:29:15.040286 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 05:29:15.040299 systemd[1]: Successfully made /usr/ read-only.
Sep 9 05:29:15.040317 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:29:15.040328 systemd[1]: Detected virtualization kvm.
Sep 9 05:29:15.040338 systemd[1]: Detected architecture x86-64.
Sep 9 05:29:15.040346 systemd[1]: Running in initrd.
Sep 9 05:29:15.040355 systemd[1]: No hostname configured, using default hostname.
Sep 9 05:29:15.040364 systemd[1]: Hostname set to <localhost>.
Sep 9 05:29:15.040373 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:29:15.040384 systemd[1]: Queued start job for default target initrd.target.
Sep 9 05:29:15.040393 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:29:15.040402 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:29:15.040412 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 05:29:15.040421 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:29:15.040430 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 05:29:15.040440 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 05:29:15.040452 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 05:29:15.040461 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 05:29:15.040470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:29:15.040479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:29:15.040489 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:29:15.040497 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:29:15.040506 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:29:15.040515 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:29:15.040528 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:29:15.040550 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:29:15.040560 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 05:29:15.040569 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 05:29:15.040578 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:29:15.040588 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:29:15.040597 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:29:15.040606 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:29:15.040615 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 05:29:15.040627 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:29:15.040636 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 05:29:15.040645 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 05:29:15.040655 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 05:29:15.040664 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:29:15.040673 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:29:15.040682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:29:15.040691 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 05:29:15.040703 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:29:15.040713 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 05:29:15.040722 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 05:29:15.040767 systemd-journald[219]: Collecting audit messages is disabled.
Sep 9 05:29:15.040793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:29:15.040805 systemd-journald[219]: Journal started
Sep 9 05:29:15.040829 systemd-journald[219]: Runtime Journal (/run/log/journal/1d7f97fa6fb44edeb93ba4f077ac7f47) is 6M, max 48.2M, 42.2M free.
Sep 9 05:29:15.033152 systemd-modules-load[222]: Inserted module 'overlay'
Sep 9 05:29:15.043612 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:29:15.047045 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 05:29:15.053972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 05:29:15.057949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:29:15.061890 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:29:15.064699 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 05:29:15.066592 kernel: Bridge firewalling registered
Sep 9 05:29:15.066609 systemd-modules-load[222]: Inserted module 'br_netfilter'
Sep 9 05:29:15.073719 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:29:15.075262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:29:15.088193 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 05:29:15.088968 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:29:15.094128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:29:15.095046 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:29:15.098638 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:29:15.105717 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:29:15.116565 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 05:29:15.153124 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:29:15.165717 systemd-resolved[260]: Positive Trust Anchors:
Sep 9 05:29:15.165738 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:29:15.165779 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:29:15.169322 systemd-resolved[260]: Defaulting to hostname 'linux'.
Sep 9 05:29:15.170839 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:29:15.177258 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:29:15.317622 kernel: SCSI subsystem initialized
Sep 9 05:29:15.328638 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 05:29:15.345953 kernel: iscsi: registered transport (tcp)
Sep 9 05:29:15.378631 kernel: iscsi: registered transport (qla4xxx)
Sep 9 05:29:15.378722 kernel: QLogic iSCSI HBA Driver
Sep 9 05:29:15.410919 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:29:15.441981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:29:15.446423 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:29:15.525161 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:29:15.543053 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 05:29:15.610619 kernel: raid6: avx2x4 gen() 18897 MB/s
Sep 9 05:29:15.644596 kernel: raid6: avx2x2 gen() 26575 MB/s
Sep 9 05:29:15.677654 kernel: raid6: avx2x1 gen() 24769 MB/s
Sep 9 05:29:15.677741 kernel: raid6: using algorithm avx2x2 gen() 26575 MB/s
Sep 9 05:29:15.695963 kernel: raid6: .... xor() 12547 MB/s, rmw enabled
Sep 9 05:29:15.695993 kernel: raid6: using avx2x2 recovery algorithm
Sep 9 05:29:15.717586 kernel: xor: automatically using best checksumming function avx
Sep 9 05:29:15.903584 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 05:29:15.913926 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:29:15.941902 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:29:15.978857 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 9 05:29:15.986123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:29:16.009710 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 05:29:16.042589 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Sep 9 05:29:16.082406 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:29:16.085924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:29:16.183953 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:29:16.188688 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 05:29:16.232572 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 9 05:29:16.234737 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 05:29:16.242328 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 05:29:16.242362 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 05:29:16.242378 kernel: GPT:9289727 != 19775487
Sep 9 05:29:16.244738 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 05:29:16.244803 kernel: GPT:9289727 != 19775487
Sep 9 05:29:16.244819 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 05:29:16.244841 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:29:16.261710 kernel: AES CTR mode by8 optimization enabled
Sep 9 05:29:16.280573 kernel: libata version 3.00 loaded.
Sep 9 05:29:16.290734 kernel: ahci 0000:00:1f.2: version 3.0
Sep 9 05:29:16.291030 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 9 05:29:16.297427 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:29:16.315629 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 9 05:29:16.315668 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 9 05:29:16.315927 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 9 05:29:16.297802 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:29:16.318933 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 9 05:29:16.320939 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:29:16.327371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:29:16.340039 kernel: scsi host0: ahci
Sep 9 05:29:16.340311 kernel: scsi host1: ahci
Sep 9 05:29:16.340517 kernel: scsi host2: ahci
Sep 9 05:29:16.333189 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:29:16.344217 kernel: scsi host3: ahci
Sep 9 05:29:16.344409 kernel: scsi host4: ahci
Sep 9 05:29:16.344611 kernel: scsi host5: ahci
Sep 9 05:29:16.346086 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 1
Sep 9 05:29:16.346110 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 1
Sep 9 05:29:16.347930 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 1
Sep 9 05:29:16.347958 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 1
Sep 9 05:29:16.350559 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 1
Sep 9 05:29:16.350588 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 1
Sep 9 05:29:16.384563 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 05:29:16.419360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:29:16.437197 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 05:29:16.452209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 05:29:16.462036 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 05:29:16.488439 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 05:29:16.493570 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 05:29:16.658107 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 9 05:29:16.658188 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 9 05:29:16.658215 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 9 05:29:16.658225 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 9 05:29:16.659577 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 9 05:29:16.660568 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 9 05:29:16.660593 kernel: ata3.00: LPM support broken, forcing max_power
Sep 9 05:29:16.661765 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 9 05:29:16.661786 kernel: ata3.00: applying bridge limits
Sep 9 05:29:16.662967 kernel: ata3.00: LPM support broken, forcing max_power
Sep 9 05:29:16.663002 kernel: ata3.00: configured for UDMA/100
Sep 9 05:29:16.684575 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 9 05:29:16.732584 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 9 05:29:16.732889 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 05:29:16.745468 disk-uuid[637]: Primary Header is updated.
Sep 9 05:29:16.745468 disk-uuid[637]: Secondary Entries is updated.
Sep 9 05:29:16.745468 disk-uuid[637]: Secondary Header is updated.
Sep 9 05:29:16.757121 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:29:16.759600 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 9 05:29:17.187480 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:29:17.189507 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:29:17.194076 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:29:17.195421 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:29:17.198050 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 05:29:17.235591 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:29:17.789575 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 05:29:17.789909 disk-uuid[638]: The operation has completed successfully.
Sep 9 05:29:17.819011 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 05:29:17.819171 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 05:29:17.866182 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 05:29:17.897836 sh[667]: Success
Sep 9 05:29:17.917831 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 05:29:17.917915 kernel: device-mapper: uevent: version 1.0.3
Sep 9 05:29:17.935418 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 05:29:17.976586 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 9 05:29:18.010410 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 05:29:18.013138 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 05:29:18.037120 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 05:29:18.062679 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (679)
Sep 9 05:29:18.062719 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56
Sep 9 05:29:18.062731 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:29:18.066580 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 05:29:18.066624 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 05:29:18.068756 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 05:29:18.070523 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:29:18.072117 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 05:29:18.073188 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 05:29:18.075275 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 05:29:18.133613 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710)
Sep 9 05:29:18.133699 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:29:18.136184 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 05:29:18.140601 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 05:29:18.140632 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 05:29:18.147568 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b
Sep 9 05:29:18.149849 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 05:29:18.151687 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 05:29:18.265234 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:29:18.270774 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:29:18.295709 ignition[775]: Ignition 2.22.0
Sep 9 05:29:18.295722 ignition[775]: Stage: fetch-offline
Sep 9 05:29:18.295769 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:29:18.295780 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:29:18.295905 ignition[775]: parsed url from cmdline: ""
Sep 9 05:29:18.295910 ignition[775]: no config URL provided
Sep 9 05:29:18.295917 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 05:29:18.295941 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Sep 9 05:29:18.295968 ignition[775]: op(1): [started] loading QEMU firmware config module
Sep 9 05:29:18.295973 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 05:29:18.305383 ignition[775]: op(1): [finished] loading QEMU firmware config module
Sep 9 05:29:18.338953 systemd-networkd[853]: lo: Link UP
Sep 9 05:29:18.338964 systemd-networkd[853]: lo: Gained carrier
Sep 9 05:29:18.340656 systemd-networkd[853]: Enumeration completed
Sep 9 05:29:18.340762 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:29:18.344090 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:29:18.344095 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:29:18.348920 systemd-networkd[853]: eth0: Link UP
Sep 9 05:29:18.349169 systemd-networkd[853]: eth0: Gained carrier
Sep 9 05:29:18.349182 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:29:18.350916 systemd[1]: Reached target network.target - Network.
Sep 9 05:29:18.361645 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 05:29:18.380804 ignition[775]: parsing config with SHA512: 167546cb4825d1524e19731db84e12eb98bf092dfa2c95a3cc84c6eded264ee9c1c7d4e02b018bafbe6684b9dcbe0343ba2a196a1f946e50ecf71a3d2522becd
Sep 9 05:29:18.384906 unknown[775]: fetched base config from "system"
Sep 9 05:29:18.384922 unknown[775]: fetched user config from "qemu"
Sep 9 05:29:18.385341 ignition[775]: fetch-offline: fetch-offline passed
Sep 9 05:29:18.385428 ignition[775]: Ignition finished successfully
Sep 9 05:29:18.390498 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:29:18.392651 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 05:29:18.393822 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 05:29:18.499798 ignition[861]: Ignition 2.22.0
Sep 9 05:29:18.499815 ignition[861]: Stage: kargs
Sep 9 05:29:18.499996 ignition[861]: no configs at "/usr/lib/ignition/base.d"
Sep 9 05:29:18.500007 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 05:29:18.504077 ignition[861]: kargs: kargs passed
Sep 9 05:29:18.504146 ignition[861]: Ignition finished successfully
Sep 9 05:29:18.509875 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 05:29:18.512666 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 05:29:18.559491 ignition[868]: Ignition 2.22.0 Sep 9 05:29:18.559506 ignition[868]: Stage: disks Sep 9 05:29:18.559788 ignition[868]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:29:18.559805 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:29:18.561038 ignition[868]: disks: disks passed Sep 9 05:29:18.561089 ignition[868]: Ignition finished successfully Sep 9 05:29:18.571076 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 05:29:18.572433 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 05:29:18.574479 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 05:29:18.574897 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:29:18.578159 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:29:18.580226 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:29:18.582014 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 05:29:18.625556 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 05:29:18.963727 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 05:29:18.968333 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 05:29:19.119590 kernel: EXT4-fs (vda9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none. Sep 9 05:29:19.121006 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 05:29:19.123383 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 05:29:19.127143 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:29:19.130111 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 05:29:19.132418 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 05:29:19.132480 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 05:29:19.135224 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:29:19.184170 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 05:29:19.188468 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 05:29:19.189581 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 9 05:29:19.192104 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:29:19.192134 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:29:19.195948 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:29:19.195992 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:29:19.197881 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:29:19.238921 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 05:29:19.246608 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Sep 9 05:29:19.250892 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 05:29:19.255431 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 05:29:19.384130 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
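The systemd-fsck-root.service summary above ("ROOT: clean, 15/553520 files, 52789/553472 blocks") is the ext4 checker's preen-mode report for the filesystem labelled ROOT, taken before it is mounted at /sysroot. A rough manual equivalent on a not-yet-mounted device:

    # Preen-mode check (apply only safe, automatic repairs), by label:
    e2fsck -p /dev/disk/by-label/ROOT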
Sep 9 05:29:19.387718 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 05:29:19.390314 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 05:29:19.415254 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 05:29:19.416709 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:29:19.428657 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 05:29:19.454301 ignition[1000]: INFO : Ignition 2.22.0 Sep 9 05:29:19.454301 ignition[1000]: INFO : Stage: mount Sep 9 05:29:19.456559 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:29:19.456559 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:29:19.456559 ignition[1000]: INFO : mount: mount passed Sep 9 05:29:19.456559 ignition[1000]: INFO : Ignition finished successfully Sep 9 05:29:19.458775 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 05:29:19.461079 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 05:29:20.122768 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:29:20.202314 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013) Sep 9 05:29:20.202371 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:29:20.202383 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:29:20.206639 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:29:20.206714 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:29:20.208274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:29:20.250287 ignition[1030]: INFO : Ignition 2.22.0 Sep 9 05:29:20.250287 ignition[1030]: INFO : Stage: files Sep 9 05:29:20.252295 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:29:20.252295 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:29:20.252295 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Sep 9 05:29:20.252295 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 05:29:20.252295 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 05:29:20.259109 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 05:29:20.259109 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 05:29:20.259109 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 05:29:20.259109 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 05:29:20.259109 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 9 05:29:20.255683 unknown[1030]: wrote ssh authorized keys file for user: core Sep 9 05:29:20.267673 systemd-networkd[853]: eth0: Gained IPv6LL Sep 9 05:29:20.294820 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 05:29:20.622984 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 05:29:20.624933 ignition[1030]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 05:29:20.624933 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 05:29:20.628842 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:29:20.628842 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:29:20.628842 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:29:20.628842 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:29:20.628842 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:29:20.628842 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:29:20.821949 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:29:20.824943 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:29:20.824943 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:29:20.902283 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:29:20.902283 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:29:20.907893 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 9 05:29:21.247139 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 05:29:22.484406 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:29:22.484406 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 05:29:22.489170 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:29:22.495343 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:29:22.495343 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 05:29:22.495343 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 9 05:29:22.500676 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 05:29:22.500676 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 05:29:22.500676 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 9 05:29:22.500676 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 05:29:22.538283 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 05:29:22.546637 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 05:29:22.548686 ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 05:29:22.548686 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 9 05:29:22.548686 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 05:29:22.548686 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:29:22.548686 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:29:22.548686 ignition[1030]: INFO : files: files passed Sep 9 05:29:22.548686 ignition[1030]: INFO : Ignition finished successfully Sep 9 05:29:22.555349 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 05:29:22.557210 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 05:29:22.562661 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 05:29:22.581688 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 05:29:22.581875 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 05:29:22.586960 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 05:29:22.592116 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:29:22.594139 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:29:22.596115 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:29:22.599660 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:29:22.603595 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 05:29:22.606431 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 05:29:22.702016 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 05:29:22.702165 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 05:29:22.704577 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 05:29:22.706806 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 05:29:22.708740 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 05:29:22.709764 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 05:29:22.758241 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Sep 9 05:29:22.762711 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 05:29:22.791010 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:29:22.793357 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:29:22.795731 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 05:29:22.797705 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 05:29:22.798758 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:29:22.801367 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 05:29:22.803535 systemd[1]: Stopped target basic.target - Basic System. Sep 9 05:29:22.805369 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 05:29:22.807636 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:29:22.809977 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 05:29:22.812195 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:29:22.814363 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 05:29:22.816442 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:29:22.818883 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 05:29:22.820975 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 05:29:22.823136 systemd[1]: Stopped target swap.target - Swaps. Sep 9 05:29:22.824858 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 05:29:22.825908 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:29:22.828270 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:29:22.830805 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:29:22.833307 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 05:29:22.834264 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:29:22.836990 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 05:29:22.838134 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 05:29:22.840486 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 05:29:22.841594 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:29:22.843995 systemd[1]: Stopped target paths.target - Path Units. Sep 9 05:29:22.845818 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 05:29:22.847639 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:29:22.850623 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 05:29:22.852423 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 05:29:22.852618 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 05:29:22.852724 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:29:22.855284 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 05:29:22.855366 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:29:22.856152 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 05:29:22.856274 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Sep 9 05:29:22.858085 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 05:29:22.858196 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 05:29:22.861024 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 05:29:22.862555 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 05:29:22.865755 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 05:29:22.865901 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:29:22.867966 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 05:29:22.868089 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:29:22.877391 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 05:29:22.877764 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 05:29:22.903686 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 05:29:22.919127 ignition[1086]: INFO : Ignition 2.22.0 Sep 9 05:29:22.919127 ignition[1086]: INFO : Stage: umount Sep 9 05:29:22.921399 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:29:22.921399 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:29:22.924052 ignition[1086]: INFO : umount: umount passed Sep 9 05:29:22.924052 ignition[1086]: INFO : Ignition finished successfully Sep 9 05:29:22.926716 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 05:29:22.926932 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 05:29:22.929277 systemd[1]: Stopped target network.target - Network. Sep 9 05:29:22.929997 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 05:29:22.930074 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 05:29:22.931776 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 05:29:22.931860 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 05:29:22.932137 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 05:29:22.932202 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 05:29:22.932459 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 05:29:22.932534 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 05:29:22.933144 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 05:29:22.941716 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 05:29:22.954913 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 05:29:22.955102 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 05:29:22.961146 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 05:29:22.961582 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 05:29:22.961801 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 05:29:22.966469 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 05:29:22.967836 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 05:29:22.968433 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 05:29:22.968480 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:29:22.973387 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Sep 9 05:29:22.973668 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 05:29:22.973732 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:29:22.974412 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:29:22.974536 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:29:22.982057 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 05:29:22.982112 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 05:29:22.983095 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 05:29:22.983151 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:29:22.987774 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:29:22.991365 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 05:29:22.991458 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:29:23.013484 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 05:29:23.013745 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:29:23.073731 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 05:29:23.073904 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 05:29:23.076327 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 05:29:23.076478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 05:29:23.079327 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 05:29:23.079399 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:29:23.081963 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 05:29:23.082054 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:29:23.087853 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 05:29:23.087975 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 05:29:23.091105 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 05:29:23.091194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:29:23.095812 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 05:29:23.099744 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 05:29:23.099825 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:29:23.102571 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 05:29:23.102651 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:29:23.107994 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 05:29:23.108051 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:29:23.132583 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 05:29:23.132686 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:29:23.134992 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:29:23.135061 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 05:29:23.140491 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 05:29:23.140617 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 9 05:29:23.140682 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 05:29:23.140745 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:29:23.161556 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 05:29:23.161724 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 05:29:23.519316 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 05:29:23.519479 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 05:29:23.567855 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 05:29:23.569406 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 05:29:23.569481 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 05:29:23.572521 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 05:29:23.593894 systemd[1]: Switching root. Sep 9 05:29:23.678519 systemd-journald[219]: Journal stopped Sep 9 05:29:25.507722 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Sep 9 05:29:25.507849 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 05:29:25.507870 kernel: SELinux: policy capability open_perms=1 Sep 9 05:29:25.507885 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 05:29:25.507899 kernel: SELinux: policy capability always_check_network=0 Sep 9 05:29:25.507914 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 05:29:25.507929 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 05:29:25.507954 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 05:29:25.507970 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 05:29:25.507984 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 05:29:25.508008 kernel: audit: type=1403 audit(1757395764.538:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 05:29:25.508025 systemd[1]: Successfully loaded SELinux policy in 77.179ms. Sep 9 05:29:25.508059 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.400ms. Sep 9 05:29:25.508077 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:29:25.508095 systemd[1]: Detected virtualization kvm. Sep 9 05:29:25.508111 systemd[1]: Detected architecture x86-64. Sep 9 05:29:25.508132 systemd[1]: Detected first boot. Sep 9 05:29:25.508148 systemd[1]: Initializing machine ID from VM UUID. Sep 9 05:29:25.508165 zram_generator::config[1132]: No configuration found. Sep 9 05:29:25.508183 kernel: Guest personality initialized and is inactive Sep 9 05:29:25.508201 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 05:29:25.508217 kernel: Initialized host personality Sep 9 05:29:25.508239 kernel: NET: Registered PF_VSOCK protocol family Sep 9 05:29:25.508255 systemd[1]: Populated /etc with preset unit settings. 
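Between "Switching root." and the SELinux policy load above, PID 1 serializes its state, stops the initrd's journald (the SIGTERM it reports receiving), and re-executes itself on the real root filesystem. A hedged sketch of what initrd-switch-root.service requests at that point (the exact invocation is internal to systemd and may differ):

    # Ask PID 1 to pivot from the initramfs onto /sysroot:
    systemctl --no-block switch-root /sysroot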
Sep 9 05:29:25.508273 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 05:29:25.508294 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 05:29:25.508311 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 05:29:25.508328 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 05:29:25.508345 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 05:29:25.508362 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 05:29:25.508378 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 05:29:25.508395 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 05:29:25.508412 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 05:29:25.508433 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 05:29:25.508449 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 05:29:25.508465 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 05:29:25.508483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:29:25.508500 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:29:25.508517 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 05:29:25.508534 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 05:29:25.508583 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 05:29:25.508609 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 05:29:25.508626 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 05:29:25.508643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:29:25.508659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:29:25.508677 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 05:29:25.508693 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 05:29:25.508710 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 05:29:25.508726 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 05:29:25.508753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:29:25.508774 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:29:25.508792 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:29:25.508808 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:29:25.508824 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 05:29:25.508840 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 05:29:25.508856 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 05:29:25.508871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:29:25.508888 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:29:25.508904 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 9 05:29:25.508923 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 05:29:25.508940 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 05:29:25.508957 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 05:29:25.508972 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 05:29:25.508989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:29:25.509005 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 05:29:25.509021 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 05:29:25.509046 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 05:29:25.509064 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 05:29:25.509089 systemd[1]: Reached target machines.target - Containers. Sep 9 05:29:25.509107 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 05:29:25.509123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:29:25.509140 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 05:29:25.509160 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 05:29:25.509177 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:29:25.509193 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:29:25.509209 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:29:25.509233 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 05:29:25.509249 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:29:25.509266 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 05:29:25.509283 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 05:29:25.509300 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 05:29:25.509316 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 05:29:25.509333 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 05:29:25.509350 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:29:25.509375 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:29:25.509392 kernel: loop: module loaded Sep 9 05:29:25.509409 kernel: fuse: init (API version 7.41) Sep 9 05:29:25.509609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:29:25.509650 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:29:25.509668 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 05:29:25.509685 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Sep 9 05:29:25.509702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:29:25.509729 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 05:29:25.509756 systemd[1]: Stopped verity-setup.service. Sep 9 05:29:25.509773 kernel: ACPI: bus type drm_connector registered Sep 9 05:29:25.509801 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:29:25.509818 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 05:29:25.509834 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 05:29:25.509850 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 05:29:25.509866 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 05:29:25.509883 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 05:29:25.509929 systemd-journald[1204]: Collecting audit messages is disabled. Sep 9 05:29:25.509967 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 05:29:25.509983 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 05:29:25.510007 systemd-journald[1204]: Journal started Sep 9 05:29:25.510036 systemd-journald[1204]: Runtime Journal (/run/log/journal/1d7f97fa6fb44edeb93ba4f077ac7f47) is 6M, max 48.2M, 42.2M free. Sep 9 05:29:25.213674 systemd[1]: Queued start job for default target multi-user.target. Sep 9 05:29:25.241512 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 05:29:25.242083 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 05:29:25.513784 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:29:25.514910 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:29:25.516647 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 05:29:25.516895 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 05:29:25.518511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:29:25.518768 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:29:25.520201 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:29:25.520430 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:29:25.521782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:29:25.522012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:29:25.523909 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 05:29:25.524234 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 05:29:25.525865 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:29:25.526158 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:29:25.527857 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:29:25.529461 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:29:25.531346 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 05:29:25.533058 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 05:29:25.548445 systemd[1]: Reached target network-pre.target - Preparation for Network. 
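The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services started and finished above are instances of systemd's single modprobe@.service template; each instance effectively expands to a one-shot module load of its instance name, roughly:

    # What an instance such as modprobe@dm_mod.service runs
    # (per systemd's stock template; flags: all, use-blacklist, quiet):
    modprobe -abq dm_mod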
Sep 9 05:29:25.551137 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 05:29:25.553748 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 05:29:25.554977 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 05:29:25.555015 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:29:25.557401 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 05:29:25.563926 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 05:29:25.565159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:29:25.569696 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 05:29:25.572917 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 05:29:25.574118 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:29:25.576069 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 05:29:25.577363 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:29:25.578790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:29:25.583692 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 05:29:25.588840 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:29:25.592229 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 05:29:25.595518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 05:29:25.602784 systemd-journald[1204]: Time spent on flushing to /var/log/journal/1d7f97fa6fb44edeb93ba4f077ac7f47 is 21.301ms for 1044 entries. Sep 9 05:29:25.602784 systemd-journald[1204]: System Journal (/var/log/journal/1d7f97fa6fb44edeb93ba4f077ac7f47) is 8M, max 195.6M, 187.6M free. Sep 9 05:29:25.633572 systemd-journald[1204]: Received client request to flush runtime journal. Sep 9 05:29:25.633622 kernel: loop0: detected capacity change from 0 to 128016 Sep 9 05:29:25.605805 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 05:29:25.608436 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 05:29:25.612312 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 05:29:25.616629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:29:25.635519 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 05:29:25.636755 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Sep 9 05:29:25.636773 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Sep 9 05:29:25.637669 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:29:25.643466 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:29:25.650879 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
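The journal-flush step above migrates the volatile runtime journal in /run/log/journal (6M, reported when journald started) into the persistent journal under /var/log/journal (8M, reported here). The service is essentially a wrapper around the corresponding journalctl verb:

    # Flush /run/log/journal into /var/log/journal:
    journalctl --flush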
Sep 9 05:29:25.676993 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 05:29:25.684575 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 05:29:25.708244 kernel: loop1: detected capacity change from 0 to 110984 Sep 9 05:29:25.709317 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 05:29:25.712314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:29:25.746072 kernel: loop2: detected capacity change from 0 to 224512 Sep 9 05:29:25.749251 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 9 05:29:25.749749 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Sep 9 05:29:25.756092 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:29:25.850587 kernel: loop3: detected capacity change from 0 to 128016 Sep 9 05:29:25.873746 kernel: loop4: detected capacity change from 0 to 110984 Sep 9 05:29:25.896601 kernel: loop5: detected capacity change from 0 to 224512 Sep 9 05:29:25.908939 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 05:29:25.910030 (sd-merge)[1276]: Merged extensions into '/usr'. Sep 9 05:29:25.916084 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 05:29:25.916102 systemd[1]: Reloading... Sep 9 05:29:26.079625 zram_generator::config[1302]: No configuration found. Sep 9 05:29:26.200565 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 05:29:26.613362 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 05:29:26.613970 systemd[1]: Reloading finished in 697 ms. Sep 9 05:29:26.641353 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 05:29:26.646314 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 05:29:26.669049 systemd[1]: Starting ensure-sysext.service... Sep 9 05:29:26.672363 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:29:26.690050 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)... Sep 9 05:29:26.690072 systemd[1]: Reloading... Sep 9 05:29:26.696003 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 05:29:26.696062 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 05:29:26.696486 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 05:29:26.696907 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 05:29:26.698126 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 05:29:26.698504 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Sep 9 05:29:26.698623 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Sep 9 05:29:26.703696 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:29:26.703718 systemd-tmpfiles[1340]: Skipping /boot Sep 9 05:29:26.715162 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 9 05:29:26.715184 systemd-tmpfiles[1340]: Skipping /boot Sep 9 05:29:26.772601 zram_generator::config[1367]: No configuration found. Sep 9 05:29:26.982494 systemd[1]: Reloading finished in 291 ms. Sep 9 05:29:27.005684 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 05:29:27.031904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:29:27.041911 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:29:27.045202 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 05:29:27.051585 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 05:29:27.065737 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:29:27.073531 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:29:27.078904 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 05:29:27.083487 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:29:27.083710 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:29:27.232356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:29:27.237974 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:29:27.243597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:29:27.245177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:29:27.245349 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:29:27.247944 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 05:29:27.249267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:29:27.252267 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 05:29:27.254914 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 05:29:27.257331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:29:27.257814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:29:27.259802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:29:27.260099 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:29:27.261928 augenrules[1435]: No rules Sep 9 05:29:27.262592 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:29:27.262891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:29:27.265149 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:29:27.265620 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:29:27.268236 systemd-udevd[1411]: Using default interface naming scheme 'v255'. Sep 9 05:29:27.279964 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
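The (sd-merge) lines further above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes.raw symlink written into /etc/extensions during the files stage is what made the kubernetes image eligible, and ensure-sysext.service then re-merges after the reload. The same mechanism can be driven by hand:

    # Show discovered extension images and whether they are merged:
    systemd-sysext list
    # Drop and re-apply the /usr and /opt overlays:
    systemd-sysext refresh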
Sep 9 05:29:27.287271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:29:27.289630 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:29:27.292520 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:29:27.301327 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:29:27.305843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:29:27.308734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:29:27.321837 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:29:27.323921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:29:27.324078 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:29:27.327130 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 05:29:27.328260 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 05:29:27.328359 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:29:27.329899 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:29:27.333097 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 05:29:27.335171 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:29:27.335405 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:29:27.337760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:29:27.338005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:29:27.339796 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:29:27.340039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:29:27.351700 systemd[1]: Finished ensure-sysext.service. Sep 9 05:29:27.356723 augenrules[1447]: /sbin/augenrules: No change Sep 9 05:29:27.358060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:29:27.358328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:29:27.366460 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 05:29:27.371787 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:29:27.373684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:29:27.373816 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:29:27.377875 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 05:29:27.380292 augenrules[1508]: No rules Sep 9 05:29:27.384606 systemd[1]: audit-rules.service: Deactivated successfully. 
Sep 9 05:29:27.386009 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:29:27.418414 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 05:29:27.862612 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 9 05:29:27.868653 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 05:29:27.870833 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 05:29:27.874192 systemd-networkd[1505]: lo: Link UP Sep 9 05:29:27.874207 systemd-networkd[1505]: lo: Gained carrier Sep 9 05:29:27.878419 systemd-networkd[1505]: Enumeration completed Sep 9 05:29:27.878913 systemd-resolved[1409]: Positive Trust Anchors: Sep 9 05:29:27.878937 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:29:27.878969 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:29:27.883353 systemd-networkd[1505]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:29:27.883847 kernel: ACPI: button: Power Button [PWRF] Sep 9 05:29:27.883897 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 05:29:27.883818 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 05:29:27.884012 systemd-networkd[1505]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:29:27.886350 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:29:27.886905 systemd-networkd[1505]: eth0: Link UP Sep 9 05:29:27.888704 systemd-networkd[1505]: eth0: Gained carrier Sep 9 05:29:27.888821 systemd-networkd[1505]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:29:27.889522 systemd-resolved[1409]: Defaulting to hostname 'linux'. Sep 9 05:29:27.891869 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 05:29:27.894837 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 05:29:27.902647 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 05:29:27.903010 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 05:29:27.903232 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 05:29:27.902909 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 05:29:27.904503 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:29:27.906085 systemd[1]: Reached target network.target - Network. Sep 9 05:29:27.907406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:29:27.909703 systemd[1]: Reached target sysinit.target - System Initialization. 
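eth0 above is matched by the lowest-priority catch-all unit /usr/lib/systemd/network/zz-default.network and configured by DHCP, which is where the 10.0.0.34/16 lease a few lines later comes from. A minimal sketch of what such a catch-all unit typically contains (the shipped Flatcar unit may differ in detail):

    [Match]
    Name=*

    [Network]
    DHCP=yes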
Sep 9 05:29:27.911189 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 05:29:27.912795 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 05:29:27.914641 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 05:29:27.916291 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 05:29:27.917781 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 05:29:27.919630 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 05:29:27.921188 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 05:29:27.921239 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:29:27.922390 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:29:27.924583 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 05:29:27.925919 systemd-networkd[1505]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 05:29:27.926924 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 05:29:27.932618 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 05:29:27.933846 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. Sep 9 05:29:27.934357 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 05:29:27.936346 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 05:29:27.938780 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 05:29:27.938884 systemd-timesyncd[1510]: Initial clock synchronization to Tue 2025-09-09 05:29:27.954066 UTC. Sep 9 05:29:27.944712 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 05:29:27.945244 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 05:29:27.947045 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 05:29:27.951432 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 05:29:27.957341 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:29:27.958660 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:29:27.959941 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:29:27.959994 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:29:27.962682 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 05:29:27.969078 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 05:29:27.973909 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 05:29:27.975796 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 05:29:27.984887 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 05:29:27.986193 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Sep 9 05:29:27.988826 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 05:29:27.992345 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 05:29:27.992750 jq[1550]: false Sep 9 05:29:28.002835 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 05:29:28.005754 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 05:29:28.010822 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 05:29:28.017942 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 05:29:28.020673 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 05:29:28.028100 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 05:29:28.028805 extend-filesystems[1551]: Found /dev/vda6 Sep 9 05:29:28.031886 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 05:29:28.035848 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing passwd entry cache Sep 9 05:29:28.030484 oslogin_cache_refresh[1552]: Refreshing passwd entry cache Sep 9 05:29:28.038828 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 05:29:28.041884 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 05:29:28.045868 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 05:29:28.046374 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 05:29:28.047923 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 05:29:28.051405 extend-filesystems[1551]: Found /dev/vda9 Sep 9 05:29:28.051280 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 05:29:28.053651 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting users, quitting Sep 9 05:29:28.053651 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 05:29:28.053651 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing group entry cache Sep 9 05:29:28.052961 oslogin_cache_refresh[1552]: Failure getting users, quitting Sep 9 05:29:28.052990 oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 05:29:28.053090 oslogin_cache_refresh[1552]: Refreshing group entry cache Sep 9 05:29:28.071374 extend-filesystems[1551]: Checking size of /dev/vda9 Sep 9 05:29:28.073926 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting groups, quitting Sep 9 05:29:28.073926 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 05:29:28.072720 oslogin_cache_refresh[1552]: Failure getting groups, quitting Sep 9 05:29:28.072735 oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 05:29:28.076391 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 05:29:28.078842 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 05:29:28.079240 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Sep 9 05:29:28.085167 jq[1564]: true Sep 9 05:29:28.136179 extend-filesystems[1551]: Resized partition /dev/vda9 Sep 9 05:29:28.141786 extend-filesystems[1584]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 05:29:28.144493 (ntainerd)[1581]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 05:29:28.146369 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 05:29:28.147281 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 05:29:28.153874 jq[1580]: true Sep 9 05:29:28.155339 update_engine[1562]: I20250909 05:29:28.155249 1562 main.cc:92] Flatcar Update Engine starting Sep 9 05:29:28.201075 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 05:29:28.251870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:29:28.274155 tar[1570]: linux-amd64/LICENSE Sep 9 05:29:28.274642 tar[1570]: linux-amd64/helm Sep 9 05:29:28.281202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:29:28.281480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:29:28.285897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:29:28.300425 dbus-daemon[1548]: [system] SELinux support is enabled Sep 9 05:29:28.347070 update_engine[1562]: I20250909 05:29:28.328984 1562 update_check_scheduler.cc:74] Next update check in 5m23s Sep 9 05:29:28.301762 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 05:29:28.306110 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 05:29:28.306134 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 05:29:28.307684 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 05:29:28.307711 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 05:29:28.323757 systemd[1]: Started update-engine.service - Update Engine. Sep 9 05:29:28.331649 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 05:29:28.357441 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 05:29:28.828834 kernel: kvm_amd: TSC scaling supported Sep 9 05:29:28.828939 kernel: kvm_amd: Nested Virtualization enabled Sep 9 05:29:28.828963 kernel: kvm_amd: Nested Paging enabled Sep 9 05:29:28.828980 kernel: kvm_amd: LBR virtualization supported Sep 9 05:29:28.829011 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 05:29:28.829046 kernel: kvm_amd: Virtual GIF supported Sep 9 05:29:28.829093 kernel: EDAC MC: Ver: 3.0.0 Sep 9 05:29:28.541785 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 05:29:28.798894 locksmithd[1614]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 05:29:28.818453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:29:28.831370 extend-filesystems[1584]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 05:29:28.831370 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 05:29:28.831370 extend-filesystems[1584]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Sep 9 05:29:28.852493 extend-filesystems[1551]: Resized filesystem in /dev/vda9 Sep 9 05:29:28.840108 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 05:29:28.857056 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 05:29:28.840557 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 05:29:28.840665 systemd-logind[1560]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 05:29:28.840695 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 05:29:28.841157 systemd-logind[1560]: New seat seat0. Sep 9 05:29:28.850208 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 05:29:28.863272 bash[1612]: Updated "/home/core/.ssh/authorized_keys" Sep 9 05:29:28.868735 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 05:29:28.878451 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 05:29:28.885444 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 05:29:28.889491 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:36136.service - OpenSSH per-connection server daemon (10.0.0.1:36136). Sep 9 05:29:28.893615 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 05:29:28.904369 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 05:29:28.909928 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 05:29:28.914384 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 05:29:28.996466 sshd[1646]: Access denied for user core by PAM account configuration [preauth] Sep 9 05:29:29.000527 systemd[1]: sshd@0-10.0.0.34:22-10.0.0.1:36136.service: Deactivated successfully. Sep 9 05:29:29.190090 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 05:29:29.191502 containerd[1581]: time="2025-09-09T05:29:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 05:29:29.192599 containerd[1581]: time="2025-09-09T05:29:29.192566209Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 05:29:29.199643 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 05:29:29.205976 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 05:29:29.207740 systemd[1]: Reached target getty.target - Login Prompts. 
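[Annotation] The extend-filesystems run above grows the root ext4 from 553,472 to 1,864,699 4 KiB blocks, i.e. from about 2.1 GiB to about 7.1 GiB (1,864,699 x 4,096 bytes = 7,637,807,104), and does it online while / stays mounted. A sketch of the core step, assuming root privileges and the /dev/vda9 device seen in the log; the real service additionally grows the partition and checks results:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Online-grow the mounted ext4 filesystem to fill its partition.
	// With no explicit size argument, resize2fs expands to the device
	// size, matching the "553472 to 1864699 blocks" resize in the log.
	out, err := exec.Command("resize2fs", "/dev/vda9").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		log.Fatal("resize2fs failed: ", err)
	}
}
```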
Sep 9 05:29:29.240050 containerd[1581]: time="2025-09-09T05:29:29.239972717Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="23.405µs" Sep 9 05:29:29.240262 containerd[1581]: time="2025-09-09T05:29:29.240222513Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 05:29:29.240357 containerd[1581]: time="2025-09-09T05:29:29.240337552Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 05:29:29.240756 containerd[1581]: time="2025-09-09T05:29:29.240731375Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 05:29:29.240859 containerd[1581]: time="2025-09-09T05:29:29.240839227Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 05:29:29.241000 containerd[1581]: time="2025-09-09T05:29:29.240979536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:29:29.241258 containerd[1581]: time="2025-09-09T05:29:29.241216892Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:29:29.241364 containerd[1581]: time="2025-09-09T05:29:29.241341614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:29:29.241985 containerd[1581]: time="2025-09-09T05:29:29.241950489Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:29:29.242065 containerd[1581]: time="2025-09-09T05:29:29.242046525Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:29:29.242169 containerd[1581]: time="2025-09-09T05:29:29.242138871Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:29:29.242298 containerd[1581]: time="2025-09-09T05:29:29.242276282Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 05:29:29.243060 containerd[1581]: time="2025-09-09T05:29:29.243031742Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 05:29:29.243568 containerd[1581]: time="2025-09-09T05:29:29.243520527Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:29:29.243746 containerd[1581]: time="2025-09-09T05:29:29.243709399Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:29:29.243850 containerd[1581]: time="2025-09-09T05:29:29.243830503Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 05:29:29.244284 containerd[1581]: time="2025-09-09T05:29:29.244256992Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 05:29:29.244863 containerd[1581]: 
time="2025-09-09T05:29:29.244835026Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 05:29:29.245146 containerd[1581]: time="2025-09-09T05:29:29.245113768Z" level=info msg="metadata content store policy set" policy=shared Sep 9 05:29:29.253086 containerd[1581]: time="2025-09-09T05:29:29.252976073Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 05:29:29.253086 containerd[1581]: time="2025-09-09T05:29:29.253040503Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 05:29:29.253086 containerd[1581]: time="2025-09-09T05:29:29.253062154Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 05:29:29.253086 containerd[1581]: time="2025-09-09T05:29:29.253084115Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 05:29:29.253219 containerd[1581]: time="2025-09-09T05:29:29.253100404Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 05:29:29.253219 containerd[1581]: time="2025-09-09T05:29:29.253130053Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 05:29:29.253219 containerd[1581]: time="2025-09-09T05:29:29.253157718Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 05:29:29.253219 containerd[1581]: time="2025-09-09T05:29:29.253190004Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 05:29:29.253219 containerd[1581]: time="2025-09-09T05:29:29.253212125Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 05:29:29.253370 containerd[1581]: time="2025-09-09T05:29:29.253226067Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 05:29:29.253370 containerd[1581]: time="2025-09-09T05:29:29.253248530Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 05:29:29.253370 containerd[1581]: time="2025-09-09T05:29:29.253271164Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 05:29:29.253552 containerd[1581]: time="2025-09-09T05:29:29.253445081Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 05:29:29.253552 containerd[1581]: time="2025-09-09T05:29:29.253483361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 05:29:29.253552 containerd[1581]: time="2025-09-09T05:29:29.253510795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 05:29:29.253552 containerd[1581]: time="2025-09-09T05:29:29.253531122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 05:29:29.253678 containerd[1581]: time="2025-09-09T05:29:29.253567888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 05:29:29.253678 containerd[1581]: time="2025-09-09T05:29:29.253584157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 05:29:29.253678 containerd[1581]: time="2025-09-09T05:29:29.253602239Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 05:29:29.253678 containerd[1581]: time="2025-09-09T05:29:29.253618487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 05:29:29.253678 containerd[1581]: time="2025-09-09T05:29:29.253633202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 05:29:29.253678 containerd[1581]: time="2025-09-09T05:29:29.253655343Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 05:29:29.253990 containerd[1581]: time="2025-09-09T05:29:29.253683719Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 05:29:29.253990 containerd[1581]: time="2025-09-09T05:29:29.253859260Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 05:29:29.253990 containerd[1581]: time="2025-09-09T05:29:29.253890815Z" level=info msg="Start snapshots syncer" Sep 9 05:29:29.253990 containerd[1581]: time="2025-09-09T05:29:29.253946715Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 05:29:29.254497 containerd[1581]: time="2025-09-09T05:29:29.254423302Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 05:29:29.254827 containerd[1581]: time="2025-09-09T05:29:29.254520860Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 05:29:29.254827 containerd[1581]: time="2025-09-09T05:29:29.254679983Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 05:29:29.254953 containerd[1581]: 
time="2025-09-09T05:29:29.254830816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 05:29:29.254953 containerd[1581]: time="2025-09-09T05:29:29.254890877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 05:29:29.254953 containerd[1581]: time="2025-09-09T05:29:29.254917940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 05:29:29.254953 containerd[1581]: time="2025-09-09T05:29:29.254949735Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 05:29:29.255087 containerd[1581]: time="2025-09-09T05:29:29.254982281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 05:29:29.255087 containerd[1581]: time="2025-09-09T05:29:29.255000975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 05:29:29.255087 containerd[1581]: time="2025-09-09T05:29:29.255014667Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 05:29:29.255087 containerd[1581]: time="2025-09-09T05:29:29.255049860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 05:29:29.255228 containerd[1581]: time="2025-09-09T05:29:29.255098182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 05:29:29.255228 containerd[1581]: time="2025-09-09T05:29:29.255127030Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 05:29:29.255228 containerd[1581]: time="2025-09-09T05:29:29.255171143Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:29:29.255228 containerd[1581]: time="2025-09-09T05:29:29.255195751Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:29:29.255228 containerd[1581]: time="2025-09-09T05:29:29.255207458Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:29:29.255424 containerd[1581]: time="2025-09-09T05:29:29.255249016Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:29:29.255424 containerd[1581]: time="2025-09-09T05:29:29.255262988Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 05:29:29.255424 containerd[1581]: time="2025-09-09T05:29:29.255282774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 05:29:29.255424 containerd[1581]: time="2025-09-09T05:29:29.255309538Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 05:29:29.255424 containerd[1581]: time="2025-09-09T05:29:29.255361018Z" level=info msg="runtime interface created" Sep 9 05:29:29.255424 containerd[1581]: time="2025-09-09T05:29:29.255379160Z" level=info msg="created NRI interface" Sep 9 05:29:29.255424 containerd[1581]: time="2025-09-09T05:29:29.255394978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 05:29:29.255424 containerd[1581]: 
time="2025-09-09T05:29:29.255409883Z" level=info msg="Connect containerd service" Sep 9 05:29:29.255806 containerd[1581]: time="2025-09-09T05:29:29.255453454Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 05:29:29.256953 containerd[1581]: time="2025-09-09T05:29:29.256907671Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:29:29.290788 systemd-networkd[1505]: eth0: Gained IPv6LL Sep 9 05:29:29.295140 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 05:29:29.298338 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 05:29:29.303028 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 05:29:29.307181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:29:29.311830 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 05:29:29.395266 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 05:29:29.421628 tar[1570]: linux-amd64/README.md Sep 9 05:29:29.427864 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 05:29:29.428196 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 05:29:29.452531 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 05:29:29.480655 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 05:29:29.515776 containerd[1581]: time="2025-09-09T05:29:29.515721268Z" level=info msg="Start subscribing containerd event" Sep 9 05:29:29.515917 containerd[1581]: time="2025-09-09T05:29:29.515781930Z" level=info msg="Start recovering state" Sep 9 05:29:29.516262 containerd[1581]: time="2025-09-09T05:29:29.516237708Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 05:29:29.516489 containerd[1581]: time="2025-09-09T05:29:29.516465822Z" level=info msg="Start event monitor" Sep 9 05:29:29.516618 containerd[1581]: time="2025-09-09T05:29:29.516529922Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:29:29.516618 containerd[1581]: time="2025-09-09T05:29:29.516553077Z" level=info msg="Start streaming server" Sep 9 05:29:29.516618 containerd[1581]: time="2025-09-09T05:29:29.516565205Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:29:29.516618 containerd[1581]: time="2025-09-09T05:29:29.516577794Z" level=info msg="runtime interface starting up..." Sep 9 05:29:29.516618 containerd[1581]: time="2025-09-09T05:29:29.516586414Z" level=info msg="starting plugins..." Sep 9 05:29:29.516618 containerd[1581]: time="2025-09-09T05:29:29.516607915Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:29:29.518046 containerd[1581]: time="2025-09-09T05:29:29.518025043Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 05:29:29.518179 containerd[1581]: time="2025-09-09T05:29:29.518154197Z" level=info msg="containerd successfully booted in 0.327782s" Sep 9 05:29:29.518257 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 05:29:30.902329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:29:30.904915 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 9 05:29:30.906731 systemd[1]: Startup finished in 4.033s (kernel) + 9.824s (initrd) + 6.444s (userspace) = 20.302s. Sep 9 05:29:30.915134 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:29:31.666418 kubelet[1701]: E0909 05:29:31.666327 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:29:31.670507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:29:31.670801 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:29:31.671243 systemd[1]: kubelet.service: Consumed 1.839s CPU time, 266.1M memory peak. Sep 9 05:29:39.196424 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:52378.service - OpenSSH per-connection server daemon (10.0.0.1:52378). Sep 9 05:29:39.261400 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 52378 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:39.264870 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:39.274213 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:29:39.276422 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:29:39.284596 systemd-logind[1560]: New session 1 of user core. Sep 9 05:29:39.306374 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 05:29:39.314324 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 05:29:39.332982 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:29:39.337450 systemd-logind[1560]: New session c1 of user core. Sep 9 05:29:39.513947 systemd[1719]: Queued start job for default target default.target. Sep 9 05:29:39.528194 systemd[1719]: Created slice app.slice - User Application Slice. Sep 9 05:29:39.528226 systemd[1719]: Reached target paths.target - Paths. Sep 9 05:29:39.528275 systemd[1719]: Reached target timers.target - Timers. Sep 9 05:29:39.530051 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:29:39.547385 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:29:39.547708 systemd[1719]: Reached target sockets.target - Sockets. Sep 9 05:29:39.547779 systemd[1719]: Reached target basic.target - Basic System. Sep 9 05:29:39.547821 systemd[1719]: Reached target default.target - Main User Target. Sep 9 05:29:39.547863 systemd[1719]: Startup finished in 200ms. Sep 9 05:29:39.548009 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:29:39.549836 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 05:29:39.618246 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:52388.service - OpenSSH per-connection server daemon (10.0.0.1:52388). Sep 9 05:29:39.687305 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 52388 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:39.689425 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:39.695968 systemd-logind[1560]: New session 2 of user core. 
Sep 9 05:29:39.706969 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 05:29:40.305488 sshd[1733]: Connection closed by 10.0.0.1 port 52388 Sep 9 05:29:40.306256 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:40.316768 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:52388.service: Deactivated successfully. Sep 9 05:29:40.319377 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 05:29:40.320328 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Sep 9 05:29:40.323776 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:34128.service - OpenSSH per-connection server daemon (10.0.0.1:34128). Sep 9 05:29:40.324397 systemd-logind[1560]: Removed session 2. Sep 9 05:29:40.388450 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 34128 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:40.390083 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:40.394986 systemd-logind[1560]: New session 3 of user core. Sep 9 05:29:40.409687 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 05:29:40.459493 sshd[1742]: Connection closed by 10.0.0.1 port 34128 Sep 9 05:29:40.459918 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:40.467887 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:34128.service: Deactivated successfully. Sep 9 05:29:40.469825 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 05:29:40.470685 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Sep 9 05:29:40.473499 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:34140.service - OpenSSH per-connection server daemon (10.0.0.1:34140). Sep 9 05:29:40.474405 systemd-logind[1560]: Removed session 3. Sep 9 05:29:40.530212 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 34140 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:40.531645 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:40.536068 systemd-logind[1560]: New session 4 of user core. Sep 9 05:29:40.545672 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 05:29:40.599464 sshd[1751]: Connection closed by 10.0.0.1 port 34140 Sep 9 05:29:40.600023 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:40.609211 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:34140.service: Deactivated successfully. Sep 9 05:29:40.611201 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 05:29:40.612139 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Sep 9 05:29:40.615184 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:34152.service - OpenSSH per-connection server daemon (10.0.0.1:34152). Sep 9 05:29:40.615977 systemd-logind[1560]: Removed session 4. Sep 9 05:29:40.673404 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 34152 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:40.675591 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:40.681068 systemd-logind[1560]: New session 5 of user core. Sep 9 05:29:40.694959 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 05:29:40.757199 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 05:29:40.757623 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:29:40.781415 sudo[1761]: pam_unix(sudo:session): session closed for user root Sep 9 05:29:40.783528 sshd[1760]: Connection closed by 10.0.0.1 port 34152 Sep 9 05:29:40.784056 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:40.798099 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:34152.service: Deactivated successfully. Sep 9 05:29:40.800346 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 05:29:40.801229 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Sep 9 05:29:40.805574 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:34168.service - OpenSSH per-connection server daemon (10.0.0.1:34168). Sep 9 05:29:40.806274 systemd-logind[1560]: Removed session 5. Sep 9 05:29:40.864570 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 34168 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:40.866249 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:40.870679 systemd-logind[1560]: New session 6 of user core. Sep 9 05:29:40.880710 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 05:29:40.935868 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 05:29:40.936193 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:29:41.052489 sudo[1772]: pam_unix(sudo:session): session closed for user root Sep 9 05:29:41.060323 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 05:29:41.060706 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:29:41.072868 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:29:41.129707 augenrules[1794]: No rules Sep 9 05:29:41.132035 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:29:41.132336 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:29:41.133959 sudo[1771]: pam_unix(sudo:session): session closed for user root Sep 9 05:29:41.136009 sshd[1770]: Connection closed by 10.0.0.1 port 34168 Sep 9 05:29:41.136522 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:41.145048 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:34168.service: Deactivated successfully. Sep 9 05:29:41.147033 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 05:29:41.148072 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Sep 9 05:29:41.151023 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:34182.service - OpenSSH per-connection server daemon (10.0.0.1:34182). Sep 9 05:29:41.151675 systemd-logind[1560]: Removed session 6. Sep 9 05:29:41.224497 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 34182 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:41.226392 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:41.231607 systemd-logind[1560]: New session 7 of user core. Sep 9 05:29:41.245871 systemd[1]: Started session-7.scope - Session 7 of User core. 
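[Annotation] Each of the sessions above is the same pattern: a publickey handshake for user core, a short command (often via sudo), then disconnect. Roughly the client side of that, sketched with golang.org/x/crypto/ssh; the key path is hypothetical (the log only shows the key's SHA256 fingerprint) and InsecureIgnoreHostKey is a lab-sketch shortcut, not something a real client should use:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path for illustration only.
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "core",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Shortcut for a sketch; verify host keys in real use.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "10.0.0.34:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// One short command per session, like the audit-rules restart above.
	out, err := session.CombinedOutput("sudo systemctl restart audit-rules")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
```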
Sep 9 05:29:41.301348 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 05:29:41.301779 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:29:41.921512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 05:29:41.924764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:29:42.057904 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 05:29:42.083245 (dockerd)[1830]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 05:29:42.324216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:29:42.348361 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:29:42.429663 kubelet[1836]: E0909 05:29:42.429591 1836 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:29:42.436996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:29:42.437238 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:29:42.437725 systemd[1]: kubelet.service: Consumed 373ms CPU time, 110.4M memory peak. Sep 9 05:29:42.898529 dockerd[1830]: time="2025-09-09T05:29:42.898438595Z" level=info msg="Starting up" Sep 9 05:29:42.899346 dockerd[1830]: time="2025-09-09T05:29:42.899320763Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 05:29:42.920833 dockerd[1830]: time="2025-09-09T05:29:42.920759934Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 05:29:43.832958 dockerd[1830]: time="2025-09-09T05:29:43.832882567Z" level=info msg="Loading containers: start." Sep 9 05:29:43.849609 kernel: Initializing XFRM netlink socket Sep 9 05:29:44.517153 systemd-networkd[1505]: docker0: Link UP Sep 9 05:29:44.592435 dockerd[1830]: time="2025-09-09T05:29:44.592344878Z" level=info msg="Loading containers: done." Sep 9 05:29:44.609029 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1291134996-merged.mount: Deactivated successfully. 
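[Annotation] The repeated kubelet failures above are expected at this stage: systemd keeps restarting kubelet.service, but /var/lib/kubelet/config.yaml does not exist until kubeadm (or another provisioner) writes it. For orientation, an illustrative sketch of what such a file can look like; the exact contents this node eventually receives are not in the log, so every field below is an assumption apart from the paths the log itself mentions:

```yaml
# /var/lib/kubelet/config.yaml -- illustrative sketch only, not this node's real config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # consistent with SystemdCgroup=true in the runc options above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests   # the static pod path the kubelet logs later
```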
Sep 9 05:29:44.715683 dockerd[1830]: time="2025-09-09T05:29:44.715061832Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 05:29:44.715973 dockerd[1830]: time="2025-09-09T05:29:44.715948431Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 05:29:44.716137 dockerd[1830]: time="2025-09-09T05:29:44.716105482Z" level=info msg="Initializing buildkit" Sep 9 05:29:44.764647 dockerd[1830]: time="2025-09-09T05:29:44.764561739Z" level=info msg="Completed buildkit initialization" Sep 9 05:29:44.772227 dockerd[1830]: time="2025-09-09T05:29:44.771247948Z" level=info msg="Daemon has completed initialization" Sep 9 05:29:44.772227 dockerd[1830]: time="2025-09-09T05:29:44.771337312Z" level=info msg="API listen on /run/docker.sock" Sep 9 05:29:44.772654 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 05:29:45.998425 containerd[1581]: time="2025-09-09T05:29:45.998374621Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 05:29:47.695580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616222565.mount: Deactivated successfully. Sep 9 05:29:50.507845 containerd[1581]: time="2025-09-09T05:29:50.507759053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:29:50.522261 containerd[1581]: time="2025-09-09T05:29:50.522185237Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 9 05:29:50.545700 containerd[1581]: time="2025-09-09T05:29:50.545597187Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:29:50.555831 containerd[1581]: time="2025-09-09T05:29:50.555761808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:29:50.556878 containerd[1581]: time="2025-09-09T05:29:50.556809463Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 4.55838753s" Sep 9 05:29:50.556878 containerd[1581]: time="2025-09-09T05:29:50.556860912Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 9 05:29:50.557781 containerd[1581]: time="2025-09-09T05:29:50.557685496Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 05:29:52.511866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 05:29:52.514025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:29:52.801599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
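[Annotation] Once dockerd above logs "API listen on /run/docker.sock", the engine is usable over that Unix socket. A small sketch with the official Go client (github.com/docker/docker/client); FromEnv falls back to the default socket path when DOCKER_HOST is unset, which is assumed to resolve to the socket in the log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Negotiate the API version rather than pinning one, since the log
	// shows engine 28.0.4 but other hosts may differ.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}
```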
Sep 9 05:29:52.818899 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:29:53.777328 kubelet[2127]: E0909 05:29:53.777220 2127 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:29:53.781439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:29:53.781715 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:29:53.782191 systemd[1]: kubelet.service: Consumed 288ms CPU time, 111.3M memory peak. Sep 9 05:29:55.642866 containerd[1581]: time="2025-09-09T05:29:55.642766245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:29:55.668483 containerd[1581]: time="2025-09-09T05:29:55.668416177Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 9 05:29:55.684318 containerd[1581]: time="2025-09-09T05:29:55.684224950Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:29:55.704809 containerd[1581]: time="2025-09-09T05:29:55.704719580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:29:55.706169 containerd[1581]: time="2025-09-09T05:29:55.706093127Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 5.148373359s" Sep 9 05:29:55.706169 containerd[1581]: time="2025-09-09T05:29:55.706164845Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 9 05:29:55.707298 containerd[1581]: time="2025-09-09T05:29:55.707252378Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 05:29:59.482746 containerd[1581]: time="2025-09-09T05:29:59.482627948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:29:59.485524 containerd[1581]: time="2025-09-09T05:29:59.485479730Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 9 05:29:59.487058 containerd[1581]: time="2025-09-09T05:29:59.487011008Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:29:59.494024 containerd[1581]: time="2025-09-09T05:29:59.493948439Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:29:59.495231 containerd[1581]: time="2025-09-09T05:29:59.495174319Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 3.787869504s" Sep 9 05:29:59.495295 containerd[1581]: time="2025-09-09T05:29:59.495246848Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 9 05:29:59.496104 containerd[1581]: time="2025-09-09T05:29:59.496047735Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 05:30:01.115252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726682703.mount: Deactivated successfully. Sep 9 05:30:02.020500 containerd[1581]: time="2025-09-09T05:30:02.020383635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:02.021138 containerd[1581]: time="2025-09-09T05:30:02.021013349Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 9 05:30:02.022323 containerd[1581]: time="2025-09-09T05:30:02.022274311Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:02.024756 containerd[1581]: time="2025-09-09T05:30:02.024708765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:02.025274 containerd[1581]: time="2025-09-09T05:30:02.025208364Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.529109385s" Sep 9 05:30:02.025274 containerd[1581]: time="2025-09-09T05:30:02.025263537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 9 05:30:02.026184 containerd[1581]: time="2025-09-09T05:30:02.025833439Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 05:30:02.678925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474735596.mount: Deactivated successfully. 
Sep 9 05:30:03.760057 containerd[1581]: time="2025-09-09T05:30:03.759961485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:03.761558 containerd[1581]: time="2025-09-09T05:30:03.761467970Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 05:30:03.763308 containerd[1581]: time="2025-09-09T05:30:03.763259534Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:03.767469 containerd[1581]: time="2025-09-09T05:30:03.767399706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:03.768712 containerd[1581]: time="2025-09-09T05:30:03.768667886Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.742800297s" Sep 9 05:30:03.768712 containerd[1581]: time="2025-09-09T05:30:03.768701555Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 05:30:03.769510 containerd[1581]: time="2025-09-09T05:30:03.769337769Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 05:30:04.011961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 05:30:04.014821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:30:04.699510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:30:04.703678 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:30:04.763631 kubelet[2211]: E0909 05:30:04.763505 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:30:04.767357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:30:04.767629 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:30:04.768072 systemd[1]: kubelet.service: Consumed 271ms CPU time, 110.7M memory peak. Sep 9 05:30:05.053087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915997073.mount: Deactivated successfully. 
Sep 9 05:30:05.062967 containerd[1581]: time="2025-09-09T05:30:05.062859993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:30:05.064854 containerd[1581]: time="2025-09-09T05:30:05.064775692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 05:30:05.067860 containerd[1581]: time="2025-09-09T05:30:05.067658567Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:30:05.071693 containerd[1581]: time="2025-09-09T05:30:05.071625057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:30:05.072809 containerd[1581]: time="2025-09-09T05:30:05.072674963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.303291151s" Sep 9 05:30:05.072809 containerd[1581]: time="2025-09-09T05:30:05.072711998Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 05:30:05.073342 containerd[1581]: time="2025-09-09T05:30:05.073292984Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 05:30:06.522527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988006547.mount: Deactivated successfully. 
Sep 9 05:30:09.553727 containerd[1581]: time="2025-09-09T05:30:09.553578678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:09.554649 containerd[1581]: time="2025-09-09T05:30:09.554576952Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 9 05:30:09.556139 containerd[1581]: time="2025-09-09T05:30:09.556089577Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:09.559381 containerd[1581]: time="2025-09-09T05:30:09.559212685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:09.560554 containerd[1581]: time="2025-09-09T05:30:09.560510158Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.487178897s" Sep 9 05:30:09.560620 containerd[1581]: time="2025-09-09T05:30:09.560560580Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 9 05:30:12.438090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:30:12.438349 systemd[1]: kubelet.service: Consumed 271ms CPU time, 110.7M memory peak. Sep 9 05:30:12.440867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:30:12.473663 systemd[1]: Reload requested from client PID 2309 ('systemctl') (unit session-7.scope)... Sep 9 05:30:12.473683 systemd[1]: Reloading... Sep 9 05:30:12.588607 zram_generator::config[2349]: No configuration found. Sep 9 05:30:13.118089 systemd[1]: Reloading finished in 643 ms. Sep 9 05:30:13.177823 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 05:30:13.177995 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 05:30:13.178466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:30:13.178557 systemd[1]: kubelet.service: Consumed 178ms CPU time, 98.4M memory peak. Sep 9 05:30:13.181202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:30:13.430404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:30:13.451067 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:30:13.497408 kubelet[2400]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:30:13.497408 kubelet[2400]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:30:13.497408 kubelet[2400]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:30:13.497408 kubelet[2400]: I0909 05:30:13.496960 2400 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:30:13.611470 update_engine[1562]: I20250909 05:30:13.611336 1562 update_attempter.cc:509] Updating boot flags... Sep 9 05:30:14.458092 kubelet[2400]: I0909 05:30:14.457992 2400 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 05:30:14.458092 kubelet[2400]: I0909 05:30:14.458030 2400 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:30:14.458468 kubelet[2400]: I0909 05:30:14.458429 2400 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 05:30:14.546239 kubelet[2400]: E0909 05:30:14.546162 2400 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:30:14.547359 kubelet[2400]: I0909 05:30:14.547291 2400 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:30:14.938374 kubelet[2400]: I0909 05:30:14.938321 2400 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:30:14.950906 kubelet[2400]: I0909 05:30:14.949804 2400 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 05:30:14.955171 kubelet[2400]: I0909 05:30:14.955023 2400 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:30:14.955531 kubelet[2400]: I0909 05:30:14.955159 2400 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:30:14.955703 kubelet[2400]: I0909 05:30:14.955577 2400 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:30:14.955703 kubelet[2400]: I0909 05:30:14.955597 2400 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 05:30:14.955916 kubelet[2400]: I0909 05:30:14.955873 2400 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:30:14.962591 kubelet[2400]: I0909 05:30:14.962135 2400 kubelet.go:446] "Attempting to sync node with API server" Sep 9 05:30:14.967595 kubelet[2400]: I0909 05:30:14.966237 2400 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:30:14.967595 kubelet[2400]: I0909 05:30:14.966312 2400 kubelet.go:352] "Adding apiserver pod source" Sep 9 05:30:14.967595 kubelet[2400]: I0909 05:30:14.966336 2400 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:30:14.972569 kubelet[2400]: W0909 05:30:14.970289 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 9 05:30:14.972569 kubelet[2400]: E0909 05:30:14.970358 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:30:14.972569 kubelet[2400]: W0909 05:30:14.970639 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 9 05:30:14.972569 kubelet[2400]: E0909 05:30:14.970671 2400 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:30:14.975953 kubelet[2400]: I0909 05:30:14.975911 2400 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:30:14.976470 kubelet[2400]: I0909 05:30:14.976441 2400 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:30:14.979586 kubelet[2400]: W0909 05:30:14.978007 2400 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 05:30:14.982502 kubelet[2400]: I0909 05:30:14.982459 2400 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:30:14.982599 kubelet[2400]: I0909 05:30:14.982515 2400 server.go:1287] "Started kubelet" Sep 9 05:30:14.985453 kubelet[2400]: I0909 05:30:14.985418 2400 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:30:14.987566 kubelet[2400]: I0909 05:30:14.985738 2400 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:30:14.987566 kubelet[2400]: I0909 05:30:14.986021 2400 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:30:14.987566 kubelet[2400]: I0909 05:30:14.986390 2400 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:30:14.987566 kubelet[2400]: I0909 05:30:14.986651 2400 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:30:14.990326 kubelet[2400]: I0909 05:30:14.990281 2400 server.go:479] "Adding debug handlers to kubelet server" Sep 9 05:30:14.995747 kubelet[2400]: E0909 05:30:14.995707 2400 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:30:14.995747 kubelet[2400]: I0909 05:30:14.995753 2400 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:30:14.995947 kubelet[2400]: I0909 05:30:14.995938 2400 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:30:14.996241 kubelet[2400]: I0909 05:30:14.995993 2400 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:30:14.997139 kubelet[2400]: I0909 05:30:14.997115 2400 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:30:14.997240 kubelet[2400]: I0909 05:30:14.997217 2400 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:30:14.998125 kubelet[2400]: E0909 05:30:14.994356 2400 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863862ec1ca1e11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 
05:30:14.982491665 +0000 UTC m=+1.527163126,LastTimestamp:2025-09-09 05:30:14.982491665 +0000 UTC m=+1.527163126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 05:30:15.001502 kubelet[2400]: E0909 05:30:15.001431 2400 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:30:15.001831 kubelet[2400]: E0909 05:30:15.001791 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Sep 9 05:30:15.001956 kubelet[2400]: I0909 05:30:15.001934 2400 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:30:15.004707 kubelet[2400]: W0909 05:30:15.004651 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 9 05:30:15.004707 kubelet[2400]: E0909 05:30:15.004707 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:30:15.019673 kubelet[2400]: I0909 05:30:15.019611 2400 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:30:15.019673 kubelet[2400]: I0909 05:30:15.019638 2400 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:30:15.019673 kubelet[2400]: I0909 05:30:15.019658 2400 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:30:15.030878 kubelet[2400]: I0909 05:30:15.030825 2400 policy_none.go:49] "None policy: Start" Sep 9 05:30:15.030878 kubelet[2400]: I0909 05:30:15.030880 2400 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:30:15.031058 kubelet[2400]: I0909 05:30:15.030901 2400 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:30:15.038902 kubelet[2400]: I0909 05:30:15.038842 2400 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:30:15.040723 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 05:30:15.041338 kubelet[2400]: I0909 05:30:15.040976 2400 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 05:30:15.041338 kubelet[2400]: I0909 05:30:15.041153 2400 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 05:30:15.041338 kubelet[2400]: I0909 05:30:15.041190 2400 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
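Each "Failed to ensure lease exists, will retry" error carries the next retry interval, and the intervals double as this log proceeds (200ms here, then 400ms, 800ms, and 1.6s below) while the API server at 10.0.0.34:6443 remains unreachable. A toy Go sketch of that capped exponential backoff pattern; the 200ms base and the doubling match this log, but the 7s ceiling is an assumed value for illustration only.

package main

import (
    "fmt"
    "time"
)

func main() {
    interval := 200 * time.Millisecond  // first interval reported in the log
    const maxInterval = 7 * time.Second // assumed cap, not taken from the log
    for attempt := 1; attempt <= 6; attempt++ {
        fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
        interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as observed above
        if interval > maxInterval {
            interval = maxInterval
        }
    }
}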
Sep 9 05:30:15.041338 kubelet[2400]: I0909 05:30:15.041204 2400 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 05:30:15.041338 kubelet[2400]: E0909 05:30:15.041269 2400 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:30:15.045618 kubelet[2400]: W0909 05:30:15.045194 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 9 05:30:15.045618 kubelet[2400]: E0909 05:30:15.045259 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:30:15.051704 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 05:30:15.056591 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 05:30:15.066303 kubelet[2400]: I0909 05:30:15.066233 2400 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:30:15.066588 kubelet[2400]: I0909 05:30:15.066565 2400 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:30:15.066670 kubelet[2400]: I0909 05:30:15.066584 2400 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:30:15.066917 kubelet[2400]: I0909 05:30:15.066892 2400 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:30:15.068329 kubelet[2400]: E0909 05:30:15.068284 2400 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 05:30:15.068393 kubelet[2400]: E0909 05:30:15.068369 2400 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 05:30:15.151146 systemd[1]: Created slice kubepods-burstable-podf6be1e185195ef87ac36b9479e449641.slice - libcontainer container kubepods-burstable-podf6be1e185195ef87ac36b9479e449641.slice. Sep 9 05:30:15.167119 kubelet[2400]: E0909 05:30:15.167049 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:30:15.168625 kubelet[2400]: I0909 05:30:15.168592 2400 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:30:15.169249 kubelet[2400]: E0909 05:30:15.169202 2400 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 9 05:30:15.171863 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. 
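The kubepods-*.slice units systemd creates here encode each static pod's QoS class and UID in the cgroup hierarchy; dashes in a pod UID become underscores because "-" is systemd's slice-hierarchy separator (compare kubepods-besteffort-pod8b8ab47b_15ff_43a8_a148_051b52b2e40d.slice later in this log with its pod UID 8b8ab47b-15ff-43a8-a148-051b52b2e40d). A small illustrative Go helper for the two QoS classes visible in this log; this is a sketch, not the kubelet's real container-manager implementation.

package main

import (
    "fmt"
    "strings"
)

// podSlice maps a QoS class and pod UID to the systemd slice name used by the
// kubelet's systemd cgroup driver, for the burstable and besteffort classes
// seen in this log.
func podSlice(qosClass, uid string) string {
    // "-" separates hierarchy levels in systemd slice names, so UID dashes
    // are escaped to underscores.
    return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
    fmt.Println(podSlice("burstable", "f6be1e185195ef87ac36b9479e449641"))
    fmt.Println(podSlice("besteffort", "8b8ab47b-15ff-43a8-a148-051b52b2e40d"))
}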
Sep 9 05:30:15.187777 kubelet[2400]: E0909 05:30:15.187724 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:30:15.191628 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 9 05:30:15.196094 kubelet[2400]: E0909 05:30:15.196060 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:30:15.202906 kubelet[2400]: E0909 05:30:15.202868 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Sep 9 05:30:15.297489 kubelet[2400]: I0909 05:30:15.297388 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6be1e185195ef87ac36b9479e449641-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f6be1e185195ef87ac36b9479e449641\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:15.297489 kubelet[2400]: I0909 05:30:15.297469 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:15.297489 kubelet[2400]: I0909 05:30:15.297501 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:15.297810 kubelet[2400]: I0909 05:30:15.297526 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:15.297810 kubelet[2400]: I0909 05:30:15.297590 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:15.297810 kubelet[2400]: I0909 05:30:15.297617 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6be1e185195ef87ac36b9479e449641-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6be1e185195ef87ac36b9479e449641\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:15.297810 kubelet[2400]: I0909 05:30:15.297641 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f6be1e185195ef87ac36b9479e449641-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6be1e185195ef87ac36b9479e449641\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:15.297810 kubelet[2400]: I0909 05:30:15.297664 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:15.297934 kubelet[2400]: I0909 05:30:15.297689 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 05:30:15.371492 kubelet[2400]: I0909 05:30:15.371447 2400 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:30:15.372009 kubelet[2400]: E0909 05:30:15.371948 2400 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 9 05:30:15.468133 kubelet[2400]: E0909 05:30:15.467956 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:15.468939 containerd[1581]: time="2025-09-09T05:30:15.468824340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f6be1e185195ef87ac36b9479e449641,Namespace:kube-system,Attempt:0,}" Sep 9 05:30:15.489128 kubelet[2400]: E0909 05:30:15.489057 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:15.489858 containerd[1581]: time="2025-09-09T05:30:15.489804274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 05:30:15.497329 kubelet[2400]: E0909 05:30:15.497261 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:15.497995 containerd[1581]: time="2025-09-09T05:30:15.497936137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 05:30:15.565799 containerd[1581]: time="2025-09-09T05:30:15.565673528Z" level=info msg="connecting to shim 3f12090ed8d30b427084300b12fe1c0cb8e15389e8120d1f842db3770407b10f" address="unix:///run/containerd/s/5bc6cf790de02a31e92731b935dcb26f809754dd55bddcaec1cc42ca6abb6df8" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:30:15.571587 containerd[1581]: time="2025-09-09T05:30:15.571474437Z" level=info msg="connecting to shim 9f90c02152c5c4832805f8dde040b0d2285e4f95923bfa0a1fea08d97f7e930e" address="unix:///run/containerd/s/f138967e9b1c3dbe8d2d6b828493f4c81bf5a9ece4b98a9e4f75b56f61fd0517" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:30:15.583431 containerd[1581]: time="2025-09-09T05:30:15.583317514Z" level=info msg="connecting to shim 
05b5f20e9f6e8fdc0f2315528a77570f7d85ba565c1224d72dd957a494055ff5" address="unix:///run/containerd/s/6bcfa4eab8761d3816931103df4676b9f88063969715946bb5462f5119463d33" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:30:15.603801 kubelet[2400]: E0909 05:30:15.603744 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Sep 9 05:30:15.664866 systemd[1]: Started cri-containerd-05b5f20e9f6e8fdc0f2315528a77570f7d85ba565c1224d72dd957a494055ff5.scope - libcontainer container 05b5f20e9f6e8fdc0f2315528a77570f7d85ba565c1224d72dd957a494055ff5. Sep 9 05:30:15.671031 systemd[1]: Started cri-containerd-3f12090ed8d30b427084300b12fe1c0cb8e15389e8120d1f842db3770407b10f.scope - libcontainer container 3f12090ed8d30b427084300b12fe1c0cb8e15389e8120d1f842db3770407b10f. Sep 9 05:30:15.673368 systemd[1]: Started cri-containerd-9f90c02152c5c4832805f8dde040b0d2285e4f95923bfa0a1fea08d97f7e930e.scope - libcontainer container 9f90c02152c5c4832805f8dde040b0d2285e4f95923bfa0a1fea08d97f7e930e. Sep 9 05:30:15.739091 containerd[1581]: time="2025-09-09T05:30:15.738594234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"05b5f20e9f6e8fdc0f2315528a77570f7d85ba565c1224d72dd957a494055ff5\"" Sep 9 05:30:15.741174 kubelet[2400]: E0909 05:30:15.741123 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:15.745803 containerd[1581]: time="2025-09-09T05:30:15.745158588Z" level=info msg="CreateContainer within sandbox \"05b5f20e9f6e8fdc0f2315528a77570f7d85ba565c1224d72dd957a494055ff5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 05:30:15.764164 containerd[1581]: time="2025-09-09T05:30:15.764102132Z" level=info msg="Container dfa0d82dcefbc9621df2347ff4d38cf13bc9ff69885ff0b42bd925c0193dea97: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:30:15.764522 containerd[1581]: time="2025-09-09T05:30:15.764153834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f12090ed8d30b427084300b12fe1c0cb8e15389e8120d1f842db3770407b10f\"" Sep 9 05:30:15.764883 containerd[1581]: time="2025-09-09T05:30:15.764855336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f6be1e185195ef87ac36b9479e449641,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f90c02152c5c4832805f8dde040b0d2285e4f95923bfa0a1fea08d97f7e930e\"" Sep 9 05:30:15.765677 kubelet[2400]: E0909 05:30:15.765646 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:15.766288 kubelet[2400]: E0909 05:30:15.766249 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:15.769135 containerd[1581]: time="2025-09-09T05:30:15.769093895Z" level=info msg="CreateContainer within sandbox \"3f12090ed8d30b427084300b12fe1c0cb8e15389e8120d1f842db3770407b10f\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 05:30:15.769687 containerd[1581]: time="2025-09-09T05:30:15.769628827Z" level=info msg="CreateContainer within sandbox \"9f90c02152c5c4832805f8dde040b0d2285e4f95923bfa0a1fea08d97f7e930e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 05:30:15.774023 kubelet[2400]: I0909 05:30:15.773807 2400 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:30:15.774418 kubelet[2400]: E0909 05:30:15.774381 2400 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Sep 9 05:30:15.785994 containerd[1581]: time="2025-09-09T05:30:15.785928075Z" level=info msg="CreateContainer within sandbox \"05b5f20e9f6e8fdc0f2315528a77570f7d85ba565c1224d72dd957a494055ff5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dfa0d82dcefbc9621df2347ff4d38cf13bc9ff69885ff0b42bd925c0193dea97\"" Sep 9 05:30:15.786792 containerd[1581]: time="2025-09-09T05:30:15.786766047Z" level=info msg="StartContainer for \"dfa0d82dcefbc9621df2347ff4d38cf13bc9ff69885ff0b42bd925c0193dea97\"" Sep 9 05:30:15.788170 containerd[1581]: time="2025-09-09T05:30:15.788128470Z" level=info msg="connecting to shim dfa0d82dcefbc9621df2347ff4d38cf13bc9ff69885ff0b42bd925c0193dea97" address="unix:///run/containerd/s/6bcfa4eab8761d3816931103df4676b9f88063969715946bb5462f5119463d33" protocol=ttrpc version=3 Sep 9 05:30:15.792306 containerd[1581]: time="2025-09-09T05:30:15.792242082Z" level=info msg="Container 5962290a8d7ed8364c29503f7421a4d7c80f58cab27cfb96356ad503bc974ecd: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:30:15.796564 containerd[1581]: time="2025-09-09T05:30:15.795759901Z" level=info msg="Container 9238c6004cbb9e84223d4ba88b531051574b3e71e9e2ed0e5e3ebf59ae10516f: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:30:15.803181 containerd[1581]: time="2025-09-09T05:30:15.803131236Z" level=info msg="CreateContainer within sandbox \"3f12090ed8d30b427084300b12fe1c0cb8e15389e8120d1f842db3770407b10f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5962290a8d7ed8364c29503f7421a4d7c80f58cab27cfb96356ad503bc974ecd\"" Sep 9 05:30:15.804066 containerd[1581]: time="2025-09-09T05:30:15.804019057Z" level=info msg="StartContainer for \"5962290a8d7ed8364c29503f7421a4d7c80f58cab27cfb96356ad503bc974ecd\"" Sep 9 05:30:15.805148 containerd[1581]: time="2025-09-09T05:30:15.805120172Z" level=info msg="connecting to shim 5962290a8d7ed8364c29503f7421a4d7c80f58cab27cfb96356ad503bc974ecd" address="unix:///run/containerd/s/5bc6cf790de02a31e92731b935dcb26f809754dd55bddcaec1cc42ca6abb6df8" protocol=ttrpc version=3 Sep 9 05:30:15.808738 containerd[1581]: time="2025-09-09T05:30:15.808697910Z" level=info msg="CreateContainer within sandbox \"9f90c02152c5c4832805f8dde040b0d2285e4f95923bfa0a1fea08d97f7e930e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9238c6004cbb9e84223d4ba88b531051574b3e71e9e2ed0e5e3ebf59ae10516f\"" Sep 9 05:30:15.809608 containerd[1581]: time="2025-09-09T05:30:15.809562686Z" level=info msg="StartContainer for \"9238c6004cbb9e84223d4ba88b531051574b3e71e9e2ed0e5e3ebf59ae10516f\"" Sep 9 05:30:15.810518 containerd[1581]: time="2025-09-09T05:30:15.810476459Z" level=info msg="connecting to shim 9238c6004cbb9e84223d4ba88b531051574b3e71e9e2ed0e5e3ebf59ae10516f" 
address="unix:///run/containerd/s/f138967e9b1c3dbe8d2d6b828493f4c81bf5a9ece4b98a9e4f75b56f61fd0517" protocol=ttrpc version=3 Sep 9 05:30:15.815834 systemd[1]: Started cri-containerd-dfa0d82dcefbc9621df2347ff4d38cf13bc9ff69885ff0b42bd925c0193dea97.scope - libcontainer container dfa0d82dcefbc9621df2347ff4d38cf13bc9ff69885ff0b42bd925c0193dea97. Sep 9 05:30:15.838885 systemd[1]: Started cri-containerd-5962290a8d7ed8364c29503f7421a4d7c80f58cab27cfb96356ad503bc974ecd.scope - libcontainer container 5962290a8d7ed8364c29503f7421a4d7c80f58cab27cfb96356ad503bc974ecd. Sep 9 05:30:15.849863 systemd[1]: Started cri-containerd-9238c6004cbb9e84223d4ba88b531051574b3e71e9e2ed0e5e3ebf59ae10516f.scope - libcontainer container 9238c6004cbb9e84223d4ba88b531051574b3e71e9e2ed0e5e3ebf59ae10516f. Sep 9 05:30:16.295837 kubelet[2400]: W0909 05:30:16.289684 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 9 05:30:16.295837 kubelet[2400]: E0909 05:30:16.289758 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:30:16.295837 kubelet[2400]: W0909 05:30:16.290036 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 9 05:30:16.295837 kubelet[2400]: E0909 05:30:16.290083 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:30:16.295837 kubelet[2400]: W0909 05:30:16.290156 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Sep 9 05:30:16.295837 kubelet[2400]: E0909 05:30:16.290201 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:30:16.325563 containerd[1581]: time="2025-09-09T05:30:16.325436274Z" level=info msg="StartContainer for \"dfa0d82dcefbc9621df2347ff4d38cf13bc9ff69885ff0b42bd925c0193dea97\" returns successfully" Sep 9 05:30:16.351977 containerd[1581]: time="2025-09-09T05:30:16.351832694Z" level=info msg="StartContainer for \"5962290a8d7ed8364c29503f7421a4d7c80f58cab27cfb96356ad503bc974ecd\" returns successfully" Sep 9 05:30:16.405064 kubelet[2400]: E0909 05:30:16.404988 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": 
dial tcp 10.0.0.34:6443: connect: connection refused" interval="1.6s" Sep 9 05:30:16.577140 kubelet[2400]: I0909 05:30:16.576855 2400 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:30:16.609722 containerd[1581]: time="2025-09-09T05:30:16.609345601Z" level=info msg="StartContainer for \"9238c6004cbb9e84223d4ba88b531051574b3e71e9e2ed0e5e3ebf59ae10516f\" returns successfully" Sep 9 05:30:17.305443 kubelet[2400]: E0909 05:30:17.305310 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:30:17.305890 kubelet[2400]: E0909 05:30:17.305481 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:17.308346 kubelet[2400]: E0909 05:30:17.308304 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:30:17.308499 kubelet[2400]: E0909 05:30:17.308474 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:17.309533 kubelet[2400]: E0909 05:30:17.309512 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:30:17.309665 kubelet[2400]: E0909 05:30:17.309650 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:18.254076 kubelet[2400]: I0909 05:30:18.254000 2400 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 05:30:18.254076 kubelet[2400]: E0909 05:30:18.254057 2400 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 05:30:18.311784 kubelet[2400]: E0909 05:30:18.311749 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:30:18.312393 kubelet[2400]: E0909 05:30:18.311973 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:30:18.312393 kubelet[2400]: E0909 05:30:18.312231 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 05:30:18.312393 kubelet[2400]: E0909 05:30:18.312321 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:18.312393 kubelet[2400]: E0909 05:30:18.312322 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:18.312393 kubelet[2400]: E0909 05:30:18.312384 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:18.522939 kubelet[2400]: E0909 05:30:18.522853 2400 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:30:18.624052 kubelet[2400]: E0909 05:30:18.623977 2400 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:30:18.699628 kubelet[2400]: I0909 05:30:18.699566 2400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:30:18.708560 kubelet[2400]: E0909 05:30:18.708473 2400 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 9 05:30:18.708560 kubelet[2400]: I0909 05:30:18.708508 2400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:18.710094 kubelet[2400]: E0909 05:30:18.710057 2400 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:18.710094 kubelet[2400]: I0909 05:30:18.710077 2400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:18.711698 kubelet[2400]: E0909 05:30:18.711658 2400 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:19.291342 kubelet[2400]: I0909 05:30:19.291286 2400 apiserver.go:52] "Watching apiserver" Sep 9 05:30:19.296817 kubelet[2400]: I0909 05:30:19.296776 2400 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:30:19.312468 kubelet[2400]: I0909 05:30:19.312419 2400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:19.312468 kubelet[2400]: I0909 05:30:19.312465 2400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:30:19.321751 kubelet[2400]: E0909 05:30:19.321690 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:19.321948 kubelet[2400]: E0909 05:30:19.321920 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:20.314007 kubelet[2400]: E0909 05:30:20.313947 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:20.314526 kubelet[2400]: I0909 05:30:20.314161 2400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:30:20.560816 kubelet[2400]: E0909 05:30:20.560739 2400 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 05:30:20.561000 kubelet[2400]: E0909 05:30:20.560985 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:21.031478 systemd[1]: Reload requested from client PID 2694 ('systemctl') 
(unit session-7.scope)... Sep 9 05:30:21.031501 systemd[1]: Reloading... Sep 9 05:30:21.156629 zram_generator::config[2740]: No configuration found. Sep 9 05:30:21.315822 kubelet[2400]: E0909 05:30:21.315606 2400 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:21.420527 systemd[1]: Reloading finished in 388 ms. Sep 9 05:30:21.455501 kubelet[2400]: I0909 05:30:21.455443 2400 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:30:21.455683 kubelet[2400]: E0909 05:30:21.455415 2400 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.1863862ec1ca1e11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 05:30:14.982491665 +0000 UTC m=+1.527163126,LastTimestamp:2025-09-09 05:30:14.982491665 +0000 UTC m=+1.527163126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 05:30:21.455622 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:30:21.470448 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 05:30:21.471639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:30:21.471717 systemd[1]: kubelet.service: Consumed 1.190s CPU time, 131.5M memory peak. Sep 9 05:30:21.475475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:30:21.775698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:30:21.791239 (kubelet)[2782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:30:21.925062 kubelet[2782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:30:21.925062 kubelet[2782]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:30:21.925062 kubelet[2782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
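The restarted kubelet repeats the same deprecation warnings: --container-runtime-endpoint and --volume-plugin-dir are meant to move into the KubeletConfiguration file, while --pod-infra-container-image has no config-file counterpart (the sandbox image will come from the CRI instead). A hedged Go sketch that renders such a config file, reusing paths that appear elsewhere in this log (/etc/kubernetes/manifests and /opt/libexec/kubernetes/kubelet-plugins/volume/exec/); exact field availability depends on the kubelet version, and the containerd socket value is an assumption.

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
    "sigs.k8s.io/yaml"
)

func main() {
    cfg := kubeletv1beta1.KubeletConfiguration{
        TypeMeta: metav1.TypeMeta{
            APIVersion: "kubelet.config.k8s.io/v1beta1",
            Kind:       "KubeletConfiguration",
        },
        // Paths taken from this log; the socket path is an assumed default.
        StaticPodPath:            "/etc/kubernetes/manifests",
        ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
        VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }
    out, err := yaml.Marshal(cfg)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(out))
}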
Sep 9 05:30:21.925659 kubelet[2782]: I0909 05:30:21.925403 2782 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:30:21.934519 kubelet[2782]: I0909 05:30:21.934459 2782 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 05:30:21.934519 kubelet[2782]: I0909 05:30:21.934489 2782 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:30:21.934909 kubelet[2782]: I0909 05:30:21.934867 2782 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 05:30:21.936237 kubelet[2782]: I0909 05:30:21.936210 2782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 05:30:21.939185 kubelet[2782]: I0909 05:30:21.938841 2782 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:30:21.943335 kubelet[2782]: I0909 05:30:21.943294 2782 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:30:21.950268 kubelet[2782]: I0909 05:30:21.950204 2782 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 05:30:21.950564 kubelet[2782]: I0909 05:30:21.950486 2782 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:30:21.950746 kubelet[2782]: I0909 05:30:21.950528 2782 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:30:21.950838 kubelet[2782]: I0909 05:30:21.950757 2782 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:30:21.950838 kubelet[2782]: I0909 05:30:21.950767 2782 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 05:30:21.950838 kubelet[2782]: I0909 05:30:21.950824 2782 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:30:21.951026 kubelet[2782]: I0909 05:30:21.951005 
2782 kubelet.go:446] "Attempting to sync node with API server" Sep 9 05:30:21.951026 kubelet[2782]: I0909 05:30:21.951026 2782 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:30:21.951109 kubelet[2782]: I0909 05:30:21.951056 2782 kubelet.go:352] "Adding apiserver pod source" Sep 9 05:30:21.951109 kubelet[2782]: I0909 05:30:21.951070 2782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:30:21.952051 kubelet[2782]: I0909 05:30:21.952012 2782 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:30:21.954299 kubelet[2782]: I0909 05:30:21.952386 2782 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:30:21.954299 kubelet[2782]: I0909 05:30:21.953053 2782 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:30:21.954299 kubelet[2782]: I0909 05:30:21.953080 2782 server.go:1287] "Started kubelet" Sep 9 05:30:21.954749 kubelet[2782]: I0909 05:30:21.954648 2782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:30:21.955272 kubelet[2782]: I0909 05:30:21.955245 2782 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:30:21.955350 kubelet[2782]: I0909 05:30:21.955319 2782 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:30:21.955869 kubelet[2782]: I0909 05:30:21.955831 2782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:30:21.956682 kubelet[2782]: I0909 05:30:21.956652 2782 server.go:479] "Adding debug handlers to kubelet server" Sep 9 05:30:21.961379 kubelet[2782]: I0909 05:30:21.961329 2782 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:30:22.181640 kubelet[2782]: I0909 05:30:22.179988 2782 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:30:22.181640 kubelet[2782]: E0909 05:30:22.181191 2782 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:30:22.184798 kubelet[2782]: I0909 05:30:22.184752 2782 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:30:22.185182 kubelet[2782]: I0909 05:30:22.185119 2782 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:30:22.201701 kubelet[2782]: I0909 05:30:22.191962 2782 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:30:22.201883 kubelet[2782]: I0909 05:30:22.201779 2782 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:30:22.202951 kubelet[2782]: E0909 05:30:22.202523 2782 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:30:22.203088 kubelet[2782]: I0909 05:30:22.203066 2782 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:30:22.218400 kubelet[2782]: I0909 05:30:22.218316 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:30:22.246721 kubelet[2782]: I0909 05:30:22.243747 2782 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 05:30:22.246721 kubelet[2782]: I0909 05:30:22.243850 2782 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 05:30:22.246721 kubelet[2782]: I0909 05:30:22.243904 2782 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 05:30:22.246721 kubelet[2782]: I0909 05:30:22.243915 2782 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 05:30:22.246721 kubelet[2782]: E0909 05:30:22.243996 2782 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:30:22.317989 kubelet[2782]: I0909 05:30:22.317922 2782 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:30:22.317989 kubelet[2782]: I0909 05:30:22.317959 2782 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:30:22.317989 kubelet[2782]: I0909 05:30:22.317986 2782 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:30:22.318283 kubelet[2782]: I0909 05:30:22.318258 2782 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 05:30:22.318322 kubelet[2782]: I0909 05:30:22.318275 2782 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 05:30:22.318322 kubelet[2782]: I0909 05:30:22.318300 2782 policy_none.go:49] "None policy: Start" Sep 9 05:30:22.318322 kubelet[2782]: I0909 05:30:22.318313 2782 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:30:22.318437 kubelet[2782]: I0909 05:30:22.318328 2782 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:30:22.318518 kubelet[2782]: I0909 05:30:22.318489 2782 state_mem.go:75] "Updated machine memory state" Sep 9 05:30:22.327645 kubelet[2782]: I0909 05:30:22.326741 2782 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:30:22.327645 kubelet[2782]: I0909 05:30:22.327060 2782 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:30:22.327645 kubelet[2782]: I0909 05:30:22.327079 2782 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:30:22.329384 kubelet[2782]: I0909 05:30:22.329160 2782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:30:22.333986 kubelet[2782]: E0909 05:30:22.333309 2782 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 05:30:22.345461 kubelet[2782]: I0909 05:30:22.345407 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 05:30:22.346300 kubelet[2782]: I0909 05:30:22.345860 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:22.346300 kubelet[2782]: I0909 05:30:22.346158 2782 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:22.356178 kubelet[2782]: E0909 05:30:22.356118 2782 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:22.356370 kubelet[2782]: E0909 05:30:22.356239 2782 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 05:30:22.445863 kubelet[2782]: I0909 05:30:22.445699 2782 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 05:30:22.458999 kubelet[2782]: I0909 05:30:22.458935 2782 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 05:30:22.459226 kubelet[2782]: I0909 05:30:22.459097 2782 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 05:30:22.486684 kubelet[2782]: I0909 05:30:22.486578 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:22.486945 kubelet[2782]: I0909 05:30:22.486737 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:22.486945 kubelet[2782]: I0909 05:30:22.486818 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:22.486945 kubelet[2782]: I0909 05:30:22.486862 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6be1e185195ef87ac36b9479e449641-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6be1e185195ef87ac36b9479e449641\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:22.486945 kubelet[2782]: I0909 05:30:22.486891 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6be1e185195ef87ac36b9479e449641-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f6be1e185195ef87ac36b9479e449641\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:22.487131 kubelet[2782]: I0909 05:30:22.486933 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:22.487131 kubelet[2782]: I0909 05:30:22.487056 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:30:22.487131 kubelet[2782]: I0909 05:30:22.487075 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 05:30:22.487131 kubelet[2782]: I0909 05:30:22.487092 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6be1e185195ef87ac36b9479e449641-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f6be1e185195ef87ac36b9479e449641\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:30:22.696761 kubelet[2782]: E0909 05:30:22.696196 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:22.696761 kubelet[2782]: E0909 05:30:22.696229 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:22.697060 kubelet[2782]: E0909 05:30:22.696822 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:22.951775 kubelet[2782]: I0909 05:30:22.951594 2782 apiserver.go:52] "Watching apiserver" Sep 9 05:30:22.984984 kubelet[2782]: I0909 05:30:22.984894 2782 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:30:23.271642 kubelet[2782]: E0909 05:30:23.271569 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:23.272340 kubelet[2782]: E0909 05:30:23.271993 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:23.272402 kubelet[2782]: E0909 05:30:23.272382 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:23.310259 kubelet[2782]: I0909 05:30:23.310167 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.310136303 podStartE2EDuration="4.310136303s" podCreationTimestamp="2025-09-09 05:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:30:23.298151947 +0000 
UTC m=+1.416364036" watchObservedRunningTime="2025-09-09 05:30:23.310136303 +0000 UTC m=+1.428348392" Sep 9 05:30:23.323410 kubelet[2782]: I0909 05:30:23.323236 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.323206517 podStartE2EDuration="4.323206517s" podCreationTimestamp="2025-09-09 05:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:30:23.310493563 +0000 UTC m=+1.428705652" watchObservedRunningTime="2025-09-09 05:30:23.323206517 +0000 UTC m=+1.441418606" Sep 9 05:30:23.375007 kubelet[2782]: I0909 05:30:23.374846 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.374801929 podStartE2EDuration="1.374801929s" podCreationTimestamp="2025-09-09 05:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:30:23.323622442 +0000 UTC m=+1.441834531" watchObservedRunningTime="2025-09-09 05:30:23.374801929 +0000 UTC m=+1.493014018" Sep 9 05:30:24.273507 kubelet[2782]: E0909 05:30:24.273454 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:24.274129 kubelet[2782]: E0909 05:30:24.273587 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:25.370314 kubelet[2782]: E0909 05:30:25.370261 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:26.111619 kubelet[2782]: I0909 05:30:26.111525 2782 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 05:30:26.112046 containerd[1581]: time="2025-09-09T05:30:26.111989520Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 05:30:26.112488 kubelet[2782]: I0909 05:30:26.112249 2782 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 05:30:26.941199 systemd[1]: Created slice kubepods-besteffort-pod8b8ab47b_15ff_43a8_a148_051b52b2e40d.slice - libcontainer container kubepods-besteffort-pod8b8ab47b_15ff_43a8_a148_051b52b2e40d.slice. Sep 9 05:30:26.970179 systemd[1]: Created slice kubepods-besteffort-pod9b850bfd_c777_4bb8_a97b_191cd1a1aa87.slice - libcontainer container kubepods-besteffort-pod9b850bfd_c777_4bb8_a97b_191cd1a1aa87.slice. 
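The recurring dns.go:153 "Nameserver limits exceeded" errors are kubelet enforcing the glibc MAXNS cap: a pod's resolv.conf may carry at most three nameservers, so the host's longer list is truncated to "1.1.1.1 1.0.0.1 8.8.8.8" and the remainder dropped. A minimal sketch of that truncation under the same cap (the real logic lives in kubelet's DNS configurer; this standalone check is only illustrative):

```go
// resolv_cap.go - illustrative sketch of the nameserver cap behind the
// dns.go:153 messages. Assumption: kubelet keeps at most 3 nameservers
// (the glibc MAXNS limit) and logs the applied line when it truncates.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet enforces the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		kept := servers[:maxNameservers]
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}
```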
Sep 9 05:30:27.055098 kubelet[2782]: I0909 05:30:27.055038 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8ab47b-15ff-43a8-a148-051b52b2e40d-lib-modules\") pod \"kube-proxy-msdhn\" (UID: \"8b8ab47b-15ff-43a8-a148-051b52b2e40d\") " pod="kube-system/kube-proxy-msdhn" Sep 9 05:30:27.055098 kubelet[2782]: I0909 05:30:27.055080 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b8ab47b-15ff-43a8-a148-051b52b2e40d-kube-proxy\") pod \"kube-proxy-msdhn\" (UID: \"8b8ab47b-15ff-43a8-a148-051b52b2e40d\") " pod="kube-system/kube-proxy-msdhn" Sep 9 05:30:27.055098 kubelet[2782]: I0909 05:30:27.055103 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8pht\" (UniqueName: \"kubernetes.io/projected/9b850bfd-c777-4bb8-a97b-191cd1a1aa87-kube-api-access-l8pht\") pod \"tigera-operator-755d956888-zd6xb\" (UID: \"9b850bfd-c777-4bb8-a97b-191cd1a1aa87\") " pod="tigera-operator/tigera-operator-755d956888-zd6xb" Sep 9 05:30:27.055643 kubelet[2782]: I0909 05:30:27.055126 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8ab47b-15ff-43a8-a148-051b52b2e40d-xtables-lock\") pod \"kube-proxy-msdhn\" (UID: \"8b8ab47b-15ff-43a8-a148-051b52b2e40d\") " pod="kube-system/kube-proxy-msdhn" Sep 9 05:30:27.055643 kubelet[2782]: I0909 05:30:27.055148 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzrp\" (UniqueName: \"kubernetes.io/projected/8b8ab47b-15ff-43a8-a148-051b52b2e40d-kube-api-access-jmzrp\") pod \"kube-proxy-msdhn\" (UID: \"8b8ab47b-15ff-43a8-a148-051b52b2e40d\") " pod="kube-system/kube-proxy-msdhn" Sep 9 05:30:27.055643 kubelet[2782]: I0909 05:30:27.055182 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9b850bfd-c777-4bb8-a97b-191cd1a1aa87-var-lib-calico\") pod \"tigera-operator-755d956888-zd6xb\" (UID: \"9b850bfd-c777-4bb8-a97b-191cd1a1aa87\") " pod="tigera-operator/tigera-operator-755d956888-zd6xb" Sep 9 05:30:27.250019 kubelet[2782]: E0909 05:30:27.249880 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:27.250723 containerd[1581]: time="2025-09-09T05:30:27.250677485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-msdhn,Uid:8b8ab47b-15ff-43a8-a148-051b52b2e40d,Namespace:kube-system,Attempt:0,}" Sep 9 05:30:27.278072 containerd[1581]: time="2025-09-09T05:30:27.277738095Z" level=info msg="connecting to shim 663ddd9179aeecc6f2c5b18a62c791848fcf460b6158816b4c78419acd6ab5f5" address="unix:///run/containerd/s/662425537490d4a1feb0d69397a1d800dc271dfed2f4a100e1b0fb26066c96d3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:30:27.278387 containerd[1581]: time="2025-09-09T05:30:27.278355598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-zd6xb,Uid:9b850bfd-c777-4bb8-a97b-191cd1a1aa87,Namespace:tigera-operator,Attempt:0,}" Sep 9 05:30:27.306307 containerd[1581]: time="2025-09-09T05:30:27.306256638Z" level=info msg="connecting to shim 
858f99a6a26366ee4296f652715ad55f95a8010baa04ec5e13c38e24562c6965" address="unix:///run/containerd/s/0c3a24a50fd6a29cf7c6570b66d7e18da559db1b8718b14fa2a0e61e70f49da8" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:30:27.306755 systemd[1]: Started cri-containerd-663ddd9179aeecc6f2c5b18a62c791848fcf460b6158816b4c78419acd6ab5f5.scope - libcontainer container 663ddd9179aeecc6f2c5b18a62c791848fcf460b6158816b4c78419acd6ab5f5. Sep 9 05:30:27.338715 systemd[1]: Started cri-containerd-858f99a6a26366ee4296f652715ad55f95a8010baa04ec5e13c38e24562c6965.scope - libcontainer container 858f99a6a26366ee4296f652715ad55f95a8010baa04ec5e13c38e24562c6965. Sep 9 05:30:27.347395 containerd[1581]: time="2025-09-09T05:30:27.347349687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-msdhn,Uid:8b8ab47b-15ff-43a8-a148-051b52b2e40d,Namespace:kube-system,Attempt:0,} returns sandbox id \"663ddd9179aeecc6f2c5b18a62c791848fcf460b6158816b4c78419acd6ab5f5\"" Sep 9 05:30:27.348687 kubelet[2782]: E0909 05:30:27.348644 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:27.353039 containerd[1581]: time="2025-09-09T05:30:27.352979072Z" level=info msg="CreateContainer within sandbox \"663ddd9179aeecc6f2c5b18a62c791848fcf460b6158816b4c78419acd6ab5f5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 05:30:27.371742 containerd[1581]: time="2025-09-09T05:30:27.371673751Z" level=info msg="Container 0128144a9997c0aee387156af3ec7c50be5f09dfeda7477b6917ed7d0870f7ea: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:30:27.395872 containerd[1581]: time="2025-09-09T05:30:27.395750533Z" level=info msg="CreateContainer within sandbox \"663ddd9179aeecc6f2c5b18a62c791848fcf460b6158816b4c78419acd6ab5f5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0128144a9997c0aee387156af3ec7c50be5f09dfeda7477b6917ed7d0870f7ea\"" Sep 9 05:30:27.396634 containerd[1581]: time="2025-09-09T05:30:27.396586934Z" level=info msg="StartContainer for \"0128144a9997c0aee387156af3ec7c50be5f09dfeda7477b6917ed7d0870f7ea\"" Sep 9 05:30:27.398675 containerd[1581]: time="2025-09-09T05:30:27.398636610Z" level=info msg="connecting to shim 0128144a9997c0aee387156af3ec7c50be5f09dfeda7477b6917ed7d0870f7ea" address="unix:///run/containerd/s/662425537490d4a1feb0d69397a1d800dc271dfed2f4a100e1b0fb26066c96d3" protocol=ttrpc version=3 Sep 9 05:30:27.415744 containerd[1581]: time="2025-09-09T05:30:27.415679139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-zd6xb,Uid:9b850bfd-c777-4bb8-a97b-191cd1a1aa87,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"858f99a6a26366ee4296f652715ad55f95a8010baa04ec5e13c38e24562c6965\"" Sep 9 05:30:27.419777 containerd[1581]: time="2025-09-09T05:30:27.419733453Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 05:30:27.427755 systemd[1]: Started cri-containerd-0128144a9997c0aee387156af3ec7c50be5f09dfeda7477b6917ed7d0870f7ea.scope - libcontainer container 0128144a9997c0aee387156af3ec7c50be5f09dfeda7477b6917ed7d0870f7ea. 
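The containerd entries above trace the standard CRI sequence for kube-proxy-msdhn: RunPodSandbox returns a sandbox id, CreateContainer targets that sandbox, and StartContainer launches the process through a shim reached over the unix socket shown in the "connecting to shim" lines. A hedged sketch of the same sequence driven from a Go CRI client follows; the pod metadata is copied from the log, while the socket path and image tag are assumptions and the configs are trimmed to essentials:

```go
// cri_flow.go - sketch of RunPodSandbox -> CreateContainer -> StartContainer
// over the CRI gRPC API, mirroring the sequence in the log. Assumes
// containerd's CRI socket at its default path; error handling is minimal.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox config, trimmed to the metadata visible in the log lines.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-msdhn",
			Uid:       "8b8ab47b-15ff-43a8-a148-051b52b2e40d",
			Namespace: "kube-system",
		},
	}

	// 1. RunPodSandbox: returns the sandbox id (663ddd91... in the log).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Hypothetical tag; the log does not name the kube-proxy image.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer launches the process via the shim.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", c.ContainerId)
}
```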
Sep 9 05:30:27.573450 containerd[1581]: time="2025-09-09T05:30:27.573410568Z" level=info msg="StartContainer for \"0128144a9997c0aee387156af3ec7c50be5f09dfeda7477b6917ed7d0870f7ea\" returns successfully" Sep 9 05:30:28.282912 kubelet[2782]: E0909 05:30:28.282872 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:28.292351 kubelet[2782]: I0909 05:30:28.292247 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-msdhn" podStartSLOduration=2.292224676 podStartE2EDuration="2.292224676s" podCreationTimestamp="2025-09-09 05:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:30:28.291858172 +0000 UTC m=+6.410070261" watchObservedRunningTime="2025-09-09 05:30:28.292224676 +0000 UTC m=+6.410436765" Sep 9 05:30:28.590725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971610959.mount: Deactivated successfully. Sep 9 05:30:28.978768 containerd[1581]: time="2025-09-09T05:30:28.978624584Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:28.980880 containerd[1581]: time="2025-09-09T05:30:28.980840880Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 05:30:28.982441 containerd[1581]: time="2025-09-09T05:30:28.982401939Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:28.984807 containerd[1581]: time="2025-09-09T05:30:28.984751295Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:28.985217 containerd[1581]: time="2025-09-09T05:30:28.985179989Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.565406969s" Sep 9 05:30:28.985217 containerd[1581]: time="2025-09-09T05:30:28.985212954Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 05:30:28.987457 containerd[1581]: time="2025-09-09T05:30:28.987412848Z" level=info msg="CreateContainer within sandbox \"858f99a6a26366ee4296f652715ad55f95a8010baa04ec5e13c38e24562c6965\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 05:30:28.996920 containerd[1581]: time="2025-09-09T05:30:28.996869383Z" level=info msg="Container 4250a17de4e49b4dd61a2dc3f69ecdffef397ac1359d500f4efae57b08ef259b: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:30:29.003477 containerd[1581]: time="2025-09-09T05:30:29.003415786Z" level=info msg="CreateContainer within sandbox \"858f99a6a26366ee4296f652715ad55f95a8010baa04ec5e13c38e24562c6965\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4250a17de4e49b4dd61a2dc3f69ecdffef397ac1359d500f4efae57b08ef259b\"" Sep 9 05:30:29.006418 
containerd[1581]: time="2025-09-09T05:30:29.005164617Z" level=info msg="StartContainer for \"4250a17de4e49b4dd61a2dc3f69ecdffef397ac1359d500f4efae57b08ef259b\"" Sep 9 05:30:29.006418 containerd[1581]: time="2025-09-09T05:30:29.006147429Z" level=info msg="connecting to shim 4250a17de4e49b4dd61a2dc3f69ecdffef397ac1359d500f4efae57b08ef259b" address="unix:///run/containerd/s/0c3a24a50fd6a29cf7c6570b66d7e18da559db1b8718b14fa2a0e61e70f49da8" protocol=ttrpc version=3 Sep 9 05:30:29.062721 systemd[1]: Started cri-containerd-4250a17de4e49b4dd61a2dc3f69ecdffef397ac1359d500f4efae57b08ef259b.scope - libcontainer container 4250a17de4e49b4dd61a2dc3f69ecdffef397ac1359d500f4efae57b08ef259b. Sep 9 05:30:29.103720 containerd[1581]: time="2025-09-09T05:30:29.103504122Z" level=info msg="StartContainer for \"4250a17de4e49b4dd61a2dc3f69ecdffef397ac1359d500f4efae57b08ef259b\" returns successfully" Sep 9 05:30:31.056350 kubelet[2782]: E0909 05:30:31.056259 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:31.076516 kubelet[2782]: I0909 05:30:31.076426 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-zd6xb" podStartSLOduration=3.507753686 podStartE2EDuration="5.076402925s" podCreationTimestamp="2025-09-09 05:30:26 +0000 UTC" firstStartedPulling="2025-09-09 05:30:27.41733679 +0000 UTC m=+5.535548879" lastFinishedPulling="2025-09-09 05:30:28.985986029 +0000 UTC m=+7.104198118" observedRunningTime="2025-09-09 05:30:29.29649042 +0000 UTC m=+7.414702519" watchObservedRunningTime="2025-09-09 05:30:31.076402925 +0000 UTC m=+9.194615014" Sep 9 05:30:31.290207 kubelet[2782]: E0909 05:30:31.290155 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:32.240571 kubelet[2782]: E0909 05:30:32.240483 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:32.293058 kubelet[2782]: E0909 05:30:32.292762 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:33.293927 kubelet[2782]: E0909 05:30:33.293884 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:34.552643 sudo[1807]: pam_unix(sudo:session): session closed for user root Sep 9 05:30:34.558385 sshd[1806]: Connection closed by 10.0.0.1 port 34182 Sep 9 05:30:34.559479 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Sep 9 05:30:34.569421 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:34182.service: Deactivated successfully. Sep 9 05:30:34.574898 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 05:30:34.575155 systemd[1]: session-7.scope: Consumed 6.304s CPU time, 229.8M memory peak. Sep 9 05:30:34.579062 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Sep 9 05:30:34.582888 systemd-logind[1560]: Removed session 7. 
Sep 9 05:30:35.378827 kubelet[2782]: E0909 05:30:35.378769 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:36.303961 kubelet[2782]: E0909 05:30:36.303405 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:37.742203 systemd[1]: Created slice kubepods-besteffort-pod6b63e025_540b_4aa9_8a2a_9d64701a8c31.slice - libcontainer container kubepods-besteffort-pod6b63e025_540b_4aa9_8a2a_9d64701a8c31.slice. Sep 9 05:30:37.826980 kubelet[2782]: I0909 05:30:37.826902 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6b63e025-540b-4aa9-8a2a-9d64701a8c31-typha-certs\") pod \"calico-typha-5cb9bb8464-hsgc9\" (UID: \"6b63e025-540b-4aa9-8a2a-9d64701a8c31\") " pod="calico-system/calico-typha-5cb9bb8464-hsgc9" Sep 9 05:30:37.826980 kubelet[2782]: I0909 05:30:37.826962 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr7fl\" (UniqueName: \"kubernetes.io/projected/6b63e025-540b-4aa9-8a2a-9d64701a8c31-kube-api-access-jr7fl\") pod \"calico-typha-5cb9bb8464-hsgc9\" (UID: \"6b63e025-540b-4aa9-8a2a-9d64701a8c31\") " pod="calico-system/calico-typha-5cb9bb8464-hsgc9" Sep 9 05:30:37.826980 kubelet[2782]: I0909 05:30:37.826996 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b63e025-540b-4aa9-8a2a-9d64701a8c31-tigera-ca-bundle\") pod \"calico-typha-5cb9bb8464-hsgc9\" (UID: \"6b63e025-540b-4aa9-8a2a-9d64701a8c31\") " pod="calico-system/calico-typha-5cb9bb8464-hsgc9" Sep 9 05:30:38.049051 kubelet[2782]: E0909 05:30:38.048992 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:38.049729 containerd[1581]: time="2025-09-09T05:30:38.049658661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cb9bb8464-hsgc9,Uid:6b63e025-540b-4aa9-8a2a-9d64701a8c31,Namespace:calico-system,Attempt:0,}" Sep 9 05:30:38.420013 systemd[1]: Created slice kubepods-besteffort-pod5ab1c0aa_af9f_42a3_8f35_daa952bb9f26.slice - libcontainer container kubepods-besteffort-pod5ab1c0aa_af9f_42a3_8f35_daa952bb9f26.slice. 
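The three VerifyControllerAttachedVolume entries for calico-typha-5cb9bb8464-hsgc9 cover three distinct volume sources: typha-certs is a Secret, tigera-ca-bundle a ConfigMap, and kube-api-access-jr7fl the projected service-account token volume. Written out with the core/v1 API types (shapes only; the Tigera operator generates the real pod spec):

```go
// typha_volumes.go - the volume sources behind the calico-typha entries,
// expressed with k8s.io/api/core/v1 types. Illustrative only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

var typhaVolumes = []corev1.Volume{
	{
		Name: "typha-certs", // kubernetes.io/secret volume in the log
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "typha-certs"},
		},
	},
	{
		Name: "tigera-ca-bundle", // kubernetes.io/configmap volume
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"},
			},
		},
	},
	{
		Name: "kube-api-access-jr7fl", // kubernetes.io/projected token volume
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path: "token",
					}},
				},
			},
		},
	},
}

func main() {
	for _, v := range typhaVolumes {
		fmt.Println(v.Name)
	}
}
```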
Sep 9 05:30:38.433012 kubelet[2782]: I0909 05:30:38.432951 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-cni-net-dir\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.433012 kubelet[2782]: I0909 05:30:38.432996 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-node-certs\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.433012 kubelet[2782]: I0909 05:30:38.433018 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-tigera-ca-bundle\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.433012 kubelet[2782]: I0909 05:30:38.433036 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-flexvol-driver-host\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.434349 kubelet[2782]: I0909 05:30:38.434318 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-var-lib-calico\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.434349 kubelet[2782]: I0909 05:30:38.434347 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-cni-log-dir\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.434666 kubelet[2782]: I0909 05:30:38.434579 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-policysync\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.434779 containerd[1581]: time="2025-09-09T05:30:38.434723174Z" level=info msg="connecting to shim 2d7cb6f3f722f368386b64955ce467ae088ade96d03b75424ee74ad937258a3c" address="unix:///run/containerd/s/5c0780a4120de8a6377d2dbc8fb5827265acf8c62e5b28609bd7c7822704aca2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:30:38.435600 kubelet[2782]: I0909 05:30:38.434621 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-var-run-calico\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.435678 kubelet[2782]: I0909 05:30:38.435617 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-cni-bin-dir\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.435730 kubelet[2782]: I0909 05:30:38.435688 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-lib-modules\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.435767 kubelet[2782]: I0909 05:30:38.435736 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxww8\" (UniqueName: \"kubernetes.io/projected/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-kube-api-access-bxww8\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.435767 kubelet[2782]: I0909 05:30:38.435757 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ab1c0aa-af9f-42a3-8f35-daa952bb9f26-xtables-lock\") pod \"calico-node-qwms7\" (UID: \"5ab1c0aa-af9f-42a3-8f35-daa952bb9f26\") " pod="calico-system/calico-node-qwms7" Sep 9 05:30:38.489740 systemd[1]: Started cri-containerd-2d7cb6f3f722f368386b64955ce467ae088ade96d03b75424ee74ad937258a3c.scope - libcontainer container 2d7cb6f3f722f368386b64955ce467ae088ade96d03b75424ee74ad937258a3c. Sep 9 05:30:38.510587 kubelet[2782]: E0909 05:30:38.510145 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:30:38.536317 kubelet[2782]: I0909 05:30:38.536263 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8282d634-60a2-4800-bb3d-6e250193e296-kubelet-dir\") pod \"csi-node-driver-nh7hc\" (UID: \"8282d634-60a2-4800-bb3d-6e250193e296\") " pod="calico-system/csi-node-driver-nh7hc" Sep 9 05:30:38.536317 kubelet[2782]: I0909 05:30:38.536311 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8282d634-60a2-4800-bb3d-6e250193e296-varrun\") pod \"csi-node-driver-nh7hc\" (UID: \"8282d634-60a2-4800-bb3d-6e250193e296\") " pod="calico-system/csi-node-driver-nh7hc" Sep 9 05:30:38.536936 kubelet[2782]: I0909 05:30:38.536368 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82tkt\" (UniqueName: \"kubernetes.io/projected/8282d634-60a2-4800-bb3d-6e250193e296-kube-api-access-82tkt\") pod \"csi-node-driver-nh7hc\" (UID: \"8282d634-60a2-4800-bb3d-6e250193e296\") " pod="calico-system/csi-node-driver-nh7hc" Sep 9 05:30:38.536936 kubelet[2782]: I0909 05:30:38.536447 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8282d634-60a2-4800-bb3d-6e250193e296-registration-dir\") pod \"csi-node-driver-nh7hc\" (UID: \"8282d634-60a2-4800-bb3d-6e250193e296\") " pod="calico-system/csi-node-driver-nh7hc" 
Sep 9 05:30:38.536936 kubelet[2782]: I0909 05:30:38.536497 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8282d634-60a2-4800-bb3d-6e250193e296-socket-dir\") pod \"csi-node-driver-nh7hc\" (UID: \"8282d634-60a2-4800-bb3d-6e250193e296\") " pod="calico-system/csi-node-driver-nh7hc" Sep 9 05:30:38.541130 kubelet[2782]: E0909 05:30:38.541098 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:38.541130 kubelet[2782]: W0909 05:30:38.541126 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:38.541284 kubelet[2782]: E0909 05:30:38.541186 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:38.544740 kubelet[2782]: E0909 05:30:38.544157 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:38.544740 kubelet[2782]: W0909 05:30:38.544185 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:38.544740 kubelet[2782]: E0909 05:30:38.544587 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:38.544939 kubelet[2782]: E0909 05:30:38.544864 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:38.544939 kubelet[2782]: W0909 05:30:38.544875 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:38.544939 kubelet[2782]: E0909 05:30:38.544887 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:38.558067 kubelet[2782]: E0909 05:30:38.557804 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:38.558067 kubelet[2782]: W0909 05:30:38.557832 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:38.558067 kubelet[2782]: E0909 05:30:38.557859 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:30:38.608425 containerd[1581]: time="2025-09-09T05:30:38.608330992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cb9bb8464-hsgc9,Uid:6b63e025-540b-4aa9-8a2a-9d64701a8c31,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d7cb6f3f722f368386b64955ce467ae088ade96d03b75424ee74ad937258a3c\"" Sep 9 05:30:38.609429 kubelet[2782]: E0909 05:30:38.609380 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:38.610680 containerd[1581]: time="2025-09-09T05:30:38.610642257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 05:30:38.637801 kubelet[2782]: E0909 05:30:38.637743 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:38.637801 kubelet[2782]: W0909 05:30:38.637772 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:38.637801 kubelet[2782]: E0909 05:30:38.637798 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:38.638177 kubelet[2782]: E0909 05:30:38.638154 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:38.638218 kubelet[2782]: W0909 05:30:38.638171 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:38.638218 kubelet[2782]: E0909 05:30:38.638215 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:38.638599 kubelet[2782]: E0909 05:30:38.638554 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:38.638599 kubelet[2782]: W0909 05:30:38.638565 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:38.638599 kubelet[2782]: E0909 05:30:38.638599 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:38.639140 kubelet[2782]: E0909 05:30:38.639087 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:38.639140 kubelet[2782]: W0909 05:30:38.639125 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:38.639347 kubelet[2782]: E0909 05:30:38.639206 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:30:38.646904 kubelet[2782]: E0909 05:30:38.646870 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:38.646904 kubelet[2782]: W0909 05:30:38.646885 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:38.646904 kubelet[2782]: E0909 05:30:38.646896 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:38.655067 kubelet[2782]: E0909 05:30:38.655027 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:38.655067 kubelet[2782]: W0909 05:30:38.655052 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:38.655067 kubelet[2782]: E0909 05:30:38.655073 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:38.725705 containerd[1581]: time="2025-09-09T05:30:38.725491294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qwms7,Uid:5ab1c0aa-af9f-42a3-8f35-daa952bb9f26,Namespace:calico-system,Attempt:0,}" Sep 9 05:30:38.752720 containerd[1581]: time="2025-09-09T05:30:38.752649392Z" level=info msg="connecting to shim 1472964d4801ee7bb454b0c1b8b9a821f8c39c07a6a2d279edf099aff2c0c7a1" address="unix:///run/containerd/s/8218c1513f4456df77d61dbf19c5092b8d506d58c5cda89006511f9768713931" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:30:38.788864 systemd[1]: Started cri-containerd-1472964d4801ee7bb454b0c1b8b9a821f8c39c07a6a2d279edf099aff2c0c7a1.scope - libcontainer container 1472964d4801ee7bb454b0c1b8b9a821f8c39c07a6a2d279edf099aff2c0c7a1. Sep 9 05:30:39.135294 containerd[1581]: time="2025-09-09T05:30:39.135184461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qwms7,Uid:5ab1c0aa-af9f-42a3-8f35-daa952bb9f26,Namespace:calico-system,Attempt:0,} returns sandbox id \"1472964d4801ee7bb454b0c1b8b9a821f8c39c07a6a2d279edf099aff2c0c7a1\"" Sep 9 05:30:40.244765 kubelet[2782]: E0909 05:30:40.244680 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:30:41.036741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309812617.mount: Deactivated successfully. 
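Each FlexVolume triplet in the log is one failed probe: kubelet finds the nodeagent~uds directory under its plugin path, execs the uds driver with "init", the exec fails because the binary is not installed on this Flatcar node, and decoding the empty stdout as JSON then fails with "unexpected end of JSON input", so the plugin is skipped and re-probed on the next volume operation. A condensed reproduction of that call path (DriverStatus and probe are illustrative stand-ins for kubelet's internals):

```go
// flex_probe.go - reproduces the failure triplet from kubelet's FlexVolume
// prober: exec the driver with "init", then JSON-decode its stdout.
// DriverStatus here is a stand-in for kubelet's internal type.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probe(driver string) error {
	out, execErr := exec.Command(driver, "init").CombinedOutput()
	// The empty output is decoded even though the exec failed; with an
	// empty buffer json.Unmarshal returns "unexpected end of JSON input".
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return fmt.Errorf("driver call failed: %v, unmarshal: %w", execErr, err)
	}
	return nil
}

func main() {
	err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err) // exec failure followed by "unexpected end of JSON input"
}
```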
Sep 9 05:30:42.245002 kubelet[2782]: E0909 05:30:42.244928 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:30:43.052128 containerd[1581]: time="2025-09-09T05:30:43.052059411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:43.052906 containerd[1581]: time="2025-09-09T05:30:43.052857775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 05:30:43.054176 containerd[1581]: time="2025-09-09T05:30:43.054112143Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:43.056395 containerd[1581]: time="2025-09-09T05:30:43.056345932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:43.056862 containerd[1581]: time="2025-09-09T05:30:43.056833949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.446156114s" Sep 9 05:30:43.056938 containerd[1581]: time="2025-09-09T05:30:43.056864748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 05:30:43.058449 containerd[1581]: time="2025-09-09T05:30:43.058239258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 05:30:43.071763 containerd[1581]: time="2025-09-09T05:30:43.071703272Z" level=info msg="CreateContainer within sandbox \"2d7cb6f3f722f368386b64955ce467ae088ade96d03b75424ee74ad937258a3c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 05:30:43.083100 containerd[1581]: time="2025-09-09T05:30:43.083042997Z" level=info msg="Container 9c387cda2a7f973306daaae9c48c556087532a146ae12c3580e453ba50286043: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:30:43.095572 containerd[1581]: time="2025-09-09T05:30:43.095508213Z" level=info msg="CreateContainer within sandbox \"2d7cb6f3f722f368386b64955ce467ae088ade96d03b75424ee74ad937258a3c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9c387cda2a7f973306daaae9c48c556087532a146ae12c3580e453ba50286043\"" Sep 9 05:30:43.096250 containerd[1581]: time="2025-09-09T05:30:43.096210181Z" level=info msg="StartContainer for \"9c387cda2a7f973306daaae9c48c556087532a146ae12c3580e453ba50286043\"" Sep 9 05:30:43.097438 containerd[1581]: time="2025-09-09T05:30:43.097395276Z" level=info msg="connecting to shim 9c387cda2a7f973306daaae9c48c556087532a146ae12c3580e453ba50286043" address="unix:///run/containerd/s/5c0780a4120de8a6377d2dbc8fb5827265acf8c62e5b28609bd7c7822704aca2" protocol=ttrpc version=3 Sep 9 05:30:43.120729 systemd[1]: Started 
cri-containerd-9c387cda2a7f973306daaae9c48c556087532a146ae12c3580e453ba50286043.scope - libcontainer container 9c387cda2a7f973306daaae9c48c556087532a146ae12c3580e453ba50286043. Sep 9 05:30:43.319664 containerd[1581]: time="2025-09-09T05:30:43.319483448Z" level=info msg="StartContainer for \"9c387cda2a7f973306daaae9c48c556087532a146ae12c3580e453ba50286043\" returns successfully" Sep 9 05:30:43.325980 kubelet[2782]: E0909 05:30:43.325571 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:43.362767 kubelet[2782]: E0909 05:30:43.362705 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.362767 kubelet[2782]: W0909 05:30:43.362741 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.362767 kubelet[2782]: E0909 05:30:43.362771 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.363450 kubelet[2782]: E0909 05:30:43.363404 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.363450 kubelet[2782]: W0909 05:30:43.363422 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.363450 kubelet[2782]: E0909 05:30:43.363433 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.365459 kubelet[2782]: E0909 05:30:43.364885 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.365459 kubelet[2782]: W0909 05:30:43.364896 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.365459 kubelet[2782]: E0909 05:30:43.364908 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.365459 kubelet[2782]: E0909 05:30:43.365210 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.365459 kubelet[2782]: W0909 05:30:43.365219 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.365459 kubelet[2782]: E0909 05:30:43.365232 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:30:43.369524 kubelet[2782]: E0909 05:30:43.369494 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.369524 kubelet[2782]: W0909 05:30:43.369515 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.369616 kubelet[2782]: E0909 05:30:43.369527 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.373715 kubelet[2782]: E0909 05:30:43.373661 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.373715 kubelet[2782]: W0909 05:30:43.373708 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.373801 kubelet[2782]: E0909 05:30:43.373736 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.374713 kubelet[2782]: E0909 05:30:43.374691 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.374713 kubelet[2782]: W0909 05:30:43.374709 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.374789 kubelet[2782]: E0909 05:30:43.374736 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.375092 kubelet[2782]: E0909 05:30:43.375059 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.375136 kubelet[2782]: W0909 05:30:43.375090 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.375168 kubelet[2782]: E0909 05:30:43.375138 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.375374 kubelet[2782]: E0909 05:30:43.375357 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.375374 kubelet[2782]: W0909 05:30:43.375369 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.375430 kubelet[2782]: E0909 05:30:43.375384 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:30:43.376453 kubelet[2782]: E0909 05:30:43.376172 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.376453 kubelet[2782]: W0909 05:30:43.376189 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.376453 kubelet[2782]: E0909 05:30:43.376237 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.376710 kubelet[2782]: E0909 05:30:43.376683 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.376710 kubelet[2782]: W0909 05:30:43.376700 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.376822 kubelet[2782]: E0909 05:30:43.376796 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.379643 kubelet[2782]: E0909 05:30:43.379614 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.379643 kubelet[2782]: W0909 05:30:43.379640 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.379828 kubelet[2782]: E0909 05:30:43.379813 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.379996 kubelet[2782]: E0909 05:30:43.379980 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.379996 kubelet[2782]: W0909 05:30:43.379993 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.380144 kubelet[2782]: E0909 05:30:43.380114 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.380310 kubelet[2782]: E0909 05:30:43.380290 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.380310 kubelet[2782]: W0909 05:30:43.380305 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.380471 kubelet[2782]: E0909 05:30:43.380432 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:30:43.380583 kubelet[2782]: E0909 05:30:43.380564 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.380613 kubelet[2782]: W0909 05:30:43.380583 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.380613 kubelet[2782]: E0909 05:30:43.380600 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.380869 kubelet[2782]: E0909 05:30:43.380851 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.380869 kubelet[2782]: W0909 05:30:43.380864 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.380933 kubelet[2782]: E0909 05:30:43.380882 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.382810 kubelet[2782]: E0909 05:30:43.382790 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.382810 kubelet[2782]: W0909 05:30:43.382806 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.383046 kubelet[2782]: E0909 05:30:43.383010 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.383379 kubelet[2782]: E0909 05:30:43.383348 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.383379 kubelet[2782]: W0909 05:30:43.383364 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.383494 kubelet[2782]: E0909 05:30:43.383470 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.383703 kubelet[2782]: E0909 05:30:43.383664 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.383703 kubelet[2782]: W0909 05:30:43.383684 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.383812 kubelet[2782]: E0909 05:30:43.383795 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:30:43.384289 kubelet[2782]: E0909 05:30:43.384269 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.384289 kubelet[2782]: W0909 05:30:43.384285 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.384361 kubelet[2782]: E0909 05:30:43.384317 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.385834 kubelet[2782]: E0909 05:30:43.385815 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.385834 kubelet[2782]: W0909 05:30:43.385829 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.385895 kubelet[2782]: E0909 05:30:43.385859 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.386222 kubelet[2782]: E0909 05:30:43.386203 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.386222 kubelet[2782]: W0909 05:30:43.386219 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.386293 kubelet[2782]: E0909 05:30:43.386236 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:43.387176 kubelet[2782]: E0909 05:30:43.387151 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:43.387176 kubelet[2782]: W0909 05:30:43.387168 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:43.387176 kubelet[2782]: E0909 05:30:43.387179 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:30:44.245561 kubelet[2782]: E0909 05:30:44.245456 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:30:44.327118 kubelet[2782]: I0909 05:30:44.327063 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:30:44.327720 kubelet[2782]: E0909 05:30:44.327479 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:44.376878 kubelet[2782]: E0909 05:30:44.376815 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:44.376878 kubelet[2782]: W0909 05:30:44.376854 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:44.376878 kubelet[2782]: E0909 05:30:44.376889 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:44.377219 kubelet[2782]: E0909 05:30:44.377188 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:44.377219 kubelet[2782]: W0909 05:30:44.377202 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:44.377219 kubelet[2782]: E0909 05:30:44.377213 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:44.377650 kubelet[2782]: E0909 05:30:44.377455 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:44.377650 kubelet[2782]: W0909 05:30:44.377472 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:44.377650 kubelet[2782]: E0909 05:30:44.377501 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:30:44.377876 kubelet[2782]: E0909 05:30:44.377840 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:30:44.377876 kubelet[2782]: W0909 05:30:44.377857 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:30:44.377876 kubelet[2782]: E0909 05:30:44.377870 2782 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
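
The recurring dns.go:153 warning is kubelet's resolver cap at work: the glibc resolver only honours the first three nameserver entries in resolv.conf, so when the node lists more than three, kubelet applies only the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and reports the rest as omitted. A rough sketch of that truncation, assuming a standard resolv.conf layout (the limit constant mirrors glibc's MAXNS; this is not kubelet's actual code):

    // resolvcap.go: show which nameservers survive the three-entry cap.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("limits exceeded, omitting %v\n", servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }
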
Sep 9 05:30:45.821429 containerd[1581]: time="2025-09-09T05:30:45.821315144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:45.920076 containerd[1581]: time="2025-09-09T05:30:45.919998840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 05:30:46.091527 containerd[1581]: time="2025-09-09T05:30:46.091360128Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:46.171932 containerd[1581]: time="2025-09-09T05:30:46.171838811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:46.172776 containerd[1581]: time="2025-09-09T05:30:46.172739447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 3.114467055s" Sep 9 05:30:46.172873 containerd[1581]: time="2025-09-09T05:30:46.172774654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 05:30:46.174920 containerd[1581]: time="2025-09-09T05:30:46.174893354Z" level=info msg="CreateContainer within sandbox \"1472964d4801ee7bb454b0c1b8b9a821f8c39c07a6a2d279edf099aff2c0c7a1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 05:30:46.191336 containerd[1581]: time="2025-09-09T05:30:46.191206953Z" level=info msg="Container b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:30:46.212858 containerd[1581]: time="2025-09-09T05:30:46.212781214Z" level=info msg="CreateContainer within sandbox \"1472964d4801ee7bb454b0c1b8b9a821f8c39c07a6a2d279edf099aff2c0c7a1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2\"" Sep 9 05:30:46.214023 containerd[1581]: time="2025-09-09T05:30:46.213967306Z" level=info msg="StartContainer for \"b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2\"" Sep 9 05:30:46.216413 containerd[1581]: time="2025-09-09T05:30:46.216376844Z" level=info msg="connecting to shim b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2" address="unix:///run/containerd/s/8218c1513f4456df77d61dbf19c5092b8d506d58c5cda89006511f9768713931" protocol=ttrpc version=3 Sep 9 05:30:46.245584 kubelet[2782]: E0909 05:30:46.245074 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:30:46.253791 systemd[1]: Started 
cri-containerd-b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2.scope - libcontainer container b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2. Sep 9 05:30:46.330381 systemd[1]: cri-containerd-b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2.scope: Deactivated successfully. Sep 9 05:30:46.333181 containerd[1581]: time="2025-09-09T05:30:46.333139243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2\" id:\"b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2\" pid:3460 exited_at:{seconds:1757395846 nanos:331994369}" Sep 9 05:30:46.551586 containerd[1581]: time="2025-09-09T05:30:46.550889013Z" level=info msg="received exit event container_id:\"b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2\" id:\"b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2\" pid:3460 exited_at:{seconds:1757395846 nanos:331994369}" Sep 9 05:30:46.553412 containerd[1581]: time="2025-09-09T05:30:46.553377862Z" level=info msg="StartContainer for \"b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2\" returns successfully" Sep 9 05:30:46.587059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1b543445cf0ac93337fa351d8b26d3aff8fc56e9588751acea4c2749de6cae2-rootfs.mount: Deactivated successfully. Sep 9 05:30:47.560180 containerd[1581]: time="2025-09-09T05:30:47.560121585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 05:30:47.577135 kubelet[2782]: I0909 05:30:47.576532 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cb9bb8464-hsgc9" podStartSLOduration=6.128919045 podStartE2EDuration="10.576497374s" podCreationTimestamp="2025-09-09 05:30:37 +0000 UTC" firstStartedPulling="2025-09-09 05:30:38.610141963 +0000 UTC m=+16.728354052" lastFinishedPulling="2025-09-09 05:30:43.057720292 +0000 UTC m=+21.175932381" observedRunningTime="2025-09-09 05:30:43.342962985 +0000 UTC m=+21.461175074" watchObservedRunningTime="2025-09-09 05:30:47.576497374 +0000 UTC m=+25.694709463" Sep 9 05:30:48.244895 kubelet[2782]: E0909 05:30:48.244808 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:30:50.245160 kubelet[2782]: E0909 05:30:50.245058 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:30:50.424113 kubelet[2782]: I0909 05:30:50.424047 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:30:50.424594 kubelet[2782]: E0909 05:30:50.424525 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:50.564715 kubelet[2782]: E0909 05:30:50.564532 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:52.245490 
kubelet[2782]: E0909 05:30:52.245401 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:30:53.922463 containerd[1581]: time="2025-09-09T05:30:53.922320969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:53.923358 containerd[1581]: time="2025-09-09T05:30:53.923321718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 05:30:53.926169 containerd[1581]: time="2025-09-09T05:30:53.926088547Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:53.928924 containerd[1581]: time="2025-09-09T05:30:53.928878271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:30:53.929570 containerd[1581]: time="2025-09-09T05:30:53.929467645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 6.369306903s" Sep 9 05:30:53.929570 containerd[1581]: time="2025-09-09T05:30:53.929501790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 05:30:53.932174 containerd[1581]: time="2025-09-09T05:30:53.932119415Z" level=info msg="CreateContainer within sandbox \"1472964d4801ee7bb454b0c1b8b9a821f8c39c07a6a2d279edf099aff2c0c7a1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 05:30:53.949022 containerd[1581]: time="2025-09-09T05:30:53.948938979Z" level=info msg="Container 0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:30:53.967495 containerd[1581]: time="2025-09-09T05:30:53.967415164Z" level=info msg="CreateContainer within sandbox \"1472964d4801ee7bb454b0c1b8b9a821f8c39c07a6a2d279edf099aff2c0c7a1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b\"" Sep 9 05:30:53.968486 containerd[1581]: time="2025-09-09T05:30:53.968432345Z" level=info msg="StartContainer for \"0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b\"" Sep 9 05:30:53.970830 containerd[1581]: time="2025-09-09T05:30:53.970733486Z" level=info msg="connecting to shim 0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b" address="unix:///run/containerd/s/8218c1513f4456df77d61dbf19c5092b8d506d58c5cda89006511f9768713931" protocol=ttrpc version=3 Sep 9 05:30:54.005891 systemd[1]: Started cri-containerd-0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b.scope - libcontainer container 0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b. 
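
The span above is one complete CRI container lifecycle: containerd records ImageCreate events while pulling ghcr.io/flatcar/calico/cni:v3.30.3, creates the install-cni container inside the already-running pod sandbox, and starts its task, which spawns the shim referenced by the "connecting to shim ... protocol=ttrpc version=3" line; systemd then tracks the process tree as a cri-containerd-*.scope unit. A condensed sketch of the same pull/create/start sequence against containerd's Go client, assuming the v1 client API, the default socket path, and the k8s.io namespace (the container and snapshot IDs here are invented for illustration; kubelet itself drives these steps over the CRI gRPC surface instead):

    // pullrun.go: pull an image, create a container, start its task.
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.3",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        container, err := client.NewContainer(ctx, "install-cni-demo",
            containerd.WithNewSnapshot("install-cni-demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // Starting the task is the step that spawns the shim process.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }
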
Sep 9 05:30:54.245532 kubelet[2782]: E0909 05:30:54.245004 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:30:54.338080 containerd[1581]: time="2025-09-09T05:30:54.338013690Z" level=info msg="StartContainer for \"0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b\" returns successfully" Sep 9 05:30:55.887676 systemd[1]: cri-containerd-0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b.scope: Deactivated successfully. Sep 9 05:30:55.889223 containerd[1581]: time="2025-09-09T05:30:55.888321369Z" level=info msg="received exit event container_id:\"0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b\" id:\"0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b\" pid:3522 exited_at:{seconds:1757395855 nanos:887982173}" Sep 9 05:30:55.889870 systemd[1]: cri-containerd-0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b.scope: Consumed 764ms CPU time, 179.3M memory peak, 1.8M read from disk, 171.3M written to disk. Sep 9 05:30:55.891581 containerd[1581]: time="2025-09-09T05:30:55.890975588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b\" id:\"0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b\" pid:3522 exited_at:{seconds:1757395855 nanos:887982173}" Sep 9 05:30:55.916418 kubelet[2782]: I0909 05:30:55.916369 2782 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 05:30:55.928064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eff148c4ce0fa0568046ed09ecf635016e775648614ccf5b0aa04a54f53211b-rootfs.mount: Deactivated successfully. Sep 9 05:30:56.028064 systemd[1]: Created slice kubepods-besteffort-podac492e22_85ae_4713_8f79_3c6dddbfcd9d.slice - libcontainer container kubepods-besteffort-podac492e22_85ae_4713_8f79_3c6dddbfcd9d.slice. 
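
The "received exit event" and "TaskExit event in podsandbox handler" lines above arrive over containerd's event bus; the CRI plugin subscribes to it to learn when containers exit (the exited_at seconds/nanos in the log are the event payload), and systemd separately reports the scope's resource consumption as it is torn down. A small sketch of a subscriber on that same bus, assuming the socket and namespace from the previous sketch:

    // exitwatch.go: print envelopes from containerd's event stream.
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        envelopes, errs := client.Subscribe(ctx)
        for {
            select {
            case e := <-envelopes:
                // Exit notifications arrive under the /tasks/exit topic
                // with the container id and exit status in the payload.
                log.Printf("topic=%s namespace=%s", e.Topic, e.Namespace)
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }
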
Sep 9 05:30:56.252420 kubelet[2782]: I0909 05:30:56.251941 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-whisker-ca-bundle\") pod \"whisker-dc4d8575b-zd2fs\" (UID: \"ac492e22-85ae-4713-8f79-3c6dddbfcd9d\") " pod="calico-system/whisker-dc4d8575b-zd2fs" Sep 9 05:30:56.252420 kubelet[2782]: I0909 05:30:56.252021 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4ghg\" (UniqueName: \"kubernetes.io/projected/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-kube-api-access-w4ghg\") pod \"whisker-dc4d8575b-zd2fs\" (UID: \"ac492e22-85ae-4713-8f79-3c6dddbfcd9d\") " pod="calico-system/whisker-dc4d8575b-zd2fs" Sep 9 05:30:56.252420 kubelet[2782]: I0909 05:30:56.252045 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-whisker-backend-key-pair\") pod \"whisker-dc4d8575b-zd2fs\" (UID: \"ac492e22-85ae-4713-8f79-3c6dddbfcd9d\") " pod="calico-system/whisker-dc4d8575b-zd2fs" Sep 9 05:30:56.269096 systemd[1]: Created slice kubepods-besteffort-pod8282d634_60a2_4800_bb3d_6e250193e296.slice - libcontainer container kubepods-besteffort-pod8282d634_60a2_4800_bb3d_6e250193e296.slice. Sep 9 05:30:56.273204 containerd[1581]: time="2025-09-09T05:30:56.273141629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nh7hc,Uid:8282d634-60a2-4800-bb3d-6e250193e296,Namespace:calico-system,Attempt:0,}" Sep 9 05:30:56.278315 systemd[1]: Created slice kubepods-besteffort-podbccf19d1_5313_4b91_9b56_838625d8205d.slice - libcontainer container kubepods-besteffort-podbccf19d1_5313_4b91_9b56_838625d8205d.slice. Sep 9 05:30:56.284655 systemd[1]: Created slice kubepods-burstable-podc163eb63_f9f8_4a04_b35c_215954baf990.slice - libcontainer container kubepods-burstable-podc163eb63_f9f8_4a04_b35c_215954baf990.slice. Sep 9 05:30:56.289647 systemd[1]: Created slice kubepods-besteffort-podf711d909_bcc2_489e_b152_66373d005f4b.slice - libcontainer container kubepods-besteffort-podf711d909_bcc2_489e_b152_66373d005f4b.slice. Sep 9 05:30:56.298359 systemd[1]: Created slice kubepods-besteffort-pod7e14dcbb_eeec_4f50_b6c8_5afcd301051c.slice - libcontainer container kubepods-besteffort-pod7e14dcbb_eeec_4f50_b6c8_5afcd301051c.slice. Sep 9 05:30:56.304364 systemd[1]: Created slice kubepods-besteffort-podb10d24f7_9257_4b62_a1a3_f25bb881f8eb.slice - libcontainer container kubepods-besteffort-podb10d24f7_9257_4b62_a1a3_f25bb881f8eb.slice. Sep 9 05:30:56.309728 systemd[1]: Created slice kubepods-besteffort-pod673b51d3_fcbe_4c6d_adf0_cd744f582691.slice - libcontainer container kubepods-besteffort-pod673b51d3_fcbe_4c6d_adf0_cd744f582691.slice. Sep 9 05:30:56.315063 systemd[1]: Created slice kubepods-burstable-podb7288b17_f0e8_4160_8e70_36c1a2ee883e.slice - libcontainer container kubepods-burstable-podb7288b17_f0e8_4160_8e70_36c1a2ee883e.slice. 
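
Each operationExecutor.VerifyControllerAttachedVolume line above corresponds to one volume source in the whisker pod's spec (a ConfigMap CA bundle, a projected service-account token, and a Secret key pair); the reconciler confirms the volume is attached before kubelet mounts it into the container. As a sketch, the first and last of those sources would look like this in client-go's core/v1 types, with only the names taken from the log:

    // whiskervols.go: the whisker pod's volume sources, reconstructed.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vols := []corev1.Volume{
            {
                Name: "whisker-ca-bundle",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "whisker-ca-bundle"},
                    },
                },
            },
            {
                Name: "whisker-backend-key-pair",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "whisker-backend-key-pair"},
                },
            },
        }
        for _, v := range vols {
            fmt.Printf("volume %q verified before mount\n", v.Name)
        }
    }
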
Sep 9 05:30:56.353515 kubelet[2782]: I0909 05:30:56.353407 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c163eb63-f9f8-4a04-b35c-215954baf990-config-volume\") pod \"coredns-668d6bf9bc-pr9cs\" (UID: \"c163eb63-f9f8-4a04-b35c-215954baf990\") " pod="kube-system/coredns-668d6bf9bc-pr9cs" Sep 9 05:30:56.353515 kubelet[2782]: I0909 05:30:56.353470 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk9cb\" (UniqueName: \"kubernetes.io/projected/f711d909-bcc2-489e-b152-66373d005f4b-kube-api-access-jk9cb\") pod \"goldmane-54d579b49d-7rchx\" (UID: \"f711d909-bcc2-489e-b152-66373d005f4b\") " pod="calico-system/goldmane-54d579b49d-7rchx" Sep 9 05:30:56.353515 kubelet[2782]: I0909 05:30:56.353498 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr7sh\" (UniqueName: \"kubernetes.io/projected/c163eb63-f9f8-4a04-b35c-215954baf990-kube-api-access-lr7sh\") pod \"coredns-668d6bf9bc-pr9cs\" (UID: \"c163eb63-f9f8-4a04-b35c-215954baf990\") " pod="kube-system/coredns-668d6bf9bc-pr9cs" Sep 9 05:30:56.353515 kubelet[2782]: I0909 05:30:56.353522 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7288b17-f0e8-4160-8e70-36c1a2ee883e-config-volume\") pod \"coredns-668d6bf9bc-jggbv\" (UID: \"b7288b17-f0e8-4160-8e70-36c1a2ee883e\") " pod="kube-system/coredns-668d6bf9bc-jggbv" Sep 9 05:30:56.353515 kubelet[2782]: I0909 05:30:56.353563 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bwx7\" (UniqueName: \"kubernetes.io/projected/b7288b17-f0e8-4160-8e70-36c1a2ee883e-kube-api-access-6bwx7\") pod \"coredns-668d6bf9bc-jggbv\" (UID: \"b7288b17-f0e8-4160-8e70-36c1a2ee883e\") " pod="kube-system/coredns-668d6bf9bc-jggbv" Sep 9 05:30:56.353887 kubelet[2782]: I0909 05:30:56.353581 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f711d909-bcc2-489e-b152-66373d005f4b-goldmane-key-pair\") pod \"goldmane-54d579b49d-7rchx\" (UID: \"f711d909-bcc2-489e-b152-66373d005f4b\") " pod="calico-system/goldmane-54d579b49d-7rchx" Sep 9 05:30:56.353887 kubelet[2782]: I0909 05:30:56.353599 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f711d909-bcc2-489e-b152-66373d005f4b-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-7rchx\" (UID: \"f711d909-bcc2-489e-b152-66373d005f4b\") " pod="calico-system/goldmane-54d579b49d-7rchx" Sep 9 05:30:56.353887 kubelet[2782]: I0909 05:30:56.353620 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/673b51d3-fcbe-4c6d-adf0-cd744f582691-calico-apiserver-certs\") pod \"calico-apiserver-687ff77b4f-znvzn\" (UID: \"673b51d3-fcbe-4c6d-adf0-cd744f582691\") " pod="calico-apiserver/calico-apiserver-687ff77b4f-znvzn" Sep 9 05:30:56.353887 kubelet[2782]: I0909 05:30:56.353642 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f711d909-bcc2-489e-b152-66373d005f4b-config\") pod 
\"goldmane-54d579b49d-7rchx\" (UID: \"f711d909-bcc2-489e-b152-66373d005f4b\") " pod="calico-system/goldmane-54d579b49d-7rchx" Sep 9 05:30:56.353887 kubelet[2782]: I0909 05:30:56.353680 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x69wc\" (UniqueName: \"kubernetes.io/projected/7e14dcbb-eeec-4f50-b6c8-5afcd301051c-kube-api-access-x69wc\") pod \"calico-apiserver-54d4f99bb7-lgdcj\" (UID: \"7e14dcbb-eeec-4f50-b6c8-5afcd301051c\") " pod="calico-apiserver/calico-apiserver-54d4f99bb7-lgdcj" Sep 9 05:30:56.354057 kubelet[2782]: I0909 05:30:56.353707 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b10d24f7-9257-4b62-a1a3-f25bb881f8eb-calico-apiserver-certs\") pod \"calico-apiserver-54d4f99bb7-f2pkg\" (UID: \"b10d24f7-9257-4b62-a1a3-f25bb881f8eb\") " pod="calico-apiserver/calico-apiserver-54d4f99bb7-f2pkg" Sep 9 05:30:56.354057 kubelet[2782]: I0909 05:30:56.353768 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtqpt\" (UniqueName: \"kubernetes.io/projected/673b51d3-fcbe-4c6d-adf0-cd744f582691-kube-api-access-jtqpt\") pod \"calico-apiserver-687ff77b4f-znvzn\" (UID: \"673b51d3-fcbe-4c6d-adf0-cd744f582691\") " pod="calico-apiserver/calico-apiserver-687ff77b4f-znvzn" Sep 9 05:30:56.354057 kubelet[2782]: I0909 05:30:56.353808 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thblt\" (UniqueName: \"kubernetes.io/projected/b10d24f7-9257-4b62-a1a3-f25bb881f8eb-kube-api-access-thblt\") pod \"calico-apiserver-54d4f99bb7-f2pkg\" (UID: \"b10d24f7-9257-4b62-a1a3-f25bb881f8eb\") " pod="calico-apiserver/calico-apiserver-54d4f99bb7-f2pkg" Sep 9 05:30:56.354057 kubelet[2782]: I0909 05:30:56.353833 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bccf19d1-5313-4b91-9b56-838625d8205d-tigera-ca-bundle\") pod \"calico-kube-controllers-b6b778d54-mgs42\" (UID: \"bccf19d1-5313-4b91-9b56-838625d8205d\") " pod="calico-system/calico-kube-controllers-b6b778d54-mgs42" Sep 9 05:30:56.354057 kubelet[2782]: I0909 05:30:56.353854 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ght2\" (UniqueName: \"kubernetes.io/projected/bccf19d1-5313-4b91-9b56-838625d8205d-kube-api-access-2ght2\") pod \"calico-kube-controllers-b6b778d54-mgs42\" (UID: \"bccf19d1-5313-4b91-9b56-838625d8205d\") " pod="calico-system/calico-kube-controllers-b6b778d54-mgs42" Sep 9 05:30:56.354210 kubelet[2782]: I0909 05:30:56.353875 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e14dcbb-eeec-4f50-b6c8-5afcd301051c-calico-apiserver-certs\") pod \"calico-apiserver-54d4f99bb7-lgdcj\" (UID: \"7e14dcbb-eeec-4f50-b6c8-5afcd301051c\") " pod="calico-apiserver/calico-apiserver-54d4f99bb7-lgdcj" Sep 9 05:30:56.592641 kubelet[2782]: E0909 05:30:56.592585 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:56.593697 containerd[1581]: time="2025-09-09T05:30:56.593087463Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-pr9cs,Uid:c163eb63-f9f8-4a04-b35c-215954baf990,Namespace:kube-system,Attempt:0,}" Sep 9 05:30:56.595097 containerd[1581]: time="2025-09-09T05:30:56.594928651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7rchx,Uid:f711d909-bcc2-489e-b152-66373d005f4b,Namespace:calico-system,Attempt:0,}" Sep 9 05:30:56.601737 containerd[1581]: time="2025-09-09T05:30:56.601700823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d4f99bb7-lgdcj,Uid:7e14dcbb-eeec-4f50-b6c8-5afcd301051c,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:30:56.608254 containerd[1581]: time="2025-09-09T05:30:56.608226837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d4f99bb7-f2pkg,Uid:b10d24f7-9257-4b62-a1a3-f25bb881f8eb,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:30:56.613844 containerd[1581]: time="2025-09-09T05:30:56.613783232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687ff77b4f-znvzn,Uid:673b51d3-fcbe-4c6d-adf0-cd744f582691,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:30:56.618156 kubelet[2782]: E0909 05:30:56.618122 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:30:56.618436 containerd[1581]: time="2025-09-09T05:30:56.618407592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jggbv,Uid:b7288b17-f0e8-4160-8e70-36c1a2ee883e,Namespace:kube-system,Attempt:0,}" Sep 9 05:30:56.632971 containerd[1581]: time="2025-09-09T05:30:56.632874444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc4d8575b-zd2fs,Uid:ac492e22-85ae-4713-8f79-3c6dddbfcd9d,Namespace:calico-system,Attempt:0,}" Sep 9 05:30:56.884732 containerd[1581]: time="2025-09-09T05:30:56.884595348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b6b778d54-mgs42,Uid:bccf19d1-5313-4b91-9b56-838625d8205d,Namespace:calico-system,Attempt:0,}" Sep 9 05:30:57.070478 containerd[1581]: time="2025-09-09T05:30:57.070384397Z" level=error msg="Failed to destroy network for sandbox \"cdf23ba66fab7f063b4b9b622aace7b772703713a4906bd787357ccb3f86e7d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.073478 systemd[1]: run-netns-cni\x2dc98535ae\x2d9ee5\x2db8f4\x2dc4f8\x2dc2444b7befd7.mount: Deactivated successfully. 
Sep 9 05:30:57.484573 containerd[1581]: time="2025-09-09T05:30:57.483966674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nh7hc,Uid:8282d634-60a2-4800-bb3d-6e250193e296,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf23ba66fab7f063b4b9b622aace7b772703713a4906bd787357ccb3f86e7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.493770 kubelet[2782]: E0909 05:30:57.493687 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf23ba66fab7f063b4b9b622aace7b772703713a4906bd787357ccb3f86e7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.494288 kubelet[2782]: E0909 05:30:57.494263 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf23ba66fab7f063b4b9b622aace7b772703713a4906bd787357ccb3f86e7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nh7hc" Sep 9 05:30:57.494383 kubelet[2782]: E0909 05:30:57.494367 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf23ba66fab7f063b4b9b622aace7b772703713a4906bd787357ccb3f86e7d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nh7hc" Sep 9 05:30:57.494789 kubelet[2782]: E0909 05:30:57.494490 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nh7hc_calico-system(8282d634-60a2-4800-bb3d-6e250193e296)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nh7hc_calico-system(8282d634-60a2-4800-bb3d-6e250193e296)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdf23ba66fab7f063b4b9b622aace7b772703713a4906bd787357ccb3f86e7d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:30:57.607697 containerd[1581]: time="2025-09-09T05:30:57.607455859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 05:30:57.628821 containerd[1581]: time="2025-09-09T05:30:57.628660078Z" level=error msg="Failed to destroy network for sandbox \"48c7c9563ee81c539b071bcd3c49e709ed2f9b28703d57a9a6a4550d52e06cc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.636699 containerd[1581]: time="2025-09-09T05:30:57.636634604Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-pr9cs,Uid:c163eb63-f9f8-4a04-b35c-215954baf990,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c7c9563ee81c539b071bcd3c49e709ed2f9b28703d57a9a6a4550d52e06cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.637280 kubelet[2782]: E0909 05:30:57.637227 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c7c9563ee81c539b071bcd3c49e709ed2f9b28703d57a9a6a4550d52e06cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.637504 kubelet[2782]: E0909 05:30:57.637446 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c7c9563ee81c539b071bcd3c49e709ed2f9b28703d57a9a6a4550d52e06cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pr9cs" Sep 9 05:30:57.637620 kubelet[2782]: E0909 05:30:57.637602 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c7c9563ee81c539b071bcd3c49e709ed2f9b28703d57a9a6a4550d52e06cc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pr9cs" Sep 9 05:30:57.637749 kubelet[2782]: E0909 05:30:57.637719 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pr9cs_kube-system(c163eb63-f9f8-4a04-b35c-215954baf990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pr9cs_kube-system(c163eb63-f9f8-4a04-b35c-215954baf990)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48c7c9563ee81c539b071bcd3c49e709ed2f9b28703d57a9a6a4550d52e06cc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pr9cs" podUID="c163eb63-f9f8-4a04-b35c-215954baf990" Sep 9 05:30:57.650114 containerd[1581]: time="2025-09-09T05:30:57.650038929Z" level=error msg="Failed to destroy network for sandbox \"49ffb3ed3854f9550c7965740264aa8774ec68057bfc7ebea29997dceb805ef7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.654508 containerd[1581]: time="2025-09-09T05:30:57.654032383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7rchx,Uid:f711d909-bcc2-489e-b152-66373d005f4b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ffb3ed3854f9550c7965740264aa8774ec68057bfc7ebea29997dceb805ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.655064 kubelet[2782]: E0909 05:30:57.654996 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ffb3ed3854f9550c7965740264aa8774ec68057bfc7ebea29997dceb805ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.655156 kubelet[2782]: E0909 05:30:57.655087 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ffb3ed3854f9550c7965740264aa8774ec68057bfc7ebea29997dceb805ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7rchx" Sep 9 05:30:57.655156 kubelet[2782]: E0909 05:30:57.655121 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ffb3ed3854f9550c7965740264aa8774ec68057bfc7ebea29997dceb805ef7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7rchx" Sep 9 05:30:57.656599 kubelet[2782]: E0909 05:30:57.655195 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-7rchx_calico-system(f711d909-bcc2-489e-b152-66373d005f4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-7rchx_calico-system(f711d909-bcc2-489e-b152-66373d005f4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49ffb3ed3854f9550c7965740264aa8774ec68057bfc7ebea29997dceb805ef7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-7rchx" podUID="f711d909-bcc2-489e-b152-66373d005f4b" Sep 9 05:30:57.667564 containerd[1581]: time="2025-09-09T05:30:57.667474451Z" level=error msg="Failed to destroy network for sandbox \"93e2941cafad76661ebd5c56f483457ece009e465f56133b178128e89a6b3b8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.672963 containerd[1581]: time="2025-09-09T05:30:57.672869164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d4f99bb7-lgdcj,Uid:7e14dcbb-eeec-4f50-b6c8-5afcd301051c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e2941cafad76661ebd5c56f483457ece009e465f56133b178128e89a6b3b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.673251 kubelet[2782]: E0909 05:30:57.673205 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"93e2941cafad76661ebd5c56f483457ece009e465f56133b178128e89a6b3b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.673383 kubelet[2782]: E0909 05:30:57.673294 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e2941cafad76661ebd5c56f483457ece009e465f56133b178128e89a6b3b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d4f99bb7-lgdcj" Sep 9 05:30:57.673383 kubelet[2782]: E0909 05:30:57.673322 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93e2941cafad76661ebd5c56f483457ece009e465f56133b178128e89a6b3b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d4f99bb7-lgdcj" Sep 9 05:30:57.673478 kubelet[2782]: E0909 05:30:57.673378 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54d4f99bb7-lgdcj_calico-apiserver(7e14dcbb-eeec-4f50-b6c8-5afcd301051c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54d4f99bb7-lgdcj_calico-apiserver(7e14dcbb-eeec-4f50-b6c8-5afcd301051c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93e2941cafad76661ebd5c56f483457ece009e465f56133b178128e89a6b3b8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54d4f99bb7-lgdcj" podUID="7e14dcbb-eeec-4f50-b6c8-5afcd301051c" Sep 9 05:30:57.674930 containerd[1581]: time="2025-09-09T05:30:57.674771065Z" level=error msg="Failed to destroy network for sandbox \"b38ca7b6b2c84ca45d7dcf35dcc96d6de81b39393c396b36d475e18fda868d79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.675772 containerd[1581]: time="2025-09-09T05:30:57.675714972Z" level=error msg="Failed to destroy network for sandbox \"c06e4fb6ee899af17021b777b605a00e77899a3208869a22a7512b09c7f5e4a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.678818 containerd[1581]: time="2025-09-09T05:30:57.678693746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b6b778d54-mgs42,Uid:bccf19d1-5313-4b91-9b56-838625d8205d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38ca7b6b2c84ca45d7dcf35dcc96d6de81b39393c396b36d475e18fda868d79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.679664 kubelet[2782]: E0909 05:30:57.679630 2782 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38ca7b6b2c84ca45d7dcf35dcc96d6de81b39393c396b36d475e18fda868d79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.679795 kubelet[2782]: E0909 05:30:57.679774 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38ca7b6b2c84ca45d7dcf35dcc96d6de81b39393c396b36d475e18fda868d79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b6b778d54-mgs42" Sep 9 05:30:57.680389 containerd[1581]: time="2025-09-09T05:30:57.680333378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687ff77b4f-znvzn,Uid:673b51d3-fcbe-4c6d-adf0-cd744f582691,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06e4fb6ee899af17021b777b605a00e77899a3208869a22a7512b09c7f5e4a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.680525 kubelet[2782]: E0909 05:30:57.680489 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06e4fb6ee899af17021b777b605a00e77899a3208869a22a7512b09c7f5e4a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.680604 kubelet[2782]: E0909 05:30:57.680552 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06e4fb6ee899af17021b777b605a00e77899a3208869a22a7512b09c7f5e4a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687ff77b4f-znvzn" Sep 9 05:30:57.680604 kubelet[2782]: E0909 05:30:57.680574 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06e4fb6ee899af17021b777b605a00e77899a3208869a22a7512b09c7f5e4a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687ff77b4f-znvzn" Sep 9 05:30:57.680704 kubelet[2782]: E0909 05:30:57.680633 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-687ff77b4f-znvzn_calico-apiserver(673b51d3-fcbe-4c6d-adf0-cd744f582691)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-687ff77b4f-znvzn_calico-apiserver(673b51d3-fcbe-4c6d-adf0-cd744f582691)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c06e4fb6ee899af17021b777b605a00e77899a3208869a22a7512b09c7f5e4a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-687ff77b4f-znvzn" podUID="673b51d3-fcbe-4c6d-adf0-cd744f582691" Sep 9 05:30:57.680843 kubelet[2782]: E0909 05:30:57.680506 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38ca7b6b2c84ca45d7dcf35dcc96d6de81b39393c396b36d475e18fda868d79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b6b778d54-mgs42" Sep 9 05:30:57.681042 kubelet[2782]: E0909 05:30:57.681012 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b6b778d54-mgs42_calico-system(bccf19d1-5313-4b91-9b56-838625d8205d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b6b778d54-mgs42_calico-system(bccf19d1-5313-4b91-9b56-838625d8205d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b38ca7b6b2c84ca45d7dcf35dcc96d6de81b39393c396b36d475e18fda868d79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b6b778d54-mgs42" podUID="bccf19d1-5313-4b91-9b56-838625d8205d" Sep 9 05:30:57.685445 containerd[1581]: time="2025-09-09T05:30:57.685287301Z" level=error msg="Failed to destroy network for sandbox \"3fed1ded1eba37f15928eee5b7898d1ad7d3ce0d6a8fc9c8fbcd70fbc0ac87a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.689373 containerd[1581]: time="2025-09-09T05:30:57.689310512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jggbv,Uid:b7288b17-f0e8-4160-8e70-36c1a2ee883e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fed1ded1eba37f15928eee5b7898d1ad7d3ce0d6a8fc9c8fbcd70fbc0ac87a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.689921 kubelet[2782]: E0909 05:30:57.689876 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fed1ded1eba37f15928eee5b7898d1ad7d3ce0d6a8fc9c8fbcd70fbc0ac87a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.690169 kubelet[2782]: E0909 05:30:57.690076 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fed1ded1eba37f15928eee5b7898d1ad7d3ce0d6a8fc9c8fbcd70fbc0ac87a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jggbv" Sep 9 05:30:57.690169 kubelet[2782]: E0909 05:30:57.690132 2782 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fed1ded1eba37f15928eee5b7898d1ad7d3ce0d6a8fc9c8fbcd70fbc0ac87a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jggbv" Sep 9 05:30:57.690463 kubelet[2782]: E0909 05:30:57.690348 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jggbv_kube-system(b7288b17-f0e8-4160-8e70-36c1a2ee883e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jggbv_kube-system(b7288b17-f0e8-4160-8e70-36c1a2ee883e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fed1ded1eba37f15928eee5b7898d1ad7d3ce0d6a8fc9c8fbcd70fbc0ac87a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jggbv" podUID="b7288b17-f0e8-4160-8e70-36c1a2ee883e" Sep 9 05:30:57.694817 containerd[1581]: time="2025-09-09T05:30:57.694733509Z" level=error msg="Failed to destroy network for sandbox \"5d853908dc6fc75239b681bcea98548962f979cac4d2be3c6fd37e9c399eb1dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.698362 containerd[1581]: time="2025-09-09T05:30:57.698281235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc4d8575b-zd2fs,Uid:ac492e22-85ae-4713-8f79-3c6dddbfcd9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d853908dc6fc75239b681bcea98548962f979cac4d2be3c6fd37e9c399eb1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.698988 kubelet[2782]: E0909 05:30:57.698917 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d853908dc6fc75239b681bcea98548962f979cac4d2be3c6fd37e9c399eb1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.699084 kubelet[2782]: E0909 05:30:57.699016 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d853908dc6fc75239b681bcea98548962f979cac4d2be3c6fd37e9c399eb1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc4d8575b-zd2fs" Sep 9 05:30:57.699084 kubelet[2782]: E0909 05:30:57.699044 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d853908dc6fc75239b681bcea98548962f979cac4d2be3c6fd37e9c399eb1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-dc4d8575b-zd2fs" Sep 9 05:30:57.699216 kubelet[2782]: E0909 05:30:57.699112 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dc4d8575b-zd2fs_calico-system(ac492e22-85ae-4713-8f79-3c6dddbfcd9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dc4d8575b-zd2fs_calico-system(ac492e22-85ae-4713-8f79-3c6dddbfcd9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d853908dc6fc75239b681bcea98548962f979cac4d2be3c6fd37e9c399eb1dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dc4d8575b-zd2fs" podUID="ac492e22-85ae-4713-8f79-3c6dddbfcd9d" Sep 9 05:30:57.705643 containerd[1581]: time="2025-09-09T05:30:57.705573751Z" level=error msg="Failed to destroy network for sandbox \"fa484b94a9ab221508ae6d393a17f3c4314a2cfe6226abae3261923e53d85f5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.707207 containerd[1581]: time="2025-09-09T05:30:57.707152117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d4f99bb7-f2pkg,Uid:b10d24f7-9257-4b62-a1a3-f25bb881f8eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa484b94a9ab221508ae6d393a17f3c4314a2cfe6226abae3261923e53d85f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.707456 kubelet[2782]: E0909 05:30:57.707409 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa484b94a9ab221508ae6d393a17f3c4314a2cfe6226abae3261923e53d85f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:30:57.707509 kubelet[2782]: E0909 05:30:57.707485 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa484b94a9ab221508ae6d393a17f3c4314a2cfe6226abae3261923e53d85f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d4f99bb7-f2pkg" Sep 9 05:30:57.707557 kubelet[2782]: E0909 05:30:57.707513 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa484b94a9ab221508ae6d393a17f3c4314a2cfe6226abae3261923e53d85f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d4f99bb7-f2pkg" Sep 9 05:30:57.707619 kubelet[2782]: E0909 05:30:57.707585 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54d4f99bb7-f2pkg_calico-apiserver(b10d24f7-9257-4b62-a1a3-f25bb881f8eb)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-54d4f99bb7-f2pkg_calico-apiserver(b10d24f7-9257-4b62-a1a3-f25bb881f8eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa484b94a9ab221508ae6d393a17f3c4314a2cfe6226abae3261923e53d85f5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54d4f99bb7-f2pkg" podUID="b10d24f7-9257-4b62-a1a3-f25bb881f8eb" Sep 9 05:30:57.929214 systemd[1]: run-netns-cni\x2d2544aabf\x2d433f\x2dab52\x2dd65d\x2d9687f43d04db.mount: Deactivated successfully. Sep 9 05:30:57.929364 systemd[1]: run-netns-cni\x2ddc3051e1\x2d6ec6\x2d4632\x2d9059\x2d0cee10b58c31.mount: Deactivated successfully. Sep 9 05:30:57.929474 systemd[1]: run-netns-cni\x2d81b2cc8f\x2d6165\x2d4e94\x2dd12f\x2d21f6ddf6a877.mount: Deactivated successfully. Sep 9 05:31:08.246024 containerd[1581]: time="2025-09-09T05:31:08.245786596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc4d8575b-zd2fs,Uid:ac492e22-85ae-4713-8f79-3c6dddbfcd9d,Namespace:calico-system,Attempt:0,}" Sep 9 05:31:08.246024 containerd[1581]: time="2025-09-09T05:31:08.245868371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nh7hc,Uid:8282d634-60a2-4800-bb3d-6e250193e296,Namespace:calico-system,Attempt:0,}" Sep 9 05:31:09.245694 containerd[1581]: time="2025-09-09T05:31:09.245509231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687ff77b4f-znvzn,Uid:673b51d3-fcbe-4c6d-adf0-cd744f582691,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:31:09.245694 containerd[1581]: time="2025-09-09T05:31:09.245575997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b6b778d54-mgs42,Uid:bccf19d1-5313-4b91-9b56-838625d8205d,Namespace:calico-system,Attempt:0,}" Sep 9 05:31:09.245694 containerd[1581]: time="2025-09-09T05:31:09.245512016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d4f99bb7-f2pkg,Uid:b10d24f7-9257-4b62-a1a3-f25bb881f8eb,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:31:09.245694 containerd[1581]: time="2025-09-09T05:31:09.245510142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7rchx,Uid:f711d909-bcc2-489e-b152-66373d005f4b,Namespace:calico-system,Attempt:0,}" Sep 9 05:31:10.063773 containerd[1581]: time="2025-09-09T05:31:10.063665376Z" level=error msg="Failed to destroy network for sandbox \"4438dd95defb26142bc66fe498627c618a1aee058409a56e0c28aecd89cbd5a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.070717 containerd[1581]: time="2025-09-09T05:31:10.070654036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc4d8575b-zd2fs,Uid:ac492e22-85ae-4713-8f79-3c6dddbfcd9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4438dd95defb26142bc66fe498627c618a1aee058409a56e0c28aecd89cbd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.071508 kubelet[2782]: E0909 05:31:10.071181 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"4438dd95defb26142bc66fe498627c618a1aee058409a56e0c28aecd89cbd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.071508 kubelet[2782]: E0909 05:31:10.071259 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4438dd95defb26142bc66fe498627c618a1aee058409a56e0c28aecd89cbd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc4d8575b-zd2fs" Sep 9 05:31:10.071508 kubelet[2782]: E0909 05:31:10.071281 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4438dd95defb26142bc66fe498627c618a1aee058409a56e0c28aecd89cbd5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc4d8575b-zd2fs" Sep 9 05:31:10.071965 kubelet[2782]: E0909 05:31:10.071328 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dc4d8575b-zd2fs_calico-system(ac492e22-85ae-4713-8f79-3c6dddbfcd9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dc4d8575b-zd2fs_calico-system(ac492e22-85ae-4713-8f79-3c6dddbfcd9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4438dd95defb26142bc66fe498627c618a1aee058409a56e0c28aecd89cbd5a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dc4d8575b-zd2fs" podUID="ac492e22-85ae-4713-8f79-3c6dddbfcd9d" Sep 9 05:31:10.082902 containerd[1581]: time="2025-09-09T05:31:10.082830617Z" level=error msg="Failed to destroy network for sandbox \"da1135f56301ff648e6065a1758e1c2b9a957b458d6d18cdb0d56bffa2033316\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.085308 containerd[1581]: time="2025-09-09T05:31:10.085247556Z" level=error msg="Failed to destroy network for sandbox \"4775939a2dde072bea189704364af4bf0bebd7ddd83d778e131d2f75f9211b51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.089694 containerd[1581]: time="2025-09-09T05:31:10.089615491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nh7hc,Uid:8282d634-60a2-4800-bb3d-6e250193e296,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da1135f56301ff648e6065a1758e1c2b9a957b458d6d18cdb0d56bffa2033316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.091033 containerd[1581]: time="2025-09-09T05:31:10.089766958Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-b6b778d54-mgs42,Uid:bccf19d1-5313-4b91-9b56-838625d8205d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4775939a2dde072bea189704364af4bf0bebd7ddd83d778e131d2f75f9211b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.091102 kubelet[2782]: E0909 05:31:10.089943 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da1135f56301ff648e6065a1758e1c2b9a957b458d6d18cdb0d56bffa2033316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.091102 kubelet[2782]: E0909 05:31:10.090033 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da1135f56301ff648e6065a1758e1c2b9a957b458d6d18cdb0d56bffa2033316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nh7hc" Sep 9 05:31:10.091102 kubelet[2782]: E0909 05:31:10.090068 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da1135f56301ff648e6065a1758e1c2b9a957b458d6d18cdb0d56bffa2033316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nh7hc" Sep 9 05:31:10.091233 kubelet[2782]: E0909 05:31:10.090128 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nh7hc_calico-system(8282d634-60a2-4800-bb3d-6e250193e296)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nh7hc_calico-system(8282d634-60a2-4800-bb3d-6e250193e296)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da1135f56301ff648e6065a1758e1c2b9a957b458d6d18cdb0d56bffa2033316\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nh7hc" podUID="8282d634-60a2-4800-bb3d-6e250193e296" Sep 9 05:31:10.092003 kubelet[2782]: E0909 05:31:10.091944 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4775939a2dde072bea189704364af4bf0bebd7ddd83d778e131d2f75f9211b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.092185 kubelet[2782]: E0909 05:31:10.092156 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4775939a2dde072bea189704364af4bf0bebd7ddd83d778e131d2f75f9211b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b6b778d54-mgs42" Sep 9 05:31:10.092299 kubelet[2782]: E0909 05:31:10.092277 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4775939a2dde072bea189704364af4bf0bebd7ddd83d778e131d2f75f9211b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b6b778d54-mgs42" Sep 9 05:31:10.092443 kubelet[2782]: E0909 05:31:10.092409 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b6b778d54-mgs42_calico-system(bccf19d1-5313-4b91-9b56-838625d8205d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b6b778d54-mgs42_calico-system(bccf19d1-5313-4b91-9b56-838625d8205d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4775939a2dde072bea189704364af4bf0bebd7ddd83d778e131d2f75f9211b51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b6b778d54-mgs42" podUID="bccf19d1-5313-4b91-9b56-838625d8205d" Sep 9 05:31:10.111232 containerd[1581]: time="2025-09-09T05:31:10.111154098Z" level=error msg="Failed to destroy network for sandbox \"9bda0ad47069b5f4cbb719abd9333713bec75ec63fa4a564128d5470a9fa4176\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.112187 containerd[1581]: time="2025-09-09T05:31:10.112132482Z" level=error msg="Failed to destroy network for sandbox \"175c483907f7f602424ef1c1050b5d77e4e2acf79e60a96183e43d5b8d521c77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.112440 containerd[1581]: time="2025-09-09T05:31:10.112403095Z" level=error msg="Failed to destroy network for sandbox \"02d492864b1f08b68f9091d8141ef1c6db30e3fe23d5bea18fceb523336f0db7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.113636 containerd[1581]: time="2025-09-09T05:31:10.113573482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d4f99bb7-f2pkg,Uid:b10d24f7-9257-4b62-a1a3-f25bb881f8eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bda0ad47069b5f4cbb719abd9333713bec75ec63fa4a564128d5470a9fa4176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.113941 kubelet[2782]: E0909 05:31:10.113889 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bda0ad47069b5f4cbb719abd9333713bec75ec63fa4a564128d5470a9fa4176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.114031 kubelet[2782]: E0909 05:31:10.113972 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bda0ad47069b5f4cbb719abd9333713bec75ec63fa4a564128d5470a9fa4176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d4f99bb7-f2pkg" Sep 9 05:31:10.114031 kubelet[2782]: E0909 05:31:10.113999 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bda0ad47069b5f4cbb719abd9333713bec75ec63fa4a564128d5470a9fa4176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d4f99bb7-f2pkg" Sep 9 05:31:10.114152 kubelet[2782]: E0909 05:31:10.114053 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54d4f99bb7-f2pkg_calico-apiserver(b10d24f7-9257-4b62-a1a3-f25bb881f8eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54d4f99bb7-f2pkg_calico-apiserver(b10d24f7-9257-4b62-a1a3-f25bb881f8eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bda0ad47069b5f4cbb719abd9333713bec75ec63fa4a564128d5470a9fa4176\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54d4f99bb7-f2pkg" podUID="b10d24f7-9257-4b62-a1a3-f25bb881f8eb" Sep 9 05:31:10.115657 containerd[1581]: time="2025-09-09T05:31:10.115611463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7rchx,Uid:f711d909-bcc2-489e-b152-66373d005f4b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"175c483907f7f602424ef1c1050b5d77e4e2acf79e60a96183e43d5b8d521c77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.115993 kubelet[2782]: E0909 05:31:10.115933 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175c483907f7f602424ef1c1050b5d77e4e2acf79e60a96183e43d5b8d521c77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.116057 kubelet[2782]: E0909 05:31:10.116024 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175c483907f7f602424ef1c1050b5d77e4e2acf79e60a96183e43d5b8d521c77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7rchx" Sep 9 05:31:10.116095 kubelet[2782]: E0909 05:31:10.116053 2782 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"175c483907f7f602424ef1c1050b5d77e4e2acf79e60a96183e43d5b8d521c77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7rchx" Sep 9 05:31:10.116144 kubelet[2782]: E0909 05:31:10.116112 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-7rchx_calico-system(f711d909-bcc2-489e-b152-66373d005f4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-7rchx_calico-system(f711d909-bcc2-489e-b152-66373d005f4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"175c483907f7f602424ef1c1050b5d77e4e2acf79e60a96183e43d5b8d521c77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-7rchx" podUID="f711d909-bcc2-489e-b152-66373d005f4b" Sep 9 05:31:10.117781 containerd[1581]: time="2025-09-09T05:31:10.117730307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687ff77b4f-znvzn,Uid:673b51d3-fcbe-4c6d-adf0-cd744f582691,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d492864b1f08b68f9091d8141ef1c6db30e3fe23d5bea18fceb523336f0db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.117983 kubelet[2782]: E0909 05:31:10.117939 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d492864b1f08b68f9091d8141ef1c6db30e3fe23d5bea18fceb523336f0db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:31:10.118041 kubelet[2782]: E0909 05:31:10.118003 2782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d492864b1f08b68f9091d8141ef1c6db30e3fe23d5bea18fceb523336f0db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687ff77b4f-znvzn" Sep 9 05:31:10.118041 kubelet[2782]: E0909 05:31:10.118025 2782 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d492864b1f08b68f9091d8141ef1c6db30e3fe23d5bea18fceb523336f0db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687ff77b4f-znvzn" Sep 9 05:31:10.118113 kubelet[2782]: E0909 05:31:10.118089 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-687ff77b4f-znvzn_calico-apiserver(673b51d3-fcbe-4c6d-adf0-cd744f582691)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-687ff77b4f-znvzn_calico-apiserver(673b51d3-fcbe-4c6d-adf0-cd744f582691)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02d492864b1f08b68f9091d8141ef1c6db30e3fe23d5bea18fceb523336f0db7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-687ff77b4f-znvzn" podUID="673b51d3-fcbe-4c6d-adf0-cd744f582691" Sep 9 05:31:10.248149 containerd[1581]: time="2025-09-09T05:31:10.248083125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:10.249045 containerd[1581]: time="2025-09-09T05:31:10.248985315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 05:31:10.250323 containerd[1581]: time="2025-09-09T05:31:10.250267754Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:10.252938 containerd[1581]: time="2025-09-09T05:31:10.252900482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:10.253227 containerd[1581]: time="2025-09-09T05:31:10.253205490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 12.64571298s" Sep 9 05:31:10.253272 containerd[1581]: time="2025-09-09T05:31:10.253229956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 05:31:10.263740 containerd[1581]: time="2025-09-09T05:31:10.263480477Z" level=info msg="CreateContainer within sandbox \"1472964d4801ee7bb454b0c1b8b9a821f8c39c07a6a2d279edf099aff2c0c7a1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 05:31:10.276183 containerd[1581]: time="2025-09-09T05:31:10.276108772Z" level=info msg="Container 5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:10.293961 containerd[1581]: time="2025-09-09T05:31:10.293887697Z" level=info msg="CreateContainer within sandbox \"1472964d4801ee7bb454b0c1b8b9a821f8c39c07a6a2d279edf099aff2c0c7a1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132\"" Sep 9 05:31:10.294855 containerd[1581]: time="2025-09-09T05:31:10.294804504Z" level=info msg="StartContainer for \"5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132\"" Sep 9 05:31:10.326640 containerd[1581]: time="2025-09-09T05:31:10.326447204Z" level=info msg="connecting to shim 5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132" address="unix:///run/containerd/s/8218c1513f4456df77d61dbf19c5092b8d506d58c5cda89006511f9768713931" protocol=ttrpc version=3 Sep 9 05:31:10.381000 systemd[1]: Started 
cri-containerd-5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132.scope - libcontainer container 5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132. Sep 9 05:31:10.455882 containerd[1581]: time="2025-09-09T05:31:10.455803363Z" level=info msg="StartContainer for \"5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132\" returns successfully" Sep 9 05:31:10.527945 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 05:31:10.528111 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 9 05:31:10.696082 kubelet[2782]: I0909 05:31:10.695848 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qwms7" podStartSLOduration=1.5783723570000001 podStartE2EDuration="32.695813387s" podCreationTimestamp="2025-09-09 05:30:38 +0000 UTC" firstStartedPulling="2025-09-09 05:30:39.136771929 +0000 UTC m=+17.254984018" lastFinishedPulling="2025-09-09 05:31:10.254212949 +0000 UTC m=+48.372425048" observedRunningTime="2025-09-09 05:31:10.690075276 +0000 UTC m=+48.808287365" watchObservedRunningTime="2025-09-09 05:31:10.695813387 +0000 UTC m=+48.814025476" Sep 9 05:31:10.885325 containerd[1581]: time="2025-09-09T05:31:10.885256568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132\" id:\"878b841903d955b87934a09ceda860a8d934e76645e9045860bf30fcd3548a78\" pid:4126 exit_status:1 exited_at:{seconds:1757395870 nanos:884724039}" Sep 9 05:31:10.916284 systemd[1]: run-netns-cni\x2db8770b0f\x2da7fa\x2df1b2\x2da57e\x2de7cd8116c9bc.mount: Deactivated successfully. Sep 9 05:31:10.916653 systemd[1]: run-netns-cni\x2d8a2b5bff\x2d04b7\x2d4433\x2db348\x2da01dfae867f4.mount: Deactivated successfully. Sep 9 05:31:10.916730 systemd[1]: run-netns-cni\x2dc4504d93\x2d82ae\x2d8ce2\x2d7a94\x2d6ae8ae79fd04.mount: Deactivated successfully. Sep 9 05:31:10.916810 systemd[1]: run-netns-cni\x2d38cb51dd\x2d2458\x2d76df\x2d97ac\x2dc13835453753.mount: Deactivated successfully. Sep 9 05:31:10.916874 systemd[1]: run-netns-cni\x2dcf92f62f\x2df5f8\x2dc0cc\x2dae9f\x2dd50fc2cf298e.mount: Deactivated successfully. Sep 9 05:31:10.916942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992949778.mount: Deactivated successfully. Sep 9 05:31:10.955094 kubelet[2782]: I0909 05:31:10.954936 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-whisker-backend-key-pair\") pod \"ac492e22-85ae-4713-8f79-3c6dddbfcd9d\" (UID: \"ac492e22-85ae-4713-8f79-3c6dddbfcd9d\") " Sep 9 05:31:10.955094 kubelet[2782]: I0909 05:31:10.955025 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-whisker-ca-bundle\") pod \"ac492e22-85ae-4713-8f79-3c6dddbfcd9d\" (UID: \"ac492e22-85ae-4713-8f79-3c6dddbfcd9d\") " Sep 9 05:31:10.956016 kubelet[2782]: I0909 05:31:10.955973 2782 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ac492e22-85ae-4713-8f79-3c6dddbfcd9d" (UID: "ac492e22-85ae-4713-8f79-3c6dddbfcd9d"). InnerVolumeSpecName "whisker-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:31:10.956091 kubelet[2782]: I0909 05:31:10.956050 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4ghg\" (UniqueName: \"kubernetes.io/projected/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-kube-api-access-w4ghg\") pod \"ac492e22-85ae-4713-8f79-3c6dddbfcd9d\" (UID: \"ac492e22-85ae-4713-8f79-3c6dddbfcd9d\") " Sep 9 05:31:10.956156 kubelet[2782]: I0909 05:31:10.956128 2782 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 05:31:10.962215 kubelet[2782]: I0909 05:31:10.960803 2782 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ac492e22-85ae-4713-8f79-3c6dddbfcd9d" (UID: "ac492e22-85ae-4713-8f79-3c6dddbfcd9d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:31:10.965577 kubelet[2782]: I0909 05:31:10.963484 2782 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-kube-api-access-w4ghg" (OuterVolumeSpecName: "kube-api-access-w4ghg") pod "ac492e22-85ae-4713-8f79-3c6dddbfcd9d" (UID: "ac492e22-85ae-4713-8f79-3c6dddbfcd9d"). InnerVolumeSpecName "kube-api-access-w4ghg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:31:10.965117 systemd[1]: var-lib-kubelet-pods-ac492e22\x2d85ae\x2d4713\x2d8f79\x2d3c6dddbfcd9d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 05:31:10.970833 systemd[1]: var-lib-kubelet-pods-ac492e22\x2d85ae\x2d4713\x2d8f79\x2d3c6dddbfcd9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw4ghg.mount: Deactivated successfully. Sep 9 05:31:11.056622 kubelet[2782]: I0909 05:31:11.056567 2782 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4ghg\" (UniqueName: \"kubernetes.io/projected/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-kube-api-access-w4ghg\") on node \"localhost\" DevicePath \"\"" Sep 9 05:31:11.056622 kubelet[2782]: I0909 05:31:11.056612 2782 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ac492e22-85ae-4713-8f79-3c6dddbfcd9d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 05:31:11.244778 kubelet[2782]: E0909 05:31:11.244589 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:11.245296 containerd[1581]: time="2025-09-09T05:31:11.245103122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pr9cs,Uid:c163eb63-f9f8-4a04-b35c-215954baf990,Namespace:kube-system,Attempt:0,}" Sep 9 05:31:11.577325 systemd-networkd[1505]: cali6ae24b84286: Link UP Sep 9 05:31:11.578111 systemd-networkd[1505]: cali6ae24b84286: Gained carrier Sep 9 05:31:11.640947 systemd[1]: Removed slice kubepods-besteffort-podac492e22_85ae_4713_8f79_3c6dddbfcd9d.slice - libcontainer container kubepods-besteffort-podac492e22_85ae_4713_8f79_3c6dddbfcd9d.slice. 
Sep 9 05:31:11.668579 containerd[1581]: 2025-09-09 05:31:11.317 [INFO][4157] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 05:31:11.668579 containerd[1581]: 2025-09-09 05:31:11.413 [INFO][4157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0 coredns-668d6bf9bc- kube-system c163eb63-f9f8-4a04-b35c-215954baf990 852 0 2025-09-09 05:30:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-pr9cs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ae24b84286 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pr9cs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pr9cs-" Sep 9 05:31:11.668579 containerd[1581]: 2025-09-09 05:31:11.413 [INFO][4157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pr9cs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0" Sep 9 05:31:11.668579 containerd[1581]: 2025-09-09 05:31:11.477 [INFO][4170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" HandleID="k8s-pod-network.7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Workload="localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0" Sep 9 05:31:11.668874 containerd[1581]: 2025-09-09 05:31:11.477 [INFO][4170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" HandleID="k8s-pod-network.7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Workload="localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-pr9cs", "timestamp":"2025-09-09 05:31:11.477003655 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:31:11.668874 containerd[1581]: 2025-09-09 05:31:11.477 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:11.668874 containerd[1581]: 2025-09-09 05:31:11.477 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:31:11.668874 containerd[1581]: 2025-09-09 05:31:11.478 [INFO][4170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:31:11.668874 containerd[1581]: 2025-09-09 05:31:11.486 [INFO][4170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" host="localhost" Sep 9 05:31:11.668874 containerd[1581]: 2025-09-09 05:31:11.493 [INFO][4170] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:31:11.668874 containerd[1581]: 2025-09-09 05:31:11.498 [INFO][4170] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:31:11.668874 containerd[1581]: 2025-09-09 05:31:11.500 [INFO][4170] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:11.668874 containerd[1581]: 2025-09-09 05:31:11.502 [INFO][4170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:11.668874 containerd[1581]: 2025-09-09 05:31:11.502 [INFO][4170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" host="localhost" Sep 9 05:31:11.669134 containerd[1581]: 2025-09-09 05:31:11.504 [INFO][4170] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1 Sep 9 05:31:11.669134 containerd[1581]: 2025-09-09 05:31:11.523 [INFO][4170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" host="localhost" Sep 9 05:31:11.669134 containerd[1581]: 2025-09-09 05:31:11.565 [INFO][4170] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" host="localhost" Sep 9 05:31:11.669134 containerd[1581]: 2025-09-09 05:31:11.565 [INFO][4170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" host="localhost" Sep 9 05:31:11.669134 containerd[1581]: 2025-09-09 05:31:11.565 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:31:11.669134 containerd[1581]: 2025-09-09 05:31:11.565 [INFO][4170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" HandleID="k8s-pod-network.7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Workload="localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0" Sep 9 05:31:11.669266 containerd[1581]: 2025-09-09 05:31:11.568 [INFO][4157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pr9cs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c163eb63-f9f8-4a04-b35c-215954baf990", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-pr9cs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ae24b84286", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:11.669351 containerd[1581]: 2025-09-09 05:31:11.569 [INFO][4157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pr9cs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0" Sep 9 05:31:11.669351 containerd[1581]: 2025-09-09 05:31:11.569 [INFO][4157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ae24b84286 ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pr9cs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0" Sep 9 05:31:11.669351 containerd[1581]: 2025-09-09 05:31:11.578 [INFO][4157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pr9cs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0" Sep 9 05:31:11.669419 
containerd[1581]: 2025-09-09 05:31:11.580 [INFO][4157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pr9cs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c163eb63-f9f8-4a04-b35c-215954baf990", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1", Pod:"coredns-668d6bf9bc-pr9cs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ae24b84286", MAC:"a6:e8:47:50:72:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:11.669419 containerd[1581]: 2025-09-09 05:31:11.662 [INFO][4157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-pr9cs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pr9cs-eth0" Sep 9 05:31:11.727107 containerd[1581]: time="2025-09-09T05:31:11.727053493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132\" id:\"e2a9566d2d69a86372b465c47613711d30c26ac0ae6ea21e7c04835ec4674439\" pid:4193 exit_status:1 exited_at:{seconds:1757395871 nanos:726702157}" Sep 9 05:31:11.896258 containerd[1581]: time="2025-09-09T05:31:11.896084761Z" level=info msg="connecting to shim 7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1" address="unix:///run/containerd/s/b8adf79120cb3f6c92dfea45528f7782dcd6509bf4c220356c38f08a7cc880b1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:31:11.925806 systemd[1]: Started cri-containerd-7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1.scope - libcontainer container 7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1. 
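
Every IPAM transaction in this capture follows the same shape: take the host-wide lock, confirm the host's affinity to block 192.168.88.128/26, assign the lowest free address, and write the block back to claim it. A simplified in-memory model of that assignment step; real Calico IPAM persists the block and a handle in the datastore, and the assumption here that the node's own vxlan.calico tunnel address occupies the first slot is only inferred from pod assignments starting at .129:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block models one host-affine /26: a base address plus a 64-slot bitmap.
type block struct {
	base netip.Addr
	used [64]bool
}

// assign returns the lowest free address in the block, marking it used.
func (b *block) assign() (netip.Addr, bool) {
	for i := range b.used {
		if b.used[i] {
			continue
		}
		b.used[i] = true
		addr := b.base
		for j := 0; j < i; j++ {
			addr = addr.Next()
		}
		return addr, true
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{base: netip.MustParseAddr("192.168.88.128")}
	b.used[0] = true // assumed taken by the node's tunnel interface
	for _, pod := range []string{"coredns-pr9cs", "apiserver-lgdcj", "coredns-jggbv", "whisker-7zbmk"} {
		ip, ok := b.assign()
		if !ok {
			panic("block exhausted")
		}
		// Prints .129, .130, .131, .132: the order seen in the log.
		fmt.Printf("%-16s -> %s/26\n", pod, ip)
	}
}
```
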
Sep 9 05:31:11.942191 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:31:12.120089 containerd[1581]: time="2025-09-09T05:31:12.119701598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pr9cs,Uid:c163eb63-f9f8-4a04-b35c-215954baf990,Namespace:kube-system,Attempt:0,} returns sandbox id \"7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1\"" Sep 9 05:31:12.130900 kubelet[2782]: E0909 05:31:12.128851 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:12.133496 containerd[1581]: time="2025-09-09T05:31:12.133442184Z" level=info msg="CreateContainer within sandbox \"7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:31:12.153802 systemd[1]: Created slice kubepods-besteffort-pod92bbac85_5175_4ce1_a6c0_2785705446e6.slice - libcontainer container kubepods-besteffort-pod92bbac85_5175_4ce1_a6c0_2785705446e6.slice. Sep 9 05:31:12.245432 kubelet[2782]: E0909 05:31:12.245372 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:12.245994 containerd[1581]: time="2025-09-09T05:31:12.245471376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d4f99bb7-lgdcj,Uid:7e14dcbb-eeec-4f50-b6c8-5afcd301051c,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:31:12.245994 containerd[1581]: time="2025-09-09T05:31:12.245799146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jggbv,Uid:b7288b17-f0e8-4160-8e70-36c1a2ee883e,Namespace:kube-system,Attempt:0,}" Sep 9 05:31:12.264888 kubelet[2782]: I0909 05:31:12.264816 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92bbac85-5175-4ce1-a6c0-2785705446e6-whisker-ca-bundle\") pod \"whisker-5b87c89b59-7zbmk\" (UID: \"92bbac85-5175-4ce1-a6c0-2785705446e6\") " pod="calico-system/whisker-5b87c89b59-7zbmk" Sep 9 05:31:12.264974 kubelet[2782]: I0909 05:31:12.264898 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92bbac85-5175-4ce1-a6c0-2785705446e6-whisker-backend-key-pair\") pod \"whisker-5b87c89b59-7zbmk\" (UID: \"92bbac85-5175-4ce1-a6c0-2785705446e6\") " pod="calico-system/whisker-5b87c89b59-7zbmk" Sep 9 05:31:12.264974 kubelet[2782]: I0909 05:31:12.264937 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfhb\" (UniqueName: \"kubernetes.io/projected/92bbac85-5175-4ce1-a6c0-2785705446e6-kube-api-access-kdfhb\") pod \"whisker-5b87c89b59-7zbmk\" (UID: \"92bbac85-5175-4ce1-a6c0-2785705446e6\") " pod="calico-system/whisker-5b87c89b59-7zbmk" Sep 9 05:31:12.376038 kubelet[2782]: I0909 05:31:12.375764 2782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac492e22-85ae-4713-8f79-3c6dddbfcd9d" path="/var/lib/kubelet/pods/ac492e22-85ae-4713-8f79-3c6dddbfcd9d/volumes" Sep 9 05:31:13.034823 systemd-networkd[1505]: cali6ae24b84286: Gained IPv6LL Sep 9 05:31:13.686182 containerd[1581]: time="2025-09-09T05:31:13.685957777Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b87c89b59-7zbmk,Uid:92bbac85-5175-4ce1-a6c0-2785705446e6,Namespace:calico-system,Attempt:0,}" Sep 9 05:31:13.923358 systemd-networkd[1505]: vxlan.calico: Link UP Sep 9 05:31:13.923370 systemd-networkd[1505]: vxlan.calico: Gained carrier Sep 9 05:31:15.042140 systemd-networkd[1505]: cali69cb70c762b: Link UP Sep 9 05:31:15.043364 systemd-networkd[1505]: cali69cb70c762b: Gained carrier Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.751 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0 calico-apiserver-54d4f99bb7- calico-apiserver 7e14dcbb-eeec-4f50-b6c8-5afcd301051c 854 0 2025-09-09 05:30:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54d4f99bb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54d4f99bb7-lgdcj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali69cb70c762b [] [] }} ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-lgdcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.761 [INFO][4462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-lgdcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.817 [INFO][4477] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.817 [INFO][4477] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54d4f99bb7-lgdcj", "timestamp":"2025-09-09 05:31:14.817189386 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.817 [INFO][4477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.817 [INFO][4477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.817 [INFO][4477] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.824 [INFO][4477] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" host="localhost" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.933 [INFO][4477] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.938 [INFO][4477] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.940 [INFO][4477] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.942 [INFO][4477] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.942 [INFO][4477] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" host="localhost" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.943 [INFO][4477] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926 Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:14.976 [INFO][4477] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" host="localhost" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:15.035 [INFO][4477] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" host="localhost" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:15.035 [INFO][4477] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" host="localhost" Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:15.035 [INFO][4477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:31:15.151697 containerd[1581]: 2025-09-09 05:31:15.035 [INFO][4477] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:31:15.153050 containerd[1581]: 2025-09-09 05:31:15.039 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-lgdcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0", GenerateName:"calico-apiserver-54d4f99bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e14dcbb-eeec-4f50-b6c8-5afcd301051c", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d4f99bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54d4f99bb7-lgdcj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69cb70c762b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:15.153050 containerd[1581]: 2025-09-09 05:31:15.039 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-lgdcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:31:15.153050 containerd[1581]: 2025-09-09 05:31:15.039 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69cb70c762b ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-lgdcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:31:15.153050 containerd[1581]: 2025-09-09 05:31:15.044 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-lgdcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:31:15.153050 containerd[1581]: 2025-09-09 05:31:15.044 [INFO][4462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-lgdcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0", GenerateName:"calico-apiserver-54d4f99bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e14dcbb-eeec-4f50-b6c8-5afcd301051c", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d4f99bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926", Pod:"calico-apiserver-54d4f99bb7-lgdcj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69cb70c762b", MAC:"de:00:f7:ea:02:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:15.153050 containerd[1581]: 2025-09-09 05:31:15.140 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-lgdcj" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:31:15.213959 systemd-networkd[1505]: cali080f7adb0ab: Link UP Sep 9 05:31:15.215440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849689955.mount: Deactivated successfully. Sep 9 05:31:15.216825 systemd-networkd[1505]: cali080f7adb0ab: Gained carrier Sep 9 05:31:15.220239 containerd[1581]: time="2025-09-09T05:31:15.220181536Z" level=info msg="Container c70848279d5f833a811d15a55b7a49e08fd27614fe9a0fe2588c7e23410105af: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:15.221991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477850286.mount: Deactivated successfully. 
Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:14.979 [INFO][4485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--jggbv-eth0 coredns-668d6bf9bc- kube-system b7288b17-f0e8-4160-8e70-36c1a2ee883e 861 0 2025-09-09 05:30:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-jggbv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali080f7adb0ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Namespace="kube-system" Pod="coredns-668d6bf9bc-jggbv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jggbv-" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:14.979 [INFO][4485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Namespace="kube-system" Pod="coredns-668d6bf9bc-jggbv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jggbv-eth0" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.066 [INFO][4504] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" HandleID="k8s-pod-network.997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Workload="localhost-k8s-coredns--668d6bf9bc--jggbv-eth0" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.066 [INFO][4504] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" HandleID="k8s-pod-network.997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Workload="localhost-k8s-coredns--668d6bf9bc--jggbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052fd60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-jggbv", "timestamp":"2025-09-09 05:31:15.066072919 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.066 [INFO][4504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.066 [INFO][4504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.066 [INFO][4504] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.145 [INFO][4504] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" host="localhost" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.150 [INFO][4504] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.156 [INFO][4504] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.159 [INFO][4504] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.162 [INFO][4504] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.162 [INFO][4504] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" host="localhost" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.164 [INFO][4504] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.190 [INFO][4504] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" host="localhost" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.198 [INFO][4504] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" host="localhost" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.198 [INFO][4504] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" host="localhost" Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.198 [INFO][4504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:31:15.305890 containerd[1581]: 2025-09-09 05:31:15.198 [INFO][4504] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" HandleID="k8s-pod-network.997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Workload="localhost-k8s-coredns--668d6bf9bc--jggbv-eth0" Sep 9 05:31:15.306928 containerd[1581]: 2025-09-09 05:31:15.201 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Namespace="kube-system" Pod="coredns-668d6bf9bc-jggbv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jggbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jggbv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b7288b17-f0e8-4160-8e70-36c1a2ee883e", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-jggbv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali080f7adb0ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:15.306928 containerd[1581]: 2025-09-09 05:31:15.201 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Namespace="kube-system" Pod="coredns-668d6bf9bc-jggbv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jggbv-eth0" Sep 9 05:31:15.306928 containerd[1581]: 2025-09-09 05:31:15.201 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali080f7adb0ab ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Namespace="kube-system" Pod="coredns-668d6bf9bc-jggbv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jggbv-eth0" Sep 9 05:31:15.306928 containerd[1581]: 2025-09-09 05:31:15.217 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Namespace="kube-system" Pod="coredns-668d6bf9bc-jggbv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jggbv-eth0" Sep 9 05:31:15.306928 
containerd[1581]: 2025-09-09 05:31:15.218 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Namespace="kube-system" Pod="coredns-668d6bf9bc-jggbv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jggbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jggbv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b7288b17-f0e8-4160-8e70-36c1a2ee883e", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c", Pod:"coredns-668d6bf9bc-jggbv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali080f7adb0ab", MAC:"ea:74:cf:39:7e:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:15.306928 containerd[1581]: 2025-09-09 05:31:15.300 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" Namespace="kube-system" Pod="coredns-668d6bf9bc-jggbv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jggbv-eth0" Sep 9 05:31:15.473879 systemd-networkd[1505]: caliee459c09277: Link UP Sep 9 05:31:15.474823 systemd-networkd[1505]: caliee459c09277: Gained carrier Sep 9 05:31:15.490523 containerd[1581]: time="2025-09-09T05:31:15.490458736Z" level=info msg="CreateContainer within sandbox \"7038623bc6417b661d3b1a8318bcaa38f9f0371de3efc6a60de1a56661ce80f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c70848279d5f833a811d15a55b7a49e08fd27614fe9a0fe2588c7e23410105af\"" Sep 9 05:31:15.495579 containerd[1581]: time="2025-09-09T05:31:15.494729656Z" level=info msg="connecting to shim 9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" address="unix:///run/containerd/s/06c1a9c1f69c1d12e7b7b379a64b1ef0ff38f895941fe7ef321a8a80d4f3d502" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:31:15.537570 containerd[1581]: time="2025-09-09T05:31:15.536029598Z" level=info msg="StartContainer for \"c70848279d5f833a811d15a55b7a49e08fd27614fe9a0fe2588c7e23410105af\"" Sep 9 05:31:15.543571 containerd[1581]: 
time="2025-09-09T05:31:15.541950981Z" level=info msg="connecting to shim c70848279d5f833a811d15a55b7a49e08fd27614fe9a0fe2588c7e23410105af" address="unix:///run/containerd/s/b8adf79120cb3f6c92dfea45528f7782dcd6509bf4c220356c38f08a7cc880b1" protocol=ttrpc version=3 Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.144 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b87c89b59--7zbmk-eth0 whisker-5b87c89b59- calico-system 92bbac85-5175-4ce1-a6c0-2785705446e6 952 0 2025-09-09 05:31:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b87c89b59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b87c89b59-7zbmk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliee459c09277 [] [] }} ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Namespace="calico-system" Pod="whisker-5b87c89b59-7zbmk" WorkloadEndpoint="localhost-k8s-whisker--5b87c89b59--7zbmk-" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.146 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Namespace="calico-system" Pod="whisker-5b87c89b59-7zbmk" WorkloadEndpoint="localhost-k8s-whisker--5b87c89b59--7zbmk-eth0" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.186 [INFO][4530] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" HandleID="k8s-pod-network.3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Workload="localhost-k8s-whisker--5b87c89b59--7zbmk-eth0" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.186 [INFO][4530] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" HandleID="k8s-pod-network.3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Workload="localhost-k8s-whisker--5b87c89b59--7zbmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024e4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b87c89b59-7zbmk", "timestamp":"2025-09-09 05:31:15.186574658 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.186 [INFO][4530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.198 [INFO][4530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.198 [INFO][4530] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.301 [INFO][4530] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" host="localhost" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.308 [INFO][4530] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.313 [INFO][4530] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.317 [INFO][4530] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.321 [INFO][4530] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.321 [INFO][4530] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" host="localhost" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.323 [INFO][4530] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0 Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.388 [INFO][4530] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" host="localhost" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.467 [INFO][4530] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" host="localhost" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.467 [INFO][4530] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" host="localhost" Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.467 [INFO][4530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:31:15.545771 containerd[1581]: 2025-09-09 05:31:15.467 [INFO][4530] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" HandleID="k8s-pod-network.3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Workload="localhost-k8s-whisker--5b87c89b59--7zbmk-eth0" Sep 9 05:31:15.546324 containerd[1581]: 2025-09-09 05:31:15.470 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Namespace="calico-system" Pod="whisker-5b87c89b59-7zbmk" WorkloadEndpoint="localhost-k8s-whisker--5b87c89b59--7zbmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b87c89b59--7zbmk-eth0", GenerateName:"whisker-5b87c89b59-", Namespace:"calico-system", SelfLink:"", UID:"92bbac85-5175-4ce1-a6c0-2785705446e6", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b87c89b59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b87c89b59-7zbmk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliee459c09277", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:15.546324 containerd[1581]: 2025-09-09 05:31:15.470 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Namespace="calico-system" Pod="whisker-5b87c89b59-7zbmk" WorkloadEndpoint="localhost-k8s-whisker--5b87c89b59--7zbmk-eth0" Sep 9 05:31:15.546324 containerd[1581]: 2025-09-09 05:31:15.470 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee459c09277 ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Namespace="calico-system" Pod="whisker-5b87c89b59-7zbmk" WorkloadEndpoint="localhost-k8s-whisker--5b87c89b59--7zbmk-eth0" Sep 9 05:31:15.546324 containerd[1581]: 2025-09-09 05:31:15.475 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Namespace="calico-system" Pod="whisker-5b87c89b59-7zbmk" WorkloadEndpoint="localhost-k8s-whisker--5b87c89b59--7zbmk-eth0" Sep 9 05:31:15.546324 containerd[1581]: 2025-09-09 05:31:15.475 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Namespace="calico-system" Pod="whisker-5b87c89b59-7zbmk" WorkloadEndpoint="localhost-k8s-whisker--5b87c89b59--7zbmk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b87c89b59--7zbmk-eth0", GenerateName:"whisker-5b87c89b59-", Namespace:"calico-system", SelfLink:"", UID:"92bbac85-5175-4ce1-a6c0-2785705446e6", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 31, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b87c89b59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0", Pod:"whisker-5b87c89b59-7zbmk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliee459c09277", MAC:"f2:16:6d:6b:b2:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:15.546324 containerd[1581]: 2025-09-09 05:31:15.502 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" Namespace="calico-system" Pod="whisker-5b87c89b59-7zbmk" WorkloadEndpoint="localhost-k8s-whisker--5b87c89b59--7zbmk-eth0" Sep 9 05:31:15.568494 containerd[1581]: time="2025-09-09T05:31:15.568172647Z" level=info msg="connecting to shim 997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c" address="unix:///run/containerd/s/e2c1c837104829617f1bc5ce5c69733e3014807cfef2829d3bc600c9851ba727" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:31:15.580237 systemd[1]: Started cri-containerd-9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926.scope - libcontainer container 9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926. Sep 9 05:31:15.611673 containerd[1581]: time="2025-09-09T05:31:15.611576861Z" level=info msg="connecting to shim 3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0" address="unix:///run/containerd/s/228792618c04fa57f0e5cd9fc0cf7c21d1e17ce473d40b0fecbf6eacf42d07ea" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:31:15.614921 systemd[1]: Started cri-containerd-c70848279d5f833a811d15a55b7a49e08fd27614fe9a0fe2588c7e23410105af.scope - libcontainer container c70848279d5f833a811d15a55b7a49e08fd27614fe9a0fe2588c7e23410105af. Sep 9 05:31:15.628615 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:31:15.642024 systemd[1]: Started cri-containerd-997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c.scope - libcontainer container 997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c. Sep 9 05:31:15.648465 systemd[1]: Started cri-containerd-3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0.scope - libcontainer container 3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0. 
Sep 9 05:31:15.671042 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:31:15.684764 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:31:15.695050 containerd[1581]: time="2025-09-09T05:31:15.695002389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d4f99bb7-lgdcj,Uid:7e14dcbb-eeec-4f50-b6c8-5afcd301051c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\"" Sep 9 05:31:15.702300 containerd[1581]: time="2025-09-09T05:31:15.702248327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:31:15.713805 containerd[1581]: time="2025-09-09T05:31:15.713747811Z" level=info msg="StartContainer for \"c70848279d5f833a811d15a55b7a49e08fd27614fe9a0fe2588c7e23410105af\" returns successfully" Sep 9 05:31:15.723499 containerd[1581]: time="2025-09-09T05:31:15.723424519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jggbv,Uid:b7288b17-f0e8-4160-8e70-36c1a2ee883e,Namespace:kube-system,Attempt:0,} returns sandbox id \"997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c\"" Sep 9 05:31:15.724398 kubelet[2782]: E0909 05:31:15.724347 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:15.730536 containerd[1581]: time="2025-09-09T05:31:15.730483602Z" level=info msg="CreateContainer within sandbox \"997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:31:15.751721 containerd[1581]: time="2025-09-09T05:31:15.751632223Z" level=info msg="Container 0a784b3a3cabbbe2300435ac829c334a74071885a8bdf3e37944dd8460c21b5e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:15.755437 containerd[1581]: time="2025-09-09T05:31:15.755388179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b87c89b59-7zbmk,Uid:92bbac85-5175-4ce1-a6c0-2785705446e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0\"" Sep 9 05:31:15.772347 containerd[1581]: time="2025-09-09T05:31:15.772295304Z" level=info msg="CreateContainer within sandbox \"997efbe297a6c2f4a9b8c689c1d0a4593e532e41ae9ffae48bffaf503c78496c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a784b3a3cabbbe2300435ac829c334a74071885a8bdf3e37944dd8460c21b5e\"" Sep 9 05:31:15.773742 containerd[1581]: time="2025-09-09T05:31:15.773702005Z" level=info msg="StartContainer for \"0a784b3a3cabbbe2300435ac829c334a74071885a8bdf3e37944dd8460c21b5e\"" Sep 9 05:31:15.775253 containerd[1581]: time="2025-09-09T05:31:15.775221710Z" level=info msg="connecting to shim 0a784b3a3cabbbe2300435ac829c334a74071885a8bdf3e37944dd8460c21b5e" address="unix:///run/containerd/s/e2c1c837104829617f1bc5ce5c69733e3014807cfef2829d3bc600c9851ba727" protocol=ttrpc version=3 Sep 9 05:31:15.805761 systemd[1]: Started cri-containerd-0a784b3a3cabbbe2300435ac829c334a74071885a8bdf3e37944dd8460c21b5e.scope - libcontainer container 0a784b3a3cabbbe2300435ac829c334a74071885a8bdf3e37944dd8460c21b5e. 
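Two of the "connecting to shim" messages above point at the same unix socket (/run/containerd/s/e2c1c83...), once for the coredns sandbox 997efbe... and later for the coredns container 0a784...: a container's task is managed through its sandbox's shim rather than a shim of its own. A minimal sketch of the dial step implied by address="unix://..." protocol=ttrpc, assuming the github.com/containerd/ttrpc client package:

    package main

    import (
        "net"

        "github.com/containerd/ttrpc"
    )

    func main() {
        // Socket path copied from the log; error handling kept minimal.
        conn, err := net.Dial("unix", "/run/containerd/s/e2c1c837104829617f1bc5ce5c69733e3014807cfef2829d3bc600c9851ba727")
        if err != nil {
            panic(err)
        }
        // RPCs to the shim's task service are multiplexed over this client.
        client := ttrpc.NewClient(conn)
        defer client.Close()
    }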
Sep 9 05:31:15.914881 systemd-networkd[1505]: vxlan.calico: Gained IPv6LL Sep 9 05:31:16.106107 containerd[1581]: time="2025-09-09T05:31:16.106018880Z" level=info msg="StartContainer for \"0a784b3a3cabbbe2300435ac829c334a74071885a8bdf3e37944dd8460c21b5e\" returns successfully" Sep 9 05:31:16.426842 systemd-networkd[1505]: cali080f7adb0ab: Gained IPv6LL Sep 9 05:31:16.675385 kubelet[2782]: E0909 05:31:16.675313 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:16.675643 kubelet[2782]: E0909 05:31:16.675430 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:16.682859 systemd-networkd[1505]: cali69cb70c762b: Gained IPv6LL Sep 9 05:31:16.746909 systemd-networkd[1505]: caliee459c09277: Gained IPv6LL Sep 9 05:31:17.674162 kubelet[2782]: E0909 05:31:17.673882 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:17.674162 kubelet[2782]: E0909 05:31:17.674045 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:18.363609 kubelet[2782]: I0909 05:31:18.363259 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pr9cs" podStartSLOduration=52.363235147 podStartE2EDuration="52.363235147s" podCreationTimestamp="2025-09-09 05:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:31:17.434330014 +0000 UTC m=+55.552542104" watchObservedRunningTime="2025-09-09 05:31:18.363235147 +0000 UTC m=+56.481447236" Sep 9 05:31:18.363842 kubelet[2782]: I0909 05:31:18.363647 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jggbv" podStartSLOduration=52.363639341 podStartE2EDuration="52.363639341s" podCreationTimestamp="2025-09-09 05:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:31:18.363459992 +0000 UTC m=+56.481672101" watchObservedRunningTime="2025-09-09 05:31:18.363639341 +0000 UTC m=+56.481851430" Sep 9 05:31:18.676086 kubelet[2782]: E0909 05:31:18.675945 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:18.676086 kubelet[2782]: E0909 05:31:18.675963 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:21.245333 containerd[1581]: time="2025-09-09T05:31:21.245266086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687ff77b4f-znvzn,Uid:673b51d3-fcbe-4c6d-adf0-cd744f582691,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:31:21.627768 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:37196.service - OpenSSH per-connection server daemon (10.0.0.1:37196). 
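The recurring kubelet dns.go:153 warnings above come from the three-nameserver cap: resolv.conf consumers such as glibc honor at most three nameserver entries, so kubelet drops the rest and logs the line it actually applied (here "1.1.1.1 1.0.0.1 8.8.8.8"). An illustrative sketch of that trimming, with our own names rather than kubelet's:

    package main

    import "fmt"

    // glibc's resolver reads at most three nameservers from resolv.conf.
    const maxNameservers = 3

    func applyNameserverLimit(servers []string) []string {
        if len(servers) > maxNameservers {
            servers = servers[:maxNameservers] // extras are silently dropped
        }
        return servers
    }

    func main() {
        fmt.Println(applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"}))
        // [1.1.1.1 1.0.0.1 8.8.8.8], matching the applied line in the log
    }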
Sep 9 05:31:21.844616 sshd[4802]: Accepted publickey for core from 10.0.0.1 port 37196 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:31:21.870022 sshd-session[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:31:21.880091 systemd-logind[1560]: New session 8 of user core. Sep 9 05:31:21.894741 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:31:22.246082 containerd[1581]: time="2025-09-09T05:31:22.245936007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d4f99bb7-f2pkg,Uid:b10d24f7-9257-4b62-a1a3-f25bb881f8eb,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:31:22.246926 containerd[1581]: time="2025-09-09T05:31:22.246266091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nh7hc,Uid:8282d634-60a2-4800-bb3d-6e250193e296,Namespace:calico-system,Attempt:0,}" Sep 9 05:31:22.586212 sshd[4807]: Connection closed by 10.0.0.1 port 37196 Sep 9 05:31:22.586687 sshd-session[4802]: pam_unix(sshd:session): session closed for user core Sep 9 05:31:22.591154 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:37196.service: Deactivated successfully. Sep 9 05:31:22.593267 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:31:22.594131 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:31:22.595326 systemd-logind[1560]: Removed session 8. Sep 9 05:31:23.242615 systemd-networkd[1505]: cali341a0674104: Link UP Sep 9 05:31:23.243776 systemd-networkd[1505]: cali341a0674104: Gained carrier Sep 9 05:31:23.292357 containerd[1581]: time="2025-09-09T05:31:23.292290637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b6b778d54-mgs42,Uid:bccf19d1-5313-4b91-9b56-838625d8205d,Namespace:calico-system,Attempt:0,}" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:22.541 [INFO][4792] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0 calico-apiserver-687ff77b4f- calico-apiserver 673b51d3-fcbe-4c6d-adf0-cd744f582691 859 0 2025-09-09 05:30:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:687ff77b4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-687ff77b4f-znvzn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali341a0674104 [] [] }} ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-znvzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:22.541 [INFO][4792] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-znvzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:22.860 [INFO][4863] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" HandleID="k8s-pod-network.387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" 
Workload="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:22.860 [INFO][4863] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" HandleID="k8s-pod-network.387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" Workload="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-687ff77b4f-znvzn", "timestamp":"2025-09-09 05:31:22.860649117 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:22.860 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:22.860 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:22.860 [INFO][4863] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:22.871 [INFO][4863] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" host="localhost" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.047 [INFO][4863] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.077 [INFO][4863] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.118 [INFO][4863] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.121 [INFO][4863] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.121 [INFO][4863] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" host="localhost" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.136 [INFO][4863] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58 Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.216 [INFO][4863] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" host="localhost" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.235 [INFO][4863] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" host="localhost" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.235 [INFO][4863] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" host="localhost" Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.235 [INFO][4863] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:31:23.309281 containerd[1581]: 2025-09-09 05:31:23.235 [INFO][4863] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" HandleID="k8s-pod-network.387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" Workload="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0" Sep 9 05:31:23.313089 containerd[1581]: 2025-09-09 05:31:23.239 [INFO][4792] cni-plugin/k8s.go 418: Populated endpoint ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-znvzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0", GenerateName:"calico-apiserver-687ff77b4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"673b51d3-fcbe-4c6d-adf0-cd744f582691", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687ff77b4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-687ff77b4f-znvzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali341a0674104", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:23.313089 containerd[1581]: 2025-09-09 05:31:23.239 [INFO][4792] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-znvzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0" Sep 9 05:31:23.313089 containerd[1581]: 2025-09-09 05:31:23.239 [INFO][4792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali341a0674104 ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-znvzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0" Sep 9 05:31:23.313089 containerd[1581]: 2025-09-09 05:31:23.243 [INFO][4792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-znvzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0" Sep 9 05:31:23.313089 containerd[1581]: 2025-09-09 05:31:23.247 [INFO][4792] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-znvzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0", GenerateName:"calico-apiserver-687ff77b4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"673b51d3-fcbe-4c6d-adf0-cd744f582691", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687ff77b4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58", Pod:"calico-apiserver-687ff77b4f-znvzn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali341a0674104", MAC:"9e:70:20:9e:9c:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:23.313089 containerd[1581]: 2025-09-09 05:31:23.305 [INFO][4792] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-znvzn" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--znvzn-eth0" Sep 9 05:31:23.417931 systemd-networkd[1505]: cali42a93a1328b: Link UP Sep 9 05:31:23.419060 systemd-networkd[1505]: cali42a93a1328b: Gained carrier Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:22.818 [INFO][4849] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nh7hc-eth0 csi-node-driver- calico-system 8282d634-60a2-4800-bb3d-6e250193e296 713 0 2025-09-09 05:30:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nh7hc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali42a93a1328b [] [] }} ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Namespace="calico-system" Pod="csi-node-driver-nh7hc" WorkloadEndpoint="localhost-k8s-csi--node--driver--nh7hc-" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:22.819 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Namespace="calico-system" 
Pod="csi-node-driver-nh7hc" WorkloadEndpoint="localhost-k8s-csi--node--driver--nh7hc-eth0" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:22.884 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" HandleID="k8s-pod-network.ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Workload="localhost-k8s-csi--node--driver--nh7hc-eth0" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:22.885 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" HandleID="k8s-pod-network.ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Workload="localhost-k8s-csi--node--driver--nh7hc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000357630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nh7hc", "timestamp":"2025-09-09 05:31:22.884947198 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:22.885 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.235 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.236 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.246 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" host="localhost" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.309 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.354 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.357 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.360 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.360 [INFO][4871] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" host="localhost" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.362 [INFO][4871] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464 Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.374 [INFO][4871] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" host="localhost" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.410 [INFO][4871] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" host="localhost" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.410 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" host="localhost" Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.411 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:31:23.665006 containerd[1581]: 2025-09-09 05:31:23.411 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" HandleID="k8s-pod-network.ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Workload="localhost-k8s-csi--node--driver--nh7hc-eth0" Sep 9 05:31:23.665830 containerd[1581]: 2025-09-09 05:31:23.415 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Namespace="calico-system" Pod="csi-node-driver-nh7hc" WorkloadEndpoint="localhost-k8s-csi--node--driver--nh7hc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nh7hc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8282d634-60a2-4800-bb3d-6e250193e296", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nh7hc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali42a93a1328b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:23.665830 containerd[1581]: 2025-09-09 05:31:23.415 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Namespace="calico-system" Pod="csi-node-driver-nh7hc" WorkloadEndpoint="localhost-k8s-csi--node--driver--nh7hc-eth0" Sep 9 05:31:23.665830 containerd[1581]: 2025-09-09 05:31:23.415 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42a93a1328b ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Namespace="calico-system" Pod="csi-node-driver-nh7hc" WorkloadEndpoint="localhost-k8s-csi--node--driver--nh7hc-eth0" Sep 9 05:31:23.665830 containerd[1581]: 2025-09-09 05:31:23.419 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Namespace="calico-system" Pod="csi-node-driver-nh7hc" WorkloadEndpoint="localhost-k8s-csi--node--driver--nh7hc-eth0" Sep 9 05:31:23.665830 containerd[1581]: 2025-09-09 05:31:23.423 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Namespace="calico-system" Pod="csi-node-driver-nh7hc" WorkloadEndpoint="localhost-k8s-csi--node--driver--nh7hc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nh7hc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8282d634-60a2-4800-bb3d-6e250193e296", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464", Pod:"csi-node-driver-nh7hc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali42a93a1328b", MAC:"ca:e3:40:13:05:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:23.665830 containerd[1581]: 2025-09-09 05:31:23.659 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" Namespace="calico-system" Pod="csi-node-driver-nh7hc" WorkloadEndpoint="localhost-k8s-csi--node--driver--nh7hc-eth0" Sep 9 05:31:23.917967 systemd-networkd[1505]: cali160f3464779: Link UP Sep 9 05:31:23.918647 systemd-networkd[1505]: cali160f3464779: Gained carrier Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:22.848 [INFO][4836] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0 calico-apiserver-54d4f99bb7- calico-apiserver b10d24f7-9257-4b62-a1a3-f25bb881f8eb 845 0 2025-09-09 05:30:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54d4f99bb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54d4f99bb7-f2pkg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali160f3464779 [] [] }} ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-f2pkg" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:22.849 [INFO][4836] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-f2pkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.067 [INFO][4878] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.067 [INFO][4878] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54d4f99bb7-f2pkg", "timestamp":"2025-09-09 05:31:23.067378554 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.067 [INFO][4878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.411 [INFO][4878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.411 [INFO][4878] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.421 [INFO][4878] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" host="localhost" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.433 [INFO][4878] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.685 [INFO][4878] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.689 [INFO][4878] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.694 [INFO][4878] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.694 [INFO][4878] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" host="localhost" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.697 [INFO][4878] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.729 [INFO][4878] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" host="localhost" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.910 [INFO][4878] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" host="localhost" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.910 [INFO][4878] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" host="localhost" Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.910 [INFO][4878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
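Under the lock, the sequence above is: confirm the host's affinity to block 192.168.88.128/26, load the block, and claim the next free address (.135 here, after .132 through .134 earlier in this log). A minimal sketch of the claim step over a /26, ignoring the reservations and datastore bookkeeping real Calico applies:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree walks the block and returns the first address not in use.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        used := map[netip.Addr]bool{}
        for i := 0; i < 8; i++ {
            a, _ := nextFree(block, used)
            used[a] = true
            fmt.Println(a) // 192.168.88.128, .129, ... in order
        }
    }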
Sep 9 05:31:23.972533 containerd[1581]: 2025-09-09 05:31:23.910 [INFO][4878] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:31:23.973788 containerd[1581]: 2025-09-09 05:31:23.913 [INFO][4836] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-f2pkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0", GenerateName:"calico-apiserver-54d4f99bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"b10d24f7-9257-4b62-a1a3-f25bb881f8eb", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d4f99bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54d4f99bb7-f2pkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali160f3464779", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:23.973788 containerd[1581]: 2025-09-09 05:31:23.913 [INFO][4836] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-f2pkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:31:23.973788 containerd[1581]: 2025-09-09 05:31:23.913 [INFO][4836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali160f3464779 ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-f2pkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:31:23.973788 containerd[1581]: 2025-09-09 05:31:23.919 [INFO][4836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-f2pkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:31:23.973788 containerd[1581]: 2025-09-09 05:31:23.920 [INFO][4836] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-f2pkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0", GenerateName:"calico-apiserver-54d4f99bb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"b10d24f7-9257-4b62-a1a3-f25bb881f8eb", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d4f99bb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f", Pod:"calico-apiserver-54d4f99bb7-f2pkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali160f3464779", MAC:"32:c2:95:b1:13:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:23.973788 containerd[1581]: 2025-09-09 05:31:23.967 [INFO][4836] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Namespace="calico-apiserver" Pod="calico-apiserver-54d4f99bb7-f2pkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:31:24.035585 containerd[1581]: time="2025-09-09T05:31:24.034412852Z" level=info msg="connecting to shim 387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58" address="unix:///run/containerd/s/e3b736f8870c4b2feeb7e95752f2aad40337bf0bc912262e81738d7945a466c1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:31:24.072875 systemd[1]: Started cri-containerd-387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58.scope - libcontainer container 387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58. 
Sep 9 05:31:24.090934 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:31:24.297744 systemd-networkd[1505]: cali0558d1e8d2b: Link UP Sep 9 05:31:24.299701 systemd-networkd[1505]: cali0558d1e8d2b: Gained carrier Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.659 [INFO][4904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0 calico-kube-controllers-b6b778d54- calico-system bccf19d1-5313-4b91-9b56-838625d8205d 857 0 2025-09-09 05:30:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b6b778d54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b6b778d54-mgs42 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0558d1e8d2b [] [] }} ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Namespace="calico-system" Pod="calico-kube-controllers-b6b778d54-mgs42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.659 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Namespace="calico-system" Pod="calico-kube-controllers-b6b778d54-mgs42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.712 [INFO][4930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" HandleID="k8s-pod-network.da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Workload="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.712 [INFO][4930] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" HandleID="k8s-pod-network.da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Workload="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b6b778d54-mgs42", "timestamp":"2025-09-09 05:31:23.712790503 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.713 [INFO][4930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.910 [INFO][4930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.910 [INFO][4930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.919 [INFO][4930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" host="localhost" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.967 [INFO][4930] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.978 [INFO][4930] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.983 [INFO][4930] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.987 [INFO][4930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:23.987 [INFO][4930] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" host="localhost" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:24.117 [INFO][4930] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:24.143 [INFO][4930] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" host="localhost" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:24.289 [INFO][4930] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" host="localhost" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:24.289 [INFO][4930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" host="localhost" Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:24.289 [INFO][4930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
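The HandleID strings running through these IPAM entries are simply "k8s-pod-network." plus the sandbox container ID; the handle is what lets a later CNI DEL release exactly the addresses this ADD claimed. Trivial, but worth pinning down:

    package main

    import "fmt"

    // handleID format observed throughout this log: prefix + container ID.
    func handleID(containerID string) string {
        return "k8s-pod-network." + containerID
    }

    func main() {
        fmt.Println(handleID("da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece"))
    }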
Sep 9 05:31:24.370905 containerd[1581]: 2025-09-09 05:31:24.289 [INFO][4930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" HandleID="k8s-pod-network.da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Workload="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0" Sep 9 05:31:24.372132 containerd[1581]: 2025-09-09 05:31:24.292 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Namespace="calico-system" Pod="calico-kube-controllers-b6b778d54-mgs42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0", GenerateName:"calico-kube-controllers-b6b778d54-", Namespace:"calico-system", SelfLink:"", UID:"bccf19d1-5313-4b91-9b56-838625d8205d", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b6b778d54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b6b778d54-mgs42", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0558d1e8d2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:24.372132 containerd[1581]: 2025-09-09 05:31:24.293 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Namespace="calico-system" Pod="calico-kube-controllers-b6b778d54-mgs42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0" Sep 9 05:31:24.372132 containerd[1581]: 2025-09-09 05:31:24.293 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0558d1e8d2b ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Namespace="calico-system" Pod="calico-kube-controllers-b6b778d54-mgs42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0" Sep 9 05:31:24.372132 containerd[1581]: 2025-09-09 05:31:24.300 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Namespace="calico-system" Pod="calico-kube-controllers-b6b778d54-mgs42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0" Sep 9 05:31:24.372132 containerd[1581]: 2025-09-09 05:31:24.300 [INFO][4904] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Namespace="calico-system" Pod="calico-kube-controllers-b6b778d54-mgs42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0", GenerateName:"calico-kube-controllers-b6b778d54-", Namespace:"calico-system", SelfLink:"", UID:"bccf19d1-5313-4b91-9b56-838625d8205d", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b6b778d54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece", Pod:"calico-kube-controllers-b6b778d54-mgs42", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0558d1e8d2b", MAC:"d2:02:8b:15:5e:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:24.372132 containerd[1581]: 2025-09-09 05:31:24.366 [INFO][4904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" Namespace="calico-system" Pod="calico-kube-controllers-b6b778d54-mgs42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b6b778d54--mgs42-eth0" Sep 9 05:31:25.002831 systemd-networkd[1505]: cali341a0674104: Gained IPv6LL Sep 9 05:31:25.194816 systemd-networkd[1505]: cali160f3464779: Gained IPv6LL Sep 9 05:31:25.204138 containerd[1581]: time="2025-09-09T05:31:25.204071689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687ff77b4f-znvzn,Uid:673b51d3-fcbe-4c6d-adf0-cd744f582691,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58\"" Sep 9 05:31:25.245855 containerd[1581]: time="2025-09-09T05:31:25.245784440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7rchx,Uid:f711d909-bcc2-489e-b152-66373d005f4b,Namespace:calico-system,Attempt:0,}" Sep 9 05:31:25.386804 systemd-networkd[1505]: cali42a93a1328b: Gained IPv6LL Sep 9 05:31:26.026740 systemd-networkd[1505]: cali0558d1e8d2b: Gained IPv6LL Sep 9 05:31:26.149662 systemd-networkd[1505]: cali71bf294ee17: Link UP Sep 9 05:31:26.151277 systemd-networkd[1505]: cali71bf294ee17: Gained carrier Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.860 [INFO][5008] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--7rchx-eth0 goldmane-54d579b49d- calico-system 
f711d909-bcc2-489e-b152-66373d005f4b 858 0 2025-09-09 05:30:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-7rchx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali71bf294ee17 [] [] }} ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Namespace="calico-system" Pod="goldmane-54d579b49d-7rchx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7rchx-" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.860 [INFO][5008] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Namespace="calico-system" Pod="goldmane-54d579b49d-7rchx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7rchx-eth0" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.886 [INFO][5022] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" HandleID="k8s-pod-network.542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Workload="localhost-k8s-goldmane--54d579b49d--7rchx-eth0" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.886 [INFO][5022] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" HandleID="k8s-pod-network.542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Workload="localhost-k8s-goldmane--54d579b49d--7rchx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-7rchx", "timestamp":"2025-09-09 05:31:25.886241309 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.886 [INFO][5022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.886 [INFO][5022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.886 [INFO][5022] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.892 [INFO][5022] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" host="localhost" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.897 [INFO][5022] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.901 [INFO][5022] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.903 [INFO][5022] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.905 [INFO][5022] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.905 [INFO][5022] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" host="localhost" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.906 [INFO][5022] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854 Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:25.941 [INFO][5022] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" host="localhost" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:26.142 [INFO][5022] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" host="localhost" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:26.143 [INFO][5022] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" host="localhost" Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:26.143 [INFO][5022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:31:26.373255 containerd[1581]: 2025-09-09 05:31:26.143 [INFO][5022] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" HandleID="k8s-pod-network.542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Workload="localhost-k8s-goldmane--54d579b49d--7rchx-eth0" Sep 9 05:31:26.422632 containerd[1581]: 2025-09-09 05:31:26.146 [INFO][5008] cni-plugin/k8s.go 418: Populated endpoint ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Namespace="calico-system" Pod="goldmane-54d579b49d-7rchx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7rchx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--7rchx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f711d909-bcc2-489e-b152-66373d005f4b", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-7rchx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali71bf294ee17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:26.422632 containerd[1581]: 2025-09-09 05:31:26.146 [INFO][5008] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Namespace="calico-system" Pod="goldmane-54d579b49d-7rchx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7rchx-eth0" Sep 9 05:31:26.422632 containerd[1581]: 2025-09-09 05:31:26.146 [INFO][5008] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71bf294ee17 ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Namespace="calico-system" Pod="goldmane-54d579b49d-7rchx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7rchx-eth0" Sep 9 05:31:26.422632 containerd[1581]: 2025-09-09 05:31:26.152 [INFO][5008] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Namespace="calico-system" Pod="goldmane-54d579b49d-7rchx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7rchx-eth0" Sep 9 05:31:26.422632 containerd[1581]: 2025-09-09 05:31:26.152 [INFO][5008] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Namespace="calico-system" Pod="goldmane-54d579b49d-7rchx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7rchx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--7rchx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f711d909-bcc2-489e-b152-66373d005f4b", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854", Pod:"goldmane-54d579b49d-7rchx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali71bf294ee17", MAC:"6e:72:6f:a0:5a:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:26.422632 containerd[1581]: 2025-09-09 05:31:26.362 [INFO][5008] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" Namespace="calico-system" Pod="goldmane-54d579b49d-7rchx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7rchx-eth0" Sep 9 05:31:27.601742 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:37204.service - OpenSSH per-connection server daemon (10.0.0.1:37204). Sep 9 05:31:27.762297 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 37204 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:31:27.764507 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:31:27.776627 systemd-logind[1560]: New session 9 of user core. Sep 9 05:31:27.782702 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:31:27.971512 sshd[5052]: Connection closed by 10.0.0.1 port 37204 Sep 9 05:31:27.972009 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Sep 9 05:31:27.977341 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:37204.service: Deactivated successfully. Sep 9 05:31:27.979588 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:31:27.980438 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit. Sep 9 05:31:27.982124 systemd-logind[1560]: Removed session 9. Sep 9 05:31:28.010903 systemd-networkd[1505]: cali71bf294ee17: Gained IPv6LL Sep 9 05:31:28.179304 containerd[1581]: time="2025-09-09T05:31:28.179236498Z" level=info msg="connecting to shim ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464" address="unix:///run/containerd/s/55fc07d27bce5be986fa656577af26aa32b63a291133959f9b84e22dc385e6be" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:31:28.240928 systemd[1]: Started cri-containerd-ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464.scope - libcontainer container ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464. 
Sep 9 05:31:28.258236 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:31:29.071745 containerd[1581]: time="2025-09-09T05:31:29.071685414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nh7hc,Uid:8282d634-60a2-4800-bb3d-6e250193e296,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464\"" Sep 9 05:31:29.429784 containerd[1581]: time="2025-09-09T05:31:29.429632178Z" level=info msg="connecting to shim 58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" address="unix:///run/containerd/s/86ac22ac6e24c48533f99a666f8cfe78538eba997b4e3343e4c1bc7b1f589769" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:31:29.461748 systemd[1]: Started cri-containerd-58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f.scope - libcontainer container 58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f. Sep 9 05:31:29.475808 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:31:29.513360 containerd[1581]: time="2025-09-09T05:31:29.513299819Z" level=info msg="connecting to shim da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece" address="unix:///run/containerd/s/663bf43d9b7a22be6f4e77a468f7b0c8549d833a8a931a8225bd7e1b1a30a200" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:31:29.553985 containerd[1581]: time="2025-09-09T05:31:29.553676162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d4f99bb7-f2pkg,Uid:b10d24f7-9257-4b62-a1a3-f25bb881f8eb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\"" Sep 9 05:31:29.554228 systemd[1]: Started cri-containerd-da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece.scope - libcontainer container da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece. 
Sep 9 05:31:29.592023 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:31:29.661891 containerd[1581]: time="2025-09-09T05:31:29.661832479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b6b778d54-mgs42,Uid:bccf19d1-5313-4b91-9b56-838625d8205d,Namespace:calico-system,Attempt:0,} returns sandbox id \"da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece\"" Sep 9 05:31:29.664830 containerd[1581]: time="2025-09-09T05:31:29.664724965Z" level=info msg="connecting to shim 542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854" address="unix:///run/containerd/s/d69745116b58b8d5e1662daa46a128ddbff9a17643d45571d444676b5c37a8fb" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:31:29.672008 containerd[1581]: time="2025-09-09T05:31:29.671953138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:29.674952 containerd[1581]: time="2025-09-09T05:31:29.674896970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 05:31:29.684800 containerd[1581]: time="2025-09-09T05:31:29.684594750Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:29.695804 systemd[1]: Started cri-containerd-542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854.scope - libcontainer container 542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854. Sep 9 05:31:29.701006 containerd[1581]: time="2025-09-09T05:31:29.700901716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:29.701853 containerd[1581]: time="2025-09-09T05:31:29.701783018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 13.999478727s" Sep 9 05:31:29.701929 containerd[1581]: time="2025-09-09T05:31:29.701900089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 05:31:29.704883 containerd[1581]: time="2025-09-09T05:31:29.704751788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 05:31:29.706466 containerd[1581]: time="2025-09-09T05:31:29.706443478Z" level=info msg="CreateContainer within sandbox \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:31:29.719354 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:31:29.731039 containerd[1581]: time="2025-09-09T05:31:29.730965838Z" level=info msg="Container fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:29.766168 containerd[1581]: time="2025-09-09T05:31:29.766092579Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7rchx,Uid:f711d909-bcc2-489e-b152-66373d005f4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854\"" Sep 9 05:31:29.779156 containerd[1581]: time="2025-09-09T05:31:29.779077651Z" level=info msg="CreateContainer within sandbox \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\"" Sep 9 05:31:29.780068 containerd[1581]: time="2025-09-09T05:31:29.780010559Z" level=info msg="StartContainer for \"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\"" Sep 9 05:31:29.781249 containerd[1581]: time="2025-09-09T05:31:29.781223457Z" level=info msg="connecting to shim fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112" address="unix:///run/containerd/s/06c1a9c1f69c1d12e7b7b379a64b1ef0ff38f895941fe7ef321a8a80d4f3d502" protocol=ttrpc version=3 Sep 9 05:31:29.819904 systemd[1]: Started cri-containerd-fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112.scope - libcontainer container fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112. Sep 9 05:31:30.036251 containerd[1581]: time="2025-09-09T05:31:30.036171380Z" level=info msg="StartContainer for \"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\" returns successfully" Sep 9 05:31:32.933309 kubelet[2782]: I0909 05:31:32.932796 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54d4f99bb7-lgdcj" podStartSLOduration=43.931541364 podStartE2EDuration="57.932768s" podCreationTimestamp="2025-09-09 05:30:35 +0000 UTC" firstStartedPulling="2025-09-09 05:31:15.701863528 +0000 UTC m=+53.820075617" lastFinishedPulling="2025-09-09 05:31:29.703090164 +0000 UTC m=+67.821302253" observedRunningTime="2025-09-09 05:31:30.798734024 +0000 UTC m=+68.916946113" watchObservedRunningTime="2025-09-09 05:31:32.932768 +0000 UTC m=+71.050980089" Sep 9 05:31:32.989580 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:53206.service - OpenSSH per-connection server daemon (10.0.0.1:53206). Sep 9 05:31:33.070023 sshd[5293]: Accepted publickey for core from 10.0.0.1 port 53206 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:31:33.073173 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:31:33.085099 systemd-logind[1560]: New session 10 of user core. Sep 9 05:31:33.093893 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 05:31:33.370902 sshd[5302]: Connection closed by 10.0.0.1 port 53206 Sep 9 05:31:33.371297 sshd-session[5293]: pam_unix(sshd:session): session closed for user core Sep 9 05:31:33.376923 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:53206.service: Deactivated successfully. Sep 9 05:31:33.379272 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 05:31:33.380497 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Sep 9 05:31:33.382814 systemd-logind[1560]: Removed session 10. 
Sep 9 05:31:33.733854 containerd[1581]: time="2025-09-09T05:31:33.733705651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:33.854637 containerd[1581]: time="2025-09-09T05:31:33.854560319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 05:31:33.947605 containerd[1581]: time="2025-09-09T05:31:33.947511095Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:34.013413 containerd[1581]: time="2025-09-09T05:31:34.013354144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:34.014281 containerd[1581]: time="2025-09-09T05:31:34.014251153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 4.309422941s" Sep 9 05:31:34.014357 containerd[1581]: time="2025-09-09T05:31:34.014287753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 05:31:34.015224 containerd[1581]: time="2025-09-09T05:31:34.015173923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:31:34.016907 containerd[1581]: time="2025-09-09T05:31:34.016875409Z" level=info msg="CreateContainer within sandbox \"3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 05:31:34.508800 containerd[1581]: time="2025-09-09T05:31:34.508626480Z" level=info msg="Container e33c1003a6d72d5238e6b2b9752e8957c8ba035f2c440d380a7ea22311560e0c: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:35.245076 kubelet[2782]: E0909 05:31:35.245036 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:35.376359 containerd[1581]: time="2025-09-09T05:31:35.376262694Z" level=info msg="CreateContainer within sandbox \"3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e33c1003a6d72d5238e6b2b9752e8957c8ba035f2c440d380a7ea22311560e0c\"" Sep 9 05:31:35.376967 containerd[1581]: time="2025-09-09T05:31:35.376920965Z" level=info msg="StartContainer for \"e33c1003a6d72d5238e6b2b9752e8957c8ba035f2c440d380a7ea22311560e0c\"" Sep 9 05:31:35.378520 containerd[1581]: time="2025-09-09T05:31:35.378454114Z" level=info msg="connecting to shim e33c1003a6d72d5238e6b2b9752e8957c8ba035f2c440d380a7ea22311560e0c" address="unix:///run/containerd/s/228792618c04fa57f0e5cd9fc0cf7c21d1e17ce473d40b0fecbf6eacf42d07ea" protocol=ttrpc version=3 Sep 9 05:31:35.410916 systemd[1]: Started cri-containerd-e33c1003a6d72d5238e6b2b9752e8957c8ba035f2c440d380a7ea22311560e0c.scope - libcontainer container e33c1003a6d72d5238e6b2b9752e8957c8ba035f2c440d380a7ea22311560e0c. 
Sep 9 05:31:35.463179 containerd[1581]: time="2025-09-09T05:31:35.462261568Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:35.464981 containerd[1581]: time="2025-09-09T05:31:35.464893167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 05:31:35.467593 containerd[1581]: time="2025-09-09T05:31:35.467530917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 1.452325685s" Sep 9 05:31:35.467593 containerd[1581]: time="2025-09-09T05:31:35.467588055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 05:31:35.469153 containerd[1581]: time="2025-09-09T05:31:35.469104202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 05:31:35.471251 containerd[1581]: time="2025-09-09T05:31:35.471118407Z" level=info msg="CreateContainer within sandbox \"387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:31:35.488951 containerd[1581]: time="2025-09-09T05:31:35.488887719Z" level=info msg="StartContainer for \"e33c1003a6d72d5238e6b2b9752e8957c8ba035f2c440d380a7ea22311560e0c\" returns successfully" Sep 9 05:31:35.502658 containerd[1581]: time="2025-09-09T05:31:35.502192198Z" level=info msg="Container 0b42ea8b663200c0bf3fadffddacac203eb332fe0d3cc36dd00dd735e191d90d: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:35.520573 containerd[1581]: time="2025-09-09T05:31:35.520494484Z" level=info msg="CreateContainer within sandbox \"387f9b8d26de5cf5a60023e1323fc2a08fcd6a3cec29fbe88484bb160113ed58\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0b42ea8b663200c0bf3fadffddacac203eb332fe0d3cc36dd00dd735e191d90d\"" Sep 9 05:31:35.521735 containerd[1581]: time="2025-09-09T05:31:35.521425307Z" level=info msg="StartContainer for \"0b42ea8b663200c0bf3fadffddacac203eb332fe0d3cc36dd00dd735e191d90d\"" Sep 9 05:31:35.523183 containerd[1581]: time="2025-09-09T05:31:35.523140269Z" level=info msg="connecting to shim 0b42ea8b663200c0bf3fadffddacac203eb332fe0d3cc36dd00dd735e191d90d" address="unix:///run/containerd/s/e3b736f8870c4b2feeb7e95752f2aad40337bf0bc912262e81738d7945a466c1" protocol=ttrpc version=3 Sep 9 05:31:35.565856 systemd[1]: Started cri-containerd-0b42ea8b663200c0bf3fadffddacac203eb332fe0d3cc36dd00dd735e191d90d.scope - libcontainer container 0b42ea8b663200c0bf3fadffddacac203eb332fe0d3cc36dd00dd735e191d90d. 
Sep 9 05:31:35.692324 containerd[1581]: time="2025-09-09T05:31:35.692257800Z" level=info msg="StartContainer for \"0b42ea8b663200c0bf3fadffddacac203eb332fe0d3cc36dd00dd735e191d90d\" returns successfully" Sep 9 05:31:35.754829 kubelet[2782]: I0909 05:31:35.754331 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-687ff77b4f-znvzn" podStartSLOduration=50.491465767 podStartE2EDuration="1m0.75430928s" podCreationTimestamp="2025-09-09 05:30:35 +0000 UTC" firstStartedPulling="2025-09-09 05:31:25.205668503 +0000 UTC m=+63.323880592" lastFinishedPulling="2025-09-09 05:31:35.468512016 +0000 UTC m=+73.586724105" observedRunningTime="2025-09-09 05:31:35.75091232 +0000 UTC m=+73.869124429" watchObservedRunningTime="2025-09-09 05:31:35.75430928 +0000 UTC m=+73.872521359" Sep 9 05:31:36.245387 kubelet[2782]: E0909 05:31:36.245335 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:37.742952 kubelet[2782]: I0909 05:31:37.742905 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 05:31:38.386352 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:53222.service - OpenSSH per-connection server daemon (10.0.0.1:53222). Sep 9 05:31:38.469435 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 53222 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:31:38.491020 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:31:38.495935 systemd-logind[1560]: New session 11 of user core. Sep 9 05:31:38.502712 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 05:31:38.850337 sshd[5409]: Connection closed by 10.0.0.1 port 53222 Sep 9 05:31:38.850737 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Sep 9 05:31:38.855457 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:53222.service: Deactivated successfully. Sep 9 05:31:38.857618 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 05:31:38.858419 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Sep 9 05:31:38.859601 systemd-logind[1560]: Removed session 11. 
Sep 9 05:31:41.021611 containerd[1581]: time="2025-09-09T05:31:41.020918066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:41.024339 containerd[1581]: time="2025-09-09T05:31:41.024260962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 05:31:41.026414 containerd[1581]: time="2025-09-09T05:31:41.025858408Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:41.029572 containerd[1581]: time="2025-09-09T05:31:41.029190953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:41.031456 containerd[1581]: time="2025-09-09T05:31:41.030796424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 5.561641406s" Sep 9 05:31:41.031456 containerd[1581]: time="2025-09-09T05:31:41.030829407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 05:31:41.035206 containerd[1581]: time="2025-09-09T05:31:41.035164364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:31:41.036457 containerd[1581]: time="2025-09-09T05:31:41.036391536Z" level=info msg="CreateContainer within sandbox \"ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 05:31:41.083250 containerd[1581]: time="2025-09-09T05:31:41.083161500Z" level=info msg="Container 9b7e0a5bc059039245bb046adb0db248e94fdeeee098145e1f4134f64b37b6f5: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:41.094023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991852265.mount: Deactivated successfully. Sep 9 05:31:41.130234 containerd[1581]: time="2025-09-09T05:31:41.130177896Z" level=info msg="CreateContainer within sandbox \"ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b7e0a5bc059039245bb046adb0db248e94fdeeee098145e1f4134f64b37b6f5\"" Sep 9 05:31:41.131469 containerd[1581]: time="2025-09-09T05:31:41.131427442Z" level=info msg="StartContainer for \"9b7e0a5bc059039245bb046adb0db248e94fdeeee098145e1f4134f64b37b6f5\"" Sep 9 05:31:41.133678 containerd[1581]: time="2025-09-09T05:31:41.133633839Z" level=info msg="connecting to shim 9b7e0a5bc059039245bb046adb0db248e94fdeeee098145e1f4134f64b37b6f5" address="unix:///run/containerd/s/55fc07d27bce5be986fa656577af26aa32b63a291133959f9b84e22dc385e6be" protocol=ttrpc version=3 Sep 9 05:31:41.177171 systemd[1]: Started cri-containerd-9b7e0a5bc059039245bb046adb0db248e94fdeeee098145e1f4134f64b37b6f5.scope - libcontainer container 9b7e0a5bc059039245bb046adb0db248e94fdeeee098145e1f4134f64b37b6f5. 
Sep 9 05:31:41.375891 containerd[1581]: time="2025-09-09T05:31:41.374228440Z" level=info msg="StartContainer for \"9b7e0a5bc059039245bb046adb0db248e94fdeeee098145e1f4134f64b37b6f5\" returns successfully" Sep 9 05:31:41.587582 containerd[1581]: time="2025-09-09T05:31:41.586942534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 05:31:41.591587 containerd[1581]: time="2025-09-09T05:31:41.586942804Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:41.591587 containerd[1581]: time="2025-09-09T05:31:41.590282864Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 554.96235ms" Sep 9 05:31:41.591587 containerd[1581]: time="2025-09-09T05:31:41.590319825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 05:31:41.594021 containerd[1581]: time="2025-09-09T05:31:41.593946207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 05:31:41.596693 containerd[1581]: time="2025-09-09T05:31:41.595534916Z" level=info msg="CreateContainer within sandbox \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:31:41.609634 containerd[1581]: time="2025-09-09T05:31:41.609577140Z" level=info msg="Container 9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:41.630915 containerd[1581]: time="2025-09-09T05:31:41.630049865Z" level=info msg="CreateContainer within sandbox \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\"" Sep 9 05:31:41.632180 containerd[1581]: time="2025-09-09T05:31:41.632116795Z" level=info msg="StartContainer for \"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\"" Sep 9 05:31:41.635093 containerd[1581]: time="2025-09-09T05:31:41.634764792Z" level=info msg="connecting to shim 9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3" address="unix:///run/containerd/s/86ac22ac6e24c48533f99a666f8cfe78538eba997b4e3343e4c1bc7b1f589769" protocol=ttrpc version=3 Sep 9 05:31:41.675580 kubelet[2782]: I0909 05:31:41.673966 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e83f81ff-403f-4b1c-aef8-3451aa5643d5-calico-apiserver-certs\") pod \"calico-apiserver-687ff77b4f-5lgs7\" (UID: \"e83f81ff-403f-4b1c-aef8-3451aa5643d5\") " pod="calico-apiserver/calico-apiserver-687ff77b4f-5lgs7" Sep 9 05:31:41.674245 systemd[1]: Created slice kubepods-besteffort-pode83f81ff_403f_4b1c_aef8_3451aa5643d5.slice - libcontainer container kubepods-besteffort-pode83f81ff_403f_4b1c_aef8_3451aa5643d5.slice. 
Sep 9 05:31:41.677945 kubelet[2782]: I0909 05:31:41.677608 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gznsj\" (UniqueName: \"kubernetes.io/projected/e83f81ff-403f-4b1c-aef8-3451aa5643d5-kube-api-access-gznsj\") pod \"calico-apiserver-687ff77b4f-5lgs7\" (UID: \"e83f81ff-403f-4b1c-aef8-3451aa5643d5\") " pod="calico-apiserver/calico-apiserver-687ff77b4f-5lgs7" Sep 9 05:31:41.697801 systemd[1]: Started cri-containerd-9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3.scope - libcontainer container 9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3. Sep 9 05:31:41.808858 containerd[1581]: time="2025-09-09T05:31:41.808650051Z" level=info msg="StartContainer for \"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\" returns successfully" Sep 9 05:31:41.927960 containerd[1581]: time="2025-09-09T05:31:41.927447354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132\" id:\"c8a069f154dc7bc14a1ac2b59eaa4b22bb0e1ab70bbe5dd7837cb4a83d0aef11\" pid:5485 exited_at:{seconds:1757395901 nanos:926978561}" Sep 9 05:31:41.981759 containerd[1581]: time="2025-09-09T05:31:41.981704190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687ff77b4f-5lgs7,Uid:e83f81ff-403f-4b1c-aef8-3451aa5643d5,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:31:42.188005 systemd-networkd[1505]: cali42325fc997d: Link UP Sep 9 05:31:42.191997 systemd-networkd[1505]: cali42325fc997d: Gained carrier Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.072 [INFO][5526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0 calico-apiserver-687ff77b4f- calico-apiserver e83f81ff-403f-4b1c-aef8-3451aa5643d5 1190 0 2025-09-09 05:31:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:687ff77b4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-687ff77b4f-5lgs7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali42325fc997d [] [] }} ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-5lgs7" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.073 [INFO][5526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-5lgs7" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.135 [INFO][5541] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" HandleID="k8s-pod-network.8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Workload="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.136 [INFO][5541] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" 
HandleID="k8s-pod-network.8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Workload="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-687ff77b4f-5lgs7", "timestamp":"2025-09-09 05:31:42.135877066 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.136 [INFO][5541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.136 [INFO][5541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.136 [INFO][5541] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.142 [INFO][5541] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" host="localhost" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.147 [INFO][5541] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.154 [INFO][5541] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.160 [INFO][5541] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.163 [INFO][5541] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.163 [INFO][5541] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" host="localhost" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.165 [INFO][5541] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21 Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.170 [INFO][5541] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" host="localhost" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.178 [INFO][5541] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" host="localhost" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.178 [INFO][5541] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" host="localhost" Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.178 [INFO][5541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 05:31:42.214361 containerd[1581]: 2025-09-09 05:31:42.178 [INFO][5541] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" HandleID="k8s-pod-network.8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Workload="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0" Sep 9 05:31:42.216500 containerd[1581]: 2025-09-09 05:31:42.182 [INFO][5526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-5lgs7" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0", GenerateName:"calico-apiserver-687ff77b4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e83f81ff-403f-4b1c-aef8-3451aa5643d5", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687ff77b4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-687ff77b4f-5lgs7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42325fc997d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:42.216500 containerd[1581]: 2025-09-09 05:31:42.182 [INFO][5526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-5lgs7" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0" Sep 9 05:31:42.216500 containerd[1581]: 2025-09-09 05:31:42.182 [INFO][5526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42325fc997d ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-5lgs7" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0" Sep 9 05:31:42.216500 containerd[1581]: 2025-09-09 05:31:42.192 [INFO][5526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-5lgs7" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0" Sep 9 05:31:42.216500 containerd[1581]: 2025-09-09 05:31:42.193 [INFO][5526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-5lgs7" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0", GenerateName:"calico-apiserver-687ff77b4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e83f81ff-403f-4b1c-aef8-3451aa5643d5", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687ff77b4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21", Pod:"calico-apiserver-687ff77b4f-5lgs7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali42325fc997d", MAC:"ea:c4:e6:1e:cd:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:31:42.216500 containerd[1581]: 2025-09-09 05:31:42.207 [INFO][5526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" Namespace="calico-apiserver" Pod="calico-apiserver-687ff77b4f-5lgs7" WorkloadEndpoint="localhost-k8s-calico--apiserver--687ff77b4f--5lgs7-eth0" Sep 9 05:31:42.735357 containerd[1581]: time="2025-09-09T05:31:42.735292740Z" level=info msg="connecting to shim 8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21" address="unix:///run/containerd/s/41de8e20a5cc25752c29b206637bb972f02b333324e25b0f6ba7063d3ea1572d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:31:42.763951 systemd[1]: Started cri-containerd-8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21.scope - libcontainer container 8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21. 
Sep 9 05:31:42.779236 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 05:31:42.826590 containerd[1581]: time="2025-09-09T05:31:42.826502227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687ff77b4f-5lgs7,Uid:e83f81ff-403f-4b1c-aef8-3451aa5643d5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21\"" Sep 9 05:31:42.829210 containerd[1581]: time="2025-09-09T05:31:42.829158590Z" level=info msg="CreateContainer within sandbox \"8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:31:42.831818 containerd[1581]: time="2025-09-09T05:31:42.831747421Z" level=info msg="StopContainer for \"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\" with timeout 30 (s)" Sep 9 05:31:42.836397 kubelet[2782]: I0909 05:31:42.835835 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54d4f99bb7-f2pkg" podStartSLOduration=55.79949323 podStartE2EDuration="1m7.83581013s" podCreationTimestamp="2025-09-09 05:30:35 +0000 UTC" firstStartedPulling="2025-09-09 05:31:29.555331103 +0000 UTC m=+67.673543192" lastFinishedPulling="2025-09-09 05:31:41.591648003 +0000 UTC m=+79.709860092" observedRunningTime="2025-09-09 05:31:42.83561229 +0000 UTC m=+80.953824379" watchObservedRunningTime="2025-09-09 05:31:42.83581013 +0000 UTC m=+80.954022219" Sep 9 05:31:42.840831 containerd[1581]: time="2025-09-09T05:31:42.840794744Z" level=info msg="Stop container \"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\" with signal terminated" Sep 9 05:31:42.858295 systemd[1]: cri-containerd-9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3.scope: Deactivated successfully. 
Sep 9 05:31:42.862872 containerd[1581]: time="2025-09-09T05:31:42.859846960Z" level=info msg="Container 876cf39dfa6d0ad19b0b955b319343a94c18977b0427902fa8278c109137880f: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:42.862872 containerd[1581]: time="2025-09-09T05:31:42.861099091Z" level=info msg="received exit event container_id:\"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\" id:\"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\" pid:5493 exit_status:1 exited_at:{seconds:1757395902 nanos:860707857}" Sep 9 05:31:42.862872 containerd[1581]: time="2025-09-09T05:31:42.861953504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\" id:\"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\" pid:5493 exit_status:1 exited_at:{seconds:1757395902 nanos:860707857}" Sep 9 05:31:42.884310 containerd[1581]: time="2025-09-09T05:31:42.884260165Z" level=info msg="CreateContainer within sandbox \"8ff15475ee0c54c0b7c72ec92e24c1efee4b19227db96dc56e4353681b781e21\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"876cf39dfa6d0ad19b0b955b319343a94c18977b0427902fa8278c109137880f\"" Sep 9 05:31:42.885187 containerd[1581]: time="2025-09-09T05:31:42.885150558Z" level=info msg="StartContainer for \"876cf39dfa6d0ad19b0b955b319343a94c18977b0427902fa8278c109137880f\"" Sep 9 05:31:42.886349 containerd[1581]: time="2025-09-09T05:31:42.886326582Z" level=info msg="connecting to shim 876cf39dfa6d0ad19b0b955b319343a94c18977b0427902fa8278c109137880f" address="unix:///run/containerd/s/41de8e20a5cc25752c29b206637bb972f02b333324e25b0f6ba7063d3ea1572d" protocol=ttrpc version=3 Sep 9 05:31:42.921718 systemd[1]: Started cri-containerd-876cf39dfa6d0ad19b0b955b319343a94c18977b0427902fa8278c109137880f.scope - libcontainer container 876cf39dfa6d0ad19b0b955b319343a94c18977b0427902fa8278c109137880f. Sep 9 05:31:43.009207 containerd[1581]: time="2025-09-09T05:31:43.009031177Z" level=info msg="StartContainer for \"876cf39dfa6d0ad19b0b955b319343a94c18977b0427902fa8278c109137880f\" returns successfully" Sep 9 05:31:43.031483 containerd[1581]: time="2025-09-09T05:31:43.031232270Z" level=info msg="StopContainer for \"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\" returns successfully" Sep 9 05:31:43.036745 containerd[1581]: time="2025-09-09T05:31:43.036656124Z" level=info msg="StopPodSandbox for \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\"" Sep 9 05:31:43.056591 containerd[1581]: time="2025-09-09T05:31:43.055596503Z" level=info msg="Container to stop \"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:31:43.072355 systemd[1]: cri-containerd-58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f.scope: Deactivated successfully. Sep 9 05:31:43.075060 containerd[1581]: time="2025-09-09T05:31:43.074976227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" id:\"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" pid:5145 exit_status:137 exited_at:{seconds:1757395903 nanos:74490513}" Sep 9 05:31:43.084520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3-rootfs.mount: Deactivated successfully. 
Sep 9 05:31:43.121282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f-rootfs.mount: Deactivated successfully. Sep 9 05:31:43.123350 containerd[1581]: time="2025-09-09T05:31:43.122505518Z" level=info msg="shim disconnected" id=58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f namespace=k8s.io Sep 9 05:31:43.123350 containerd[1581]: time="2025-09-09T05:31:43.122634866Z" level=warning msg="cleaning up after shim disconnected" id=58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f namespace=k8s.io Sep 9 05:31:43.134213 containerd[1581]: time="2025-09-09T05:31:43.122647019Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:31:43.435041 systemd-networkd[1505]: cali42325fc997d: Gained IPv6LL Sep 9 05:31:43.502952 containerd[1581]: time="2025-09-09T05:31:43.502883006Z" level=info msg="received exit event sandbox_id:\"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" exit_status:137 exited_at:{seconds:1757395903 nanos:74490513}" Sep 9 05:31:43.507812 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f-shm.mount: Deactivated successfully. Sep 9 05:31:43.569593 systemd-networkd[1505]: cali160f3464779: Link DOWN Sep 9 05:31:43.569612 systemd-networkd[1505]: cali160f3464779: Lost carrier Sep 9 05:31:43.767639 kubelet[2782]: I0909 05:31:43.767606 2782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:31:43.868865 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:59018.service - OpenSSH per-connection server daemon (10.0.0.1:59018). Sep 9 05:31:44.006516 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 59018 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:31:44.010243 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:31:44.016125 systemd-logind[1560]: New session 12 of user core. Sep 9 05:31:44.025703 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 05:31:44.791489 sshd[5743]: Connection closed by 10.0.0.1 port 59018 Sep 9 05:31:44.797256 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Sep 9 05:31:44.803075 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:59018.service: Deactivated successfully. Sep 9 05:31:44.805769 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 05:31:44.810090 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Sep 9 05:31:44.811925 systemd-logind[1560]: Removed session 12. Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:43.566 [INFO][5717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:43.567 [INFO][5717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" iface="eth0" netns="/var/run/netns/cni-a569c2c5-6835-13af-de89-a2ea47d4adba" Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:43.567 [INFO][5717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" iface="eth0" netns="/var/run/netns/cni-a569c2c5-6835-13af-de89-a2ea47d4adba" Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:43.575 [INFO][5717] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" after=8.114742ms iface="eth0" netns="/var/run/netns/cni-a569c2c5-6835-13af-de89-a2ea47d4adba" Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:43.575 [INFO][5717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:43.575 [INFO][5717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:43.631 [INFO][5726] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:43.631 [INFO][5726] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:43.631 [INFO][5726] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:44.855 [INFO][5726] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:44.856 [INFO][5726] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:44.893 [INFO][5726] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:31:44.903014 containerd[1581]: 2025-09-09 05:31:44.899 [INFO][5717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:31:44.908347 systemd[1]: run-netns-cni\x2da569c2c5\x2d6835\x2d13af\x2dde89\x2da2ea47d4adba.mount: Deactivated successfully. 
Sep 9 05:31:44.912527 containerd[1581]: time="2025-09-09T05:31:44.912476967Z" level=info msg="TearDown network for sandbox \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" successfully" Sep 9 05:31:44.912527 containerd[1581]: time="2025-09-09T05:31:44.912521893Z" level=info msg="StopPodSandbox for \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" returns successfully" Sep 9 05:31:45.001066 kubelet[2782]: I0909 05:31:45.000992 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thblt\" (UniqueName: \"kubernetes.io/projected/b10d24f7-9257-4b62-a1a3-f25bb881f8eb-kube-api-access-thblt\") pod \"b10d24f7-9257-4b62-a1a3-f25bb881f8eb\" (UID: \"b10d24f7-9257-4b62-a1a3-f25bb881f8eb\") " Sep 9 05:31:45.001066 kubelet[2782]: I0909 05:31:45.001079 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b10d24f7-9257-4b62-a1a3-f25bb881f8eb-calico-apiserver-certs\") pod \"b10d24f7-9257-4b62-a1a3-f25bb881f8eb\" (UID: \"b10d24f7-9257-4b62-a1a3-f25bb881f8eb\") " Sep 9 05:31:45.007242 kubelet[2782]: I0909 05:31:45.006533 2782 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b10d24f7-9257-4b62-a1a3-f25bb881f8eb-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "b10d24f7-9257-4b62-a1a3-f25bb881f8eb" (UID: "b10d24f7-9257-4b62-a1a3-f25bb881f8eb"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:31:45.009202 kubelet[2782]: I0909 05:31:45.009133 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-687ff77b4f-5lgs7" podStartSLOduration=4.00911118 podStartE2EDuration="4.00911118s" podCreationTimestamp="2025-09-09 05:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:31:44.495750592 +0000 UTC m=+82.613962701" watchObservedRunningTime="2025-09-09 05:31:45.00911118 +0000 UTC m=+83.127323269" Sep 9 05:31:45.011240 kubelet[2782]: I0909 05:31:45.011193 2782 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10d24f7-9257-4b62-a1a3-f25bb881f8eb-kube-api-access-thblt" (OuterVolumeSpecName: "kube-api-access-thblt") pod "b10d24f7-9257-4b62-a1a3-f25bb881f8eb" (UID: "b10d24f7-9257-4b62-a1a3-f25bb881f8eb"). InnerVolumeSpecName "kube-api-access-thblt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:31:45.013758 systemd[1]: var-lib-kubelet-pods-b10d24f7\x2d9257\x2d4b62\x2da1a3\x2df25bb881f8eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dthblt.mount: Deactivated successfully. Sep 9 05:31:45.013941 systemd[1]: var-lib-kubelet-pods-b10d24f7\x2d9257\x2d4b62\x2da1a3\x2df25bb881f8eb-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
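The two kubelet volume mount units deactivated above show systemd's unit-name escaping: '/' in the path becomes '-', while '-' itself, '~', and other special bytes become \xNN sequences, which is where the \x2d and \x7e runs come from. The sketch below is an approximate re-implementation of that rule, handy for reading such names; it is not a substitute for systemd-escape.

```go
// Approximate systemd path escaping: '/' -> '-', bytes outside
// [A-Za-z0-9:_.] -> \xNN. Edge cases (leading '.', empty path) are
// ignored here; use systemd-escape --path for the real thing.
package main

import (
	"fmt"
	"strings"
)

func escapeUnitPath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Prints the stem of the mount unit names above:
	// var-lib-kubelet-pods-b10d24f7\x2d9257\x2d4b62\x2da1a3\x2df25bb881f8eb-volumes
	fmt.Println(escapeUnitPath("/var/lib/kubelet/pods/b10d24f7-9257-4b62-a1a3-f25bb881f8eb/volumes"))
}
```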
Sep 9 05:31:45.102186 kubelet[2782]: I0909 05:31:45.101987 2782 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b10d24f7-9257-4b62-a1a3-f25bb881f8eb-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Sep 9 05:31:45.102186 kubelet[2782]: I0909 05:31:45.102045 2782 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-thblt\" (UniqueName: \"kubernetes.io/projected/b10d24f7-9257-4b62-a1a3-f25bb881f8eb-kube-api-access-thblt\") on node \"localhost\" DevicePath \"\"" Sep 9 05:31:45.781353 systemd[1]: Removed slice kubepods-besteffort-podb10d24f7_9257_4b62_a1a3_f25bb881f8eb.slice - libcontainer container kubepods-besteffort-podb10d24f7_9257_4b62_a1a3_f25bb881f8eb.slice. Sep 9 05:31:46.245702 kubelet[2782]: E0909 05:31:46.245554 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:47.500563 containerd[1581]: time="2025-09-09T05:31:47.500199879Z" level=info msg="StopContainer for \"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\" with timeout 30 (s)" Sep 9 05:31:47.502007 containerd[1581]: time="2025-09-09T05:31:47.501959359Z" level=info msg="Stop container \"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\" with signal terminated" Sep 9 05:31:47.519099 systemd[1]: cri-containerd-fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112.scope: Deactivated successfully. Sep 9 05:31:47.519502 systemd[1]: cri-containerd-fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112.scope: Consumed 1.159s CPU time, 57.6M memory peak. Sep 9 05:31:47.521917 containerd[1581]: time="2025-09-09T05:31:47.521851086Z" level=info msg="received exit event container_id:\"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\" id:\"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\" pid:5263 exit_status:1 exited_at:{seconds:1757395907 nanos:521468572}" Sep 9 05:31:47.522059 containerd[1581]: time="2025-09-09T05:31:47.522018138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\" id:\"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\" pid:5263 exit_status:1 exited_at:{seconds:1757395907 nanos:521468572}" Sep 9 05:31:47.558670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112-rootfs.mount: Deactivated successfully. Sep 9 05:31:47.816345 containerd[1581]: time="2025-09-09T05:31:47.816281362Z" level=info msg="StopContainer for \"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\" returns successfully" Sep 9 05:31:47.817051 containerd[1581]: time="2025-09-09T05:31:47.817004351Z" level=info msg="StopPodSandbox for \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\"" Sep 9 05:31:47.817133 containerd[1581]: time="2025-09-09T05:31:47.817095617Z" level=info msg="Container to stop \"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:31:47.828599 systemd[1]: cri-containerd-9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926.scope: Deactivated successfully. 
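The StopContainer sequence above is the standard graceful stop: SIGTERM, a 30-second grace period ("timeout 30 (s)"), SIGKILL if the task outlives it, with the outcome surfaced as a TaskExit event (here exit_status:1). Against the public containerd Go client the same flow looks roughly like this sketch; the socket path and k8s.io namespace are the conventional CRI ones, and error handling is abbreviated.

```go
// Sketch of the graceful-stop sequence logged above, using the public
// containerd client. The container ID is the one from the log.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	c, err := client.LoadContainer(ctx, "fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}

	select {
	case status := <-exitCh: // normal case: recorded as the TaskExit event
		log.Printf("exited with status %d", status.ExitCode())
	case <-time.After(30 * time.Second): // grace period from "timeout 30 (s)"
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}
}
```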
Sep 9 05:31:47.831384 containerd[1581]: time="2025-09-09T05:31:47.831349082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" id:\"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" pid:4613 exit_status:137 exited_at:{seconds:1757395907 nanos:830949304}" Sep 9 05:31:47.867586 containerd[1581]: time="2025-09-09T05:31:47.866942568Z" level=info msg="shim disconnected" id=9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926 namespace=k8s.io Sep 9 05:31:47.867918 containerd[1581]: time="2025-09-09T05:31:47.867763004Z" level=warning msg="cleaning up after shim disconnected" id=9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926 namespace=k8s.io Sep 9 05:31:47.867918 containerd[1581]: time="2025-09-09T05:31:47.867781028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:31:47.869519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926-rootfs.mount: Deactivated successfully. Sep 9 05:31:47.915784 containerd[1581]: time="2025-09-09T05:31:47.915701758Z" level=info msg="received exit event sandbox_id:\"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" exit_status:137 exited_at:{seconds:1757395907 nanos:830949304}" Sep 9 05:31:47.935977 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926-shm.mount: Deactivated successfully. Sep 9 05:31:48.145350 systemd-networkd[1505]: cali69cb70c762b: Link DOWN Sep 9 05:31:48.145364 systemd-networkd[1505]: cali69cb70c762b: Lost carrier Sep 9 05:31:48.247835 kubelet[2782]: I0909 05:31:48.247771 2782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10d24f7-9257-4b62-a1a3-f25bb881f8eb" path="/var/lib/kubelet/pods/b10d24f7-9257-4b62-a1a3-f25bb881f8eb/volumes" Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.142 [INFO][5837] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.142 [INFO][5837] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" iface="eth0" netns="/var/run/netns/cni-54d885de-b68b-0760-50ba-5d51261627bc" Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.143 [INFO][5837] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" iface="eth0" netns="/var/run/netns/cni-54d885de-b68b-0760-50ba-5d51261627bc" Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.150 [INFO][5837] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" after=7.991279ms iface="eth0" netns="/var/run/netns/cni-54d885de-b68b-0760-50ba-5d51261627bc" Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.150 [INFO][5837] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.150 [INFO][5837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.195 [INFO][5849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.195 [INFO][5849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.195 [INFO][5849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.749 [INFO][5849] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.750 [INFO][5849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.751 [INFO][5849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:31:48.762424 containerd[1581]: 2025-09-09 05:31:48.754 [INFO][5837] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Sep 9 05:31:48.766365 containerd[1581]: time="2025-09-09T05:31:48.765839464Z" level=info msg="TearDown network for sandbox \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" successfully" Sep 9 05:31:48.766365 containerd[1581]: time="2025-09-09T05:31:48.765879081Z" level=info msg="StopPodSandbox for \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" returns successfully" Sep 9 05:31:48.769999 systemd[1]: run-netns-cni\x2d54d885de\x2db68b\x2d0760\x2d50ba\x2d5d51261627bc.mount: Deactivated successfully. 
Sep 9 05:31:48.882371 kubelet[2782]: I0909 05:31:48.880361 2782 scope.go:117] "RemoveContainer" containerID="fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112" Sep 9 05:31:48.882750 containerd[1581]: time="2025-09-09T05:31:48.882695867Z" level=info msg="RemoveContainer for \"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\"" Sep 9 05:31:48.937222 kubelet[2782]: I0909 05:31:48.937140 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x69wc\" (UniqueName: \"kubernetes.io/projected/7e14dcbb-eeec-4f50-b6c8-5afcd301051c-kube-api-access-x69wc\") pod \"7e14dcbb-eeec-4f50-b6c8-5afcd301051c\" (UID: \"7e14dcbb-eeec-4f50-b6c8-5afcd301051c\") " Sep 9 05:31:48.937222 kubelet[2782]: I0909 05:31:48.937209 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e14dcbb-eeec-4f50-b6c8-5afcd301051c-calico-apiserver-certs\") pod \"7e14dcbb-eeec-4f50-b6c8-5afcd301051c\" (UID: \"7e14dcbb-eeec-4f50-b6c8-5afcd301051c\") " Sep 9 05:31:49.144909 kubelet[2782]: I0909 05:31:49.144773 2782 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e14dcbb-eeec-4f50-b6c8-5afcd301051c-kube-api-access-x69wc" (OuterVolumeSpecName: "kube-api-access-x69wc") pod "7e14dcbb-eeec-4f50-b6c8-5afcd301051c" (UID: "7e14dcbb-eeec-4f50-b6c8-5afcd301051c"). InnerVolumeSpecName "kube-api-access-x69wc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:31:49.145226 kubelet[2782]: I0909 05:31:49.145158 2782 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e14dcbb-eeec-4f50-b6c8-5afcd301051c-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7e14dcbb-eeec-4f50-b6c8-5afcd301051c" (UID: "7e14dcbb-eeec-4f50-b6c8-5afcd301051c"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:31:49.146723 systemd[1]: var-lib-kubelet-pods-7e14dcbb\x2deeec\x2d4f50\x2db6c8\x2d5afcd301051c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx69wc.mount: Deactivated successfully. Sep 9 05:31:49.146847 systemd[1]: var-lib-kubelet-pods-7e14dcbb\x2deeec\x2d4f50\x2db6c8\x2d5afcd301051c-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
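The unmount pattern above runs in two phases: the reconciler tears each volume down (the operation_generator lines), and only once every volume under /var/lib/kubelet/pods/<uid>/volumes is gone can the pod directory itself be reclaimed, which is what the later "Cleaned up orphaned pod volumes dir" lines confirm. A simplified, inspect-only version of that emptiness check might look like the following; kubelet's real logic in kubelet_volumes.go covers many more cases.

```go
// Simplified check: a deleted pod's dir is reclaimable only when its
// volumes/ tree holds no remaining volume directories.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func podVolumesEmpty(kubeletDir, podUID string) (bool, error) {
	volDir := filepath.Join(kubeletDir, "pods", podUID, "volumes")
	pluginDirs, err := os.ReadDir(volDir)
	if os.IsNotExist(err) {
		return true, nil
	} else if err != nil {
		return false, err
	}
	for _, plugin := range pluginDirs { // e.g. kubernetes.io~secret
		entries, err := os.ReadDir(filepath.Join(volDir, plugin.Name()))
		if err != nil {
			return false, err
		}
		if len(entries) > 0 {
			return false, nil // a volume is still mounted or present
		}
	}
	return true, nil
}

func main() {
	ok, err := podVolumesEmpty("/var/lib/kubelet", "7e14dcbb-eeec-4f50-b6c8-5afcd301051c")
	fmt.Println(ok, err)
}
```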
Sep 9 05:31:49.239565 kubelet[2782]: I0909 05:31:49.239484 2782 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x69wc\" (UniqueName: \"kubernetes.io/projected/7e14dcbb-eeec-4f50-b6c8-5afcd301051c-kube-api-access-x69wc\") on node \"localhost\" DevicePath \"\"" Sep 9 05:31:49.239565 kubelet[2782]: I0909 05:31:49.239535 2782 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e14dcbb-eeec-4f50-b6c8-5afcd301051c-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Sep 9 05:31:49.670855 containerd[1581]: time="2025-09-09T05:31:49.670477033Z" level=info msg="RemoveContainer for \"fd51532d1a62b69eab5b66f7625826618c85d2ca20db63b6a6d237b1cb27e112\" returns successfully" Sep 9 05:31:49.680360 containerd[1581]: time="2025-09-09T05:31:49.679079540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 05:31:49.680617 containerd[1581]: time="2025-09-09T05:31:49.680576826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:49.685018 containerd[1581]: time="2025-09-09T05:31:49.684928881Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:49.688566 containerd[1581]: time="2025-09-09T05:31:49.687254336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:49.688566 containerd[1581]: time="2025-09-09T05:31:49.688343017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 8.094342645s" Sep 9 05:31:49.688566 containerd[1581]: time="2025-09-09T05:31:49.688385318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 05:31:49.692792 containerd[1581]: time="2025-09-09T05:31:49.692725151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 05:31:49.705021 containerd[1581]: time="2025-09-09T05:31:49.704896991Z" level=info msg="CreateContainer within sandbox \"da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 05:31:49.718787 containerd[1581]: time="2025-09-09T05:31:49.718588230Z" level=info msg="Container 68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:49.742628 containerd[1581]: time="2025-09-09T05:31:49.739867217Z" level=info msg="CreateContainer within sandbox \"da874fbf2ed5d59a69a97b08984f61c9336e683198b94b1f278e9859159b7ece\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50\"" Sep 9 05:31:49.742628 containerd[1581]: time="2025-09-09T05:31:49.742205516Z" level=info 
msg="StartContainer for \"68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50\"" Sep 9 05:31:49.744349 containerd[1581]: time="2025-09-09T05:31:49.744308794Z" level=info msg="connecting to shim 68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50" address="unix:///run/containerd/s/663bf43d9b7a22be6f4e77a468f7b0c8549d833a8a931a8225bd7e1b1a30a200" protocol=ttrpc version=3 Sep 9 05:31:49.787754 systemd[1]: Started cri-containerd-68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50.scope - libcontainer container 68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50. Sep 9 05:31:49.808689 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:59028.service - OpenSSH per-connection server daemon (10.0.0.1:59028). Sep 9 05:31:49.935912 systemd[1]: Removed slice kubepods-besteffort-pod7e14dcbb_eeec_4f50_b6c8_5afcd301051c.slice - libcontainer container kubepods-besteffort-pod7e14dcbb_eeec_4f50_b6c8_5afcd301051c.slice. Sep 9 05:31:49.936055 systemd[1]: kubepods-besteffort-pod7e14dcbb_eeec_4f50_b6c8_5afcd301051c.slice: Consumed 1.196s CPU time, 57.8M memory peak. Sep 9 05:31:49.939130 sshd[5892]: Accepted publickey for core from 10.0.0.1 port 59028 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:31:49.941811 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:31:49.956806 systemd-logind[1560]: New session 13 of user core. Sep 9 05:31:49.968341 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 05:31:50.241592 containerd[1581]: time="2025-09-09T05:31:50.241452986Z" level=info msg="StartContainer for \"68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50\" returns successfully" Sep 9 05:31:50.246931 kubelet[2782]: I0909 05:31:50.246899 2782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e14dcbb-eeec-4f50-b6c8-5afcd301051c" path="/var/lib/kubelet/pods/7e14dcbb-eeec-4f50-b6c8-5afcd301051c/volumes" Sep 9 05:31:50.305432 sshd[5916]: Connection closed by 10.0.0.1 port 59028 Sep 9 05:31:50.305862 sshd-session[5892]: pam_unix(sshd:session): session closed for user core Sep 9 05:31:50.319188 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:59028.service: Deactivated successfully. Sep 9 05:31:50.325206 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 05:31:50.326449 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Sep 9 05:31:50.333779 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:55984.service - OpenSSH per-connection server daemon (10.0.0.1:55984). Sep 9 05:31:50.334431 systemd-logind[1560]: Removed session 13. Sep 9 05:31:50.404869 sshd[5930]: Accepted publickey for core from 10.0.0.1 port 55984 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:31:50.407117 sshd-session[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:31:50.412829 systemd-logind[1560]: New session 14 of user core. Sep 9 05:31:50.419718 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 05:31:50.644840 sshd[5936]: Connection closed by 10.0.0.1 port 55984 Sep 9 05:31:50.644368 sshd-session[5930]: pam_unix(sshd:session): session closed for user core Sep 9 05:31:50.659508 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:55984.service: Deactivated successfully. Sep 9 05:31:50.662850 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 05:31:50.665487 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. 
Sep 9 05:31:50.670165 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:56000.service - OpenSSH per-connection server daemon (10.0.0.1:56000). Sep 9 05:31:50.671033 systemd-logind[1560]: Removed session 14. Sep 9 05:31:50.733194 sshd[5947]: Accepted publickey for core from 10.0.0.1 port 56000 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:31:50.739030 sshd-session[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:31:50.744497 systemd-logind[1560]: New session 15 of user core. Sep 9 05:31:50.754871 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 05:31:50.984898 sshd[5950]: Connection closed by 10.0.0.1 port 56000 Sep 9 05:31:50.987300 sshd-session[5947]: pam_unix(sshd:session): session closed for user core Sep 9 05:31:50.992259 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:56000.service: Deactivated successfully. Sep 9 05:31:50.995317 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 05:31:50.999463 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Sep 9 05:31:51.003421 systemd-logind[1560]: Removed session 15. Sep 9 05:31:51.003792 containerd[1581]: time="2025-09-09T05:31:51.003519457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50\" id:\"3861c7a978d2b4cf2fd809d2a3131fed203d7e66611328e6602488ca4b299d3f\" pid:5972 exited_at:{seconds:1757395911 nanos:1527314}" Sep 9 05:31:51.440104 kubelet[2782]: I0909 05:31:51.439625 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b6b778d54-mgs42" podStartSLOduration=53.412274551 podStartE2EDuration="1m13.439587633s" podCreationTimestamp="2025-09-09 05:30:38 +0000 UTC" firstStartedPulling="2025-09-09 05:31:29.66459484 +0000 UTC m=+67.782806929" lastFinishedPulling="2025-09-09 05:31:49.691907922 +0000 UTC m=+87.810120011" observedRunningTime="2025-09-09 05:31:51.439482521 +0000 UTC m=+89.557694610" watchObservedRunningTime="2025-09-09 05:31:51.439587633 +0000 UTC m=+89.557799722" Sep 9 05:31:53.244830 kubelet[2782]: E0909 05:31:53.244763 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:31:55.999678 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:56002.service - OpenSSH per-connection server daemon (10.0.0.1:56002). Sep 9 05:31:56.078266 sshd[5993]: Accepted publickey for core from 10.0.0.1 port 56002 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:31:56.080443 sshd-session[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:31:56.085785 systemd-logind[1560]: New session 16 of user core. Sep 9 05:31:56.096795 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 05:31:56.359686 sshd[5996]: Connection closed by 10.0.0.1 port 56002 Sep 9 05:31:56.360209 sshd-session[5993]: pam_unix(sshd:session): session closed for user core Sep 9 05:31:56.365729 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:56002.service: Deactivated successfully. Sep 9 05:31:56.368628 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 05:31:56.370884 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Sep 9 05:31:56.373274 systemd-logind[1560]: Removed session 16. 
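The startup-latency lines above encode a simple relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Recomputing from the goldmane entry's timestamps reproduces both figures to within rounding:

```go
// Recompute the goldmane pod's startup durations from the logged timestamps.
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-09 05:30:37 +0000 UTC")
	running := mustParse("2025-09-09 05:32:01.534134752 +0000 UTC")
	pullStart := mustParse("2025-09-09 05:31:29.76793296 +0000 UTC")
	pullEnd := mustParse("2025-09-09 05:31:59.417313685 +0000 UTC")

	e2e := running.Sub(created)         // 1m24.534134752s, as logged
	slo := e2e - pullEnd.Sub(pullStart) // ≈ 54.884754s: pull time excluded
	fmt.Println(e2e, slo)
}
```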
Sep 9 05:31:57.274755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947647888.mount: Deactivated successfully. Sep 9 05:31:59.409490 containerd[1581]: time="2025-09-09T05:31:59.409383073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:59.410609 containerd[1581]: time="2025-09-09T05:31:59.410581627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 05:31:59.412362 containerd[1581]: time="2025-09-09T05:31:59.412326868Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:59.415569 containerd[1581]: time="2025-09-09T05:31:59.415497999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:31:59.416202 containerd[1581]: time="2025-09-09T05:31:59.416173242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 9.723411401s" Sep 9 05:31:59.416261 containerd[1581]: time="2025-09-09T05:31:59.416207297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 05:31:59.417560 containerd[1581]: time="2025-09-09T05:31:59.417486857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 05:31:59.420138 containerd[1581]: time="2025-09-09T05:31:59.420047198Z" level=info msg="CreateContainer within sandbox \"542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 05:31:59.431635 containerd[1581]: time="2025-09-09T05:31:59.431562694Z" level=info msg="Container 054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:31:59.446591 containerd[1581]: time="2025-09-09T05:31:59.446376995Z" level=info msg="CreateContainer within sandbox \"542c5f2f1687a0cda7d34ba05e37917876c52c43dc78cd052a31cc249378d854\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e\"" Sep 9 05:31:59.448571 containerd[1581]: time="2025-09-09T05:31:59.447618051Z" level=info msg="StartContainer for \"054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e\"" Sep 9 05:31:59.449397 containerd[1581]: time="2025-09-09T05:31:59.449326102Z" level=info msg="connecting to shim 054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e" address="unix:///run/containerd/s/d69745116b58b8d5e1662daa46a128ddbff9a17643d45571d444676b5c37a8fb" protocol=ttrpc version=3 Sep 9 05:31:59.484010 systemd[1]: Started cri-containerd-054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e.scope - libcontainer container 054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e. 
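The "connecting to shim ... protocol=ttrpc version=3" line is containerd dialing the per-task shim over a unix socket and speaking ttrpc rather than gRPC. A bare-bones dial looks like the sketch below; it issues no RPC, since the shim's service definitions are out of scope here, and reuses the socket path from the log (which only exists while that shim is alive).

```go
// Dial a containerd task shim socket and set up a ttrpc client.
package main

import (
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	conn, err := net.Dial("unix", "/run/containerd/s/d69745116b58b8d5e1662daa46a128ddbff9a17643d45571d444676b5c37a8fb")
	if err != nil {
		log.Fatal(err)
	}
	client := ttrpc.NewClient(conn) // ttrpc multiplexes RPCs over this one connection
	defer client.Close()
	log.Println("connected to task shim over ttrpc")
}
```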
Sep 9 05:31:59.569698 containerd[1581]: time="2025-09-09T05:31:59.569628043Z" level=info msg="StartContainer for \"054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e\" returns successfully" Sep 9 05:32:00.087508 containerd[1581]: time="2025-09-09T05:32:00.087449398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e\" id:\"cf6c715cd6efc16b06dbef7a6298568665c3d87a77ae6e62b2d1ce178cb9ee07\" pid:6074 exit_status:1 exited_at:{seconds:1757395920 nanos:86924654}" Sep 9 05:32:01.117138 containerd[1581]: time="2025-09-09T05:32:01.116950662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e\" id:\"6719cc5ff00e8a024784cc12bf1829ced8ebb668bd118c8674a86bb58439c8e7\" pid:6099 exited_at:{seconds:1757395921 nanos:116601675}" Sep 9 05:32:01.378492 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:42826.service - OpenSSH per-connection server daemon (10.0.0.1:42826). Sep 9 05:32:01.534245 kubelet[2782]: I0909 05:32:01.534159 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-7rchx" podStartSLOduration=54.884754038 podStartE2EDuration="1m24.534134752s" podCreationTimestamp="2025-09-09 05:30:37 +0000 UTC" firstStartedPulling="2025-09-09 05:31:29.76793296 +0000 UTC m=+67.886145050" lastFinishedPulling="2025-09-09 05:31:59.417313685 +0000 UTC m=+97.535525764" observedRunningTime="2025-09-09 05:32:00.205274477 +0000 UTC m=+98.323486566" watchObservedRunningTime="2025-09-09 05:32:01.534134752 +0000 UTC m=+99.652346841" Sep 9 05:32:01.847212 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 42826 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:32:01.849681 sshd-session[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:32:01.855954 systemd-logind[1560]: New session 17 of user core. Sep 9 05:32:01.864759 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 05:32:02.997978 sshd[6115]: Connection closed by 10.0.0.1 port 42826 Sep 9 05:32:02.996959 sshd-session[6112]: pam_unix(sshd:session): session closed for user core Sep 9 05:32:03.003361 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:42826.service: Deactivated successfully. Sep 9 05:32:03.006622 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 05:32:03.007757 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Sep 9 05:32:03.009447 systemd-logind[1560]: Removed session 17. Sep 9 05:32:07.022572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094670647.mount: Deactivated successfully. Sep 9 05:32:08.010520 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:42842.service - OpenSSH per-connection server daemon (10.0.0.1:42842). Sep 9 05:32:08.083271 sshd[6135]: Accepted publickey for core from 10.0.0.1 port 42842 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:32:08.091692 sshd-session[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:32:08.100383 systemd-logind[1560]: New session 18 of user core. Sep 9 05:32:08.103708 systemd[1]: Started session-18.scope - Session 18 of User core. 
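The TaskExit events carry exited_at as a protobuf-style {seconds, nanos} pair. Converting one back to wall-clock time is a one-liner; the values below come from the first event above and land at 05:32:00.086924654 UTC, just ahead of the 05:32:00.087449398Z timestamp on the log line that reports it.

```go
// Convert a protobuf-style {seconds, nanos} exited_at to wall-clock time.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1757395920, 86924654).UTC()
	fmt.Println(exitedAt) // 2025-09-09 05:32:00.086924654 +0000 UTC
}
```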
Sep 9 05:32:08.301581 containerd[1581]: time="2025-09-09T05:32:08.301403058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 05:32:08.304529 containerd[1581]: time="2025-09-09T05:32:08.304454337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:32:08.318308 containerd[1581]: time="2025-09-09T05:32:08.318233476Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:32:08.320234 containerd[1581]: time="2025-09-09T05:32:08.320153844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:32:08.322130 containerd[1581]: time="2025-09-09T05:32:08.322073912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 8.904535575s" Sep 9 05:32:08.322236 containerd[1581]: time="2025-09-09T05:32:08.322124689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 05:32:08.326928 containerd[1581]: time="2025-09-09T05:32:08.326859413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 05:32:08.334603 containerd[1581]: time="2025-09-09T05:32:08.333324441Z" level=info msg="CreateContainer within sandbox \"3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 05:32:08.394049 containerd[1581]: time="2025-09-09T05:32:08.393965291Z" level=info msg="Container e405f725d5eeac0c1726a247b1991abaf9ee3061b6b2eaaf65c70e9666ffa8c9: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:32:08.409565 containerd[1581]: time="2025-09-09T05:32:08.409459907Z" level=info msg="CreateContainer within sandbox \"3bd314ac9ab4c9e682b938a2a66335f9788629bd7c1df1711ee80252294d56c0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e405f725d5eeac0c1726a247b1991abaf9ee3061b6b2eaaf65c70e9666ffa8c9\"" Sep 9 05:32:08.411529 containerd[1581]: time="2025-09-09T05:32:08.411490175Z" level=info msg="StartContainer for \"e405f725d5eeac0c1726a247b1991abaf9ee3061b6b2eaaf65c70e9666ffa8c9\"" Sep 9 05:32:08.414108 containerd[1581]: time="2025-09-09T05:32:08.414067148Z" level=info msg="connecting to shim e405f725d5eeac0c1726a247b1991abaf9ee3061b6b2eaaf65c70e9666ffa8c9" address="unix:///run/containerd/s/228792618c04fa57f0e5cd9fc0cf7c21d1e17ce473d40b0fecbf6eacf42d07ea" protocol=ttrpc version=3 Sep 9 05:32:08.446151 systemd[1]: Started cri-containerd-e405f725d5eeac0c1726a247b1991abaf9ee3061b6b2eaaf65c70e9666ffa8c9.scope - libcontainer container e405f725d5eeac0c1726a247b1991abaf9ee3061b6b2eaaf65c70e9666ffa8c9. 
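A quick sanity check on the whisker-backend pull above: the reported 33,085,545 bytes read over 8.904535575 s works out to roughly 3.7 MB/s of effective registry throughput for this image.

```go
// Effective pull throughput from the two figures in the log above.
package main

import "fmt"

func main() {
	bytesRead := 33085545.0
	seconds := 8.904535575
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // ~3.7 MB/s
}
```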
Sep 9 05:32:08.733710 sshd[6138]: Connection closed by 10.0.0.1 port 42842 Sep 9 05:32:08.734045 sshd-session[6135]: pam_unix(sshd:session): session closed for user core Sep 9 05:32:08.741772 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:42842.service: Deactivated successfully. Sep 9 05:32:08.747669 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 05:32:08.748472 containerd[1581]: time="2025-09-09T05:32:08.748400366Z" level=info msg="StartContainer for \"e405f725d5eeac0c1726a247b1991abaf9ee3061b6b2eaaf65c70e9666ffa8c9\" returns successfully" Sep 9 05:32:08.749164 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Sep 9 05:32:08.752479 systemd-logind[1560]: Removed session 18. Sep 9 05:32:11.794950 containerd[1581]: time="2025-09-09T05:32:11.794871696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132\" id:\"a54c9963707db1e6a2e6446f8ee86c54a11c30370b3da4037e8dea7dd88c6b79\" pid:6205 exited_at:{seconds:1757395931 nanos:768830540}" Sep 9 05:32:13.065465 containerd[1581]: time="2025-09-09T05:32:13.065372249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:32:13.076003 containerd[1581]: time="2025-09-09T05:32:13.075932377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 05:32:13.092092 containerd[1581]: time="2025-09-09T05:32:13.092007507Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:32:13.097032 containerd[1581]: time="2025-09-09T05:32:13.096963693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:32:13.098122 containerd[1581]: time="2025-09-09T05:32:13.098075425Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 4.771154926s" Sep 9 05:32:13.098122 containerd[1581]: time="2025-09-09T05:32:13.098118356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 05:32:13.102338 containerd[1581]: time="2025-09-09T05:32:13.102255770Z" level=info msg="CreateContainer within sandbox \"ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 05:32:13.145427 containerd[1581]: time="2025-09-09T05:32:13.145247475Z" level=info msg="Container 7259cac1dd63613abc7830844c4304c2a37b21bf6a79ea7701caba8db51ce809: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:32:13.496629 containerd[1581]: time="2025-09-09T05:32:13.496413827Z" level=info msg="CreateContainer within sandbox \"ae2165a07a569c25d3358b1ba53be558cc113ccc5de866e2b31cb2ee4214d464\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} 
returns container id \"7259cac1dd63613abc7830844c4304c2a37b21bf6a79ea7701caba8db51ce809\"" Sep 9 05:32:13.497317 containerd[1581]: time="2025-09-09T05:32:13.497263328Z" level=info msg="StartContainer for \"7259cac1dd63613abc7830844c4304c2a37b21bf6a79ea7701caba8db51ce809\"" Sep 9 05:32:13.499194 containerd[1581]: time="2025-09-09T05:32:13.499143736Z" level=info msg="connecting to shim 7259cac1dd63613abc7830844c4304c2a37b21bf6a79ea7701caba8db51ce809" address="unix:///run/containerd/s/55fc07d27bce5be986fa656577af26aa32b63a291133959f9b84e22dc385e6be" protocol=ttrpc version=3 Sep 9 05:32:13.529798 systemd[1]: Started cri-containerd-7259cac1dd63613abc7830844c4304c2a37b21bf6a79ea7701caba8db51ce809.scope - libcontainer container 7259cac1dd63613abc7830844c4304c2a37b21bf6a79ea7701caba8db51ce809. Sep 9 05:32:13.753710 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:53088.service - OpenSSH per-connection server daemon (10.0.0.1:53088). Sep 9 05:32:13.761696 containerd[1581]: time="2025-09-09T05:32:13.761609382Z" level=info msg="StartContainer for \"7259cac1dd63613abc7830844c4304c2a37b21bf6a79ea7701caba8db51ce809\" returns successfully" Sep 9 05:32:13.836257 sshd[6254]: Accepted publickey for core from 10.0.0.1 port 53088 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:32:13.838637 sshd-session[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:32:13.844872 systemd-logind[1560]: New session 19 of user core. Sep 9 05:32:13.860729 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 05:32:14.245678 kubelet[2782]: E0909 05:32:14.245536 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 05:32:14.394068 kubelet[2782]: I0909 05:32:14.393917 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nh7hc" podStartSLOduration=52.39997906 podStartE2EDuration="1m36.393897794s" podCreationTimestamp="2025-09-09 05:30:38 +0000 UTC" firstStartedPulling="2025-09-09 05:31:29.10538332 +0000 UTC m=+67.223595409" lastFinishedPulling="2025-09-09 05:32:13.099302054 +0000 UTC m=+111.217514143" observedRunningTime="2025-09-09 05:32:14.393760182 +0000 UTC m=+112.511972291" watchObservedRunningTime="2025-09-09 05:32:14.393897794 +0000 UTC m=+112.512109883" Sep 9 05:32:14.394553 kubelet[2782]: I0909 05:32:14.394497 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b87c89b59-7zbmk" podStartSLOduration=9.826416798 podStartE2EDuration="1m2.394487278s" podCreationTimestamp="2025-09-09 05:31:12 +0000 UTC" firstStartedPulling="2025-09-09 05:31:15.757042808 +0000 UTC m=+53.875254897" lastFinishedPulling="2025-09-09 05:32:08.325113288 +0000 UTC m=+106.443325377" observedRunningTime="2025-09-09 05:32:09.193006108 +0000 UTC m=+107.311218188" watchObservedRunningTime="2025-09-09 05:32:14.394487278 +0000 UTC m=+112.512699367" Sep 9 05:32:14.403951 kubelet[2782]: I0909 05:32:14.403875 2782 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 05:32:14.404117 kubelet[2782]: I0909 05:32:14.403966 2782 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 05:32:14.405779 sshd[6260]: Connection closed by 10.0.0.1 port 
53088 Sep 9 05:32:14.406283 sshd-session[6254]: pam_unix(sshd:session): session closed for user core Sep 9 05:32:14.410612 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:53088.service: Deactivated successfully. Sep 9 05:32:14.412598 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 05:32:14.413278 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Sep 9 05:32:14.414749 systemd-logind[1560]: Removed session 19. Sep 9 05:32:19.419730 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:53102.service - OpenSSH per-connection server daemon (10.0.0.1:53102). Sep 9 05:32:19.503381 sshd[6275]: Accepted publickey for core from 10.0.0.1 port 53102 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:32:19.505714 sshd-session[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:32:19.511010 systemd-logind[1560]: New session 20 of user core. Sep 9 05:32:19.518791 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 05:32:19.742641 sshd[6278]: Connection closed by 10.0.0.1 port 53102 Sep 9 05:32:19.742939 sshd-session[6275]: pam_unix(sshd:session): session closed for user core Sep 9 05:32:19.748184 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:53102.service: Deactivated successfully. Sep 9 05:32:19.750522 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 05:32:19.751530 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Sep 9 05:32:19.753398 systemd-logind[1560]: Removed session 20. Sep 9 05:32:21.052392 containerd[1581]: time="2025-09-09T05:32:21.052338745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50\" id:\"3f8e019ae12267e95fd46ef3fe3196e17b6765c54c51ce0ce0d1090e1d497134\" pid:6302 exited_at:{seconds:1757395941 nanos:51782656}" Sep 9 05:32:22.238285 kubelet[2782]: I0909 05:32:22.238230 2782 scope.go:117] "RemoveContainer" containerID="9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3" Sep 9 05:32:22.240794 containerd[1581]: time="2025-09-09T05:32:22.240747728Z" level=info msg="RemoveContainer for \"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\"" Sep 9 05:32:22.319047 containerd[1581]: time="2025-09-09T05:32:22.318872782Z" level=info msg="RemoveContainer for \"9d839509d5639b15fd07f14797096f2f54b07fcad12c4747a6b17ef1a76914c3\" returns successfully" Sep 9 05:32:22.320641 containerd[1581]: time="2025-09-09T05:32:22.320611313Z" level=info msg="StopPodSandbox for \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\"" Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.479 [WARNING][6325] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.481 [INFO][6325] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.481 [INFO][6325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" iface="eth0" netns="" Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.481 [INFO][6325] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.481 [INFO][6325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.545 [INFO][6334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.546 [INFO][6334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.546 [INFO][6334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.556 [WARNING][6334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.556 [INFO][6334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.560 [INFO][6334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:32:22.569129 containerd[1581]: 2025-09-09 05:32:22.565 [INFO][6325] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:32:22.579999 containerd[1581]: time="2025-09-09T05:32:22.579891468Z" level=info msg="TearDown network for sandbox \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" successfully" Sep 9 05:32:22.579999 containerd[1581]: time="2025-09-09T05:32:22.579968886Z" level=info msg="StopPodSandbox for \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" returns successfully" Sep 9 05:32:22.594276 containerd[1581]: time="2025-09-09T05:32:22.594128542Z" level=info msg="RemovePodSandbox for \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\"" Sep 9 05:32:22.605255 containerd[1581]: time="2025-09-09T05:32:22.605152135Z" level=info msg="Forcibly stopping sandbox \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\"" Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.729 [WARNING][6350] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.729 [INFO][6350] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.729 [INFO][6350] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" iface="eth0" netns="" Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.729 [INFO][6350] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.729 [INFO][6350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.756 [INFO][6359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.757 [INFO][6359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.757 [INFO][6359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.792 [WARNING][6359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.792 [INFO][6359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" HandleID="k8s-pod-network.58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--f2pkg-eth0" Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.795 [INFO][6359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:32:22.805219 containerd[1581]: 2025-09-09 05:32:22.799 [INFO][6350] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f" Sep 9 05:32:22.806014 containerd[1581]: time="2025-09-09T05:32:22.805286550Z" level=info msg="TearDown network for sandbox \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" successfully" Sep 9 05:32:22.814416 containerd[1581]: time="2025-09-09T05:32:22.814325975Z" level=info msg="Ensure that sandbox 58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f in task-service has been cleanup successfully" Sep 9 05:32:23.225224 containerd[1581]: time="2025-09-09T05:32:23.225138494Z" level=info msg="RemovePodSandbox \"58ed941a7c4857c13c8f36fe018d205fe01b3dd01a221d9c2e695119b90aed8f\" returns successfully" Sep 9 05:32:23.227775 containerd[1581]: time="2025-09-09T05:32:23.227534206Z" level=info msg="StopPodSandbox for \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\"" Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.286 [WARNING][6377] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.286 [INFO][6377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.286 [INFO][6377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" iface="eth0" netns="" Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.286 [INFO][6377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.286 [INFO][6377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.346 [INFO][6385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0" Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.347 [INFO][6385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.347 [INFO][6385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.420 [WARNING][6385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0"
Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.420 [INFO][6385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0"
Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.469 [INFO][6385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 05:32:23.475115 containerd[1581]: 2025-09-09 05:32:23.472 [INFO][6377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926"
Sep 9 05:32:23.476009 containerd[1581]: time="2025-09-09T05:32:23.475237494Z" level=info msg="TearDown network for sandbox \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" successfully"
Sep 9 05:32:23.476009 containerd[1581]: time="2025-09-09T05:32:23.475271559Z" level=info msg="StopPodSandbox for \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" returns successfully"
Sep 9 05:32:23.476009 containerd[1581]: time="2025-09-09T05:32:23.475901418Z" level=info msg="RemovePodSandbox for \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\""
Sep 9 05:32:23.476009 containerd[1581]: time="2025-09-09T05:32:23.475946194Z" level=info msg="Forcibly stopping sandbox \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\""
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.514 [WARNING][6404] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0"
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.514 [INFO][6404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926"
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.514 [INFO][6404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" iface="eth0" netns=""
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.514 [INFO][6404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926"
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.514 [INFO][6404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926"
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.537 [INFO][6413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0"
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.537 [INFO][6413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.537 [INFO][6413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.566 [WARNING][6413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0"
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.566 [INFO][6413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" HandleID="k8s-pod-network.9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926" Workload="localhost-k8s-calico--apiserver--54d4f99bb7--lgdcj-eth0"
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.591 [INFO][6413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 05:32:23.596578 containerd[1581]: 2025-09-09 05:32:23.593 [INFO][6404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926"
Sep 9 05:32:23.647286 containerd[1581]: time="2025-09-09T05:32:23.596646207Z" level=info msg="TearDown network for sandbox \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" successfully"
Sep 9 05:32:23.649175 containerd[1581]: time="2025-09-09T05:32:23.649119601Z" level=info msg="Ensure that sandbox 9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926 in task-service has been cleanup successfully"
Sep 9 05:32:23.678941 containerd[1581]: time="2025-09-09T05:32:23.678836775Z" level=info msg="RemovePodSandbox \"9c8dc9d87a9cf6ad581ed0f00e3264e341f5ed685496506e0d52065cf63b3926\" returns successfully"
Sep 9 05:32:24.760071 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:48378.service - OpenSSH per-connection server daemon (10.0.0.1:48378).
Sep 9 05:32:24.844217 sshd[6422]: Accepted publickey for core from 10.0.0.1 port 48378 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:32:24.846771 sshd-session[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:32:24.854354 systemd-logind[1560]: New session 21 of user core.
Sep 9 05:32:24.863732 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 05:32:25.080332 sshd[6426]: Connection closed by 10.0.0.1 port 48378
Sep 9 05:32:25.080742 sshd-session[6422]: pam_unix(sshd:session): session closed for user core
Sep 9 05:32:25.092296 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:48378.service: Deactivated successfully.
Sep 9 05:32:25.094696 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 05:32:25.095575 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit.
Sep 9 05:32:25.099008 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:48384.service - OpenSSH per-connection server daemon (10.0.0.1:48384).
Sep 9 05:32:25.100172 systemd-logind[1560]: Removed session 21.
Sep 9 05:32:25.167131 sshd[6440]: Accepted publickey for core from 10.0.0.1 port 48384 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:32:25.169342 sshd-session[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:32:25.174646 systemd-logind[1560]: New session 22 of user core.
Sep 9 05:32:25.184908 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 05:32:25.560903 sshd[6443]: Connection closed by 10.0.0.1 port 48384
Sep 9 05:32:25.561579 sshd-session[6440]: pam_unix(sshd:session): session closed for user core
Sep 9 05:32:25.573391 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:48384.service: Deactivated successfully.
Sep 9 05:32:25.576129 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 05:32:25.577274 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit.
Sep 9 05:32:25.581646 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:48392.service - OpenSSH per-connection server daemon (10.0.0.1:48392).
Sep 9 05:32:25.583283 systemd-logind[1560]: Removed session 22.
Sep 9 05:32:25.663946 sshd[6457]: Accepted publickey for core from 10.0.0.1 port 48392 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:32:25.666478 sshd-session[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:32:25.673831 systemd-logind[1560]: New session 23 of user core.
Sep 9 05:32:25.689835 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 05:32:26.576128 sshd[6460]: Connection closed by 10.0.0.1 port 48392
Sep 9 05:32:26.576782 sshd-session[6457]: pam_unix(sshd:session): session closed for user core
Sep 9 05:32:26.588528 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:48392.service: Deactivated successfully.
Sep 9 05:32:26.591226 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 05:32:26.592691 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit.
Sep 9 05:32:26.596479 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:48396.service - OpenSSH per-connection server daemon (10.0.0.1:48396).
Sep 9 05:32:26.598327 systemd-logind[1560]: Removed session 23.
Sep 9 05:32:26.682308 sshd[6489]: Accepted publickey for core from 10.0.0.1 port 48396 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:32:26.684700 sshd-session[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:32:26.692783 systemd-logind[1560]: New session 24 of user core.
Sep 9 05:32:26.709948 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 05:32:27.126249 sshd[6492]: Connection closed by 10.0.0.1 port 48396
Sep 9 05:32:27.128766 sshd-session[6489]: pam_unix(sshd:session): session closed for user core
Sep 9 05:32:27.138923 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:48396.service: Deactivated successfully.
Sep 9 05:32:27.141806 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 05:32:27.142852 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit.
Sep 9 05:32:27.149351 systemd[1]: Started sshd@25-10.0.0.34:22-10.0.0.1:48412.service - OpenSSH per-connection server daemon (10.0.0.1:48412).
Sep 9 05:32:27.150490 systemd-logind[1560]: Removed session 24.
Sep 9 05:32:27.218713 sshd[6504]: Accepted publickey for core from 10.0.0.1 port 48412 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:32:27.221005 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:32:27.227274 systemd-logind[1560]: New session 25 of user core.
Sep 9 05:32:27.236899 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 05:32:27.380047 sshd[6507]: Connection closed by 10.0.0.1 port 48412
Sep 9 05:32:27.380607 sshd-session[6504]: pam_unix(sshd:session): session closed for user core
Sep 9 05:32:27.387574 systemd[1]: sshd@25-10.0.0.34:22-10.0.0.1:48412.service: Deactivated successfully.
Sep 9 05:32:27.390391 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 05:32:27.391616 systemd-logind[1560]: Session 25 logged out. Waiting for processes to exit.
Sep 9 05:32:27.393996 systemd-logind[1560]: Removed session 25.
Sep 9 05:32:31.132010 containerd[1581]: time="2025-09-09T05:32:31.131938220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e\" id:\"780706ff688804d4da0584177c4f43ba6f2da772750c53ac5414eda1e3a5e0e7\" pid:6535 exited_at:{seconds:1757395951 nanos:131435434}"
Sep 9 05:32:32.248254 kubelet[2782]: E0909 05:32:32.248183 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:32:32.399769 systemd[1]: Started sshd@26-10.0.0.34:22-10.0.0.1:52430.service - OpenSSH per-connection server daemon (10.0.0.1:52430).
Sep 9 05:32:32.466462 sshd[6548]: Accepted publickey for core from 10.0.0.1 port 52430 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:32:32.468583 sshd-session[6548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:32:32.473866 systemd-logind[1560]: New session 26 of user core.
Sep 9 05:32:32.483715 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 05:32:32.612097 sshd[6551]: Connection closed by 10.0.0.1 port 52430
Sep 9 05:32:32.612603 sshd-session[6548]: pam_unix(sshd:session): session closed for user core
Sep 9 05:32:32.617453 systemd[1]: sshd@26-10.0.0.34:22-10.0.0.1:52430.service: Deactivated successfully.
Sep 9 05:32:32.620140 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 05:32:32.621415 systemd-logind[1560]: Session 26 logged out. Waiting for processes to exit.
Sep 9 05:32:32.623663 systemd-logind[1560]: Removed session 26.
Sep 9 05:32:37.634492 systemd[1]: Started sshd@27-10.0.0.34:22-10.0.0.1:52444.service - OpenSSH per-connection server daemon (10.0.0.1:52444).
Sep 9 05:32:37.762988 sshd[6573]: Accepted publickey for core from 10.0.0.1 port 52444 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:32:37.766170 sshd-session[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:32:37.776219 systemd-logind[1560]: New session 27 of user core.
Sep 9 05:32:37.785217 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 05:32:38.673202 sshd[6576]: Connection closed by 10.0.0.1 port 52444
Sep 9 05:32:38.673622 sshd-session[6573]: pam_unix(sshd:session): session closed for user core
Sep 9 05:32:38.682827 systemd[1]: sshd@27-10.0.0.34:22-10.0.0.1:52444.service: Deactivated successfully.
Sep 9 05:32:38.685301 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 05:32:38.686287 systemd-logind[1560]: Session 27 logged out. Waiting for processes to exit.
Sep 9 05:32:38.688158 systemd-logind[1560]: Removed session 27.
Sep 9 05:32:41.315989 containerd[1581]: time="2025-09-09T05:32:41.315902333Z" level=info msg="TaskExit event in podsandbox handler container_id:\"054d5b224c7bfc75354cd80c8113dbf467500fc5acab33741ca6da6e0dcf059e\" id:\"b10f7c366fdb660fbc16ef3af91f2297e42e759aca7c28a704e648ec73593012\" pid:6604 exited_at:{seconds:1757395961 nanos:314839314}"
Sep 9 05:32:41.763962 containerd[1581]: time="2025-09-09T05:32:41.763873174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5af5e12e302e7305ae01321feb843e64eb2b311ce1b3b5e2b6b3e4a860db4132\" id:\"ad52242950d2a21e962c3485d044d875942b29e576208fe036df970d0b91535b\" pid:6628 exited_at:{seconds:1757395961 nanos:763420695}"
Sep 9 05:32:43.690099 systemd[1]: Started sshd@28-10.0.0.34:22-10.0.0.1:60524.service - OpenSSH per-connection server daemon (10.0.0.1:60524).
Sep 9 05:32:43.771766 sshd[6642]: Accepted publickey for core from 10.0.0.1 port 60524 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:32:43.773694 sshd-session[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:32:43.782429 systemd-logind[1560]: New session 28 of user core.
Sep 9 05:32:43.791180 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 9 05:32:44.111312 sshd[6646]: Connection closed by 10.0.0.1 port 60524
Sep 9 05:32:44.112152 sshd-session[6642]: pam_unix(sshd:session): session closed for user core
Sep 9 05:32:44.118343 systemd[1]: sshd@28-10.0.0.34:22-10.0.0.1:60524.service: Deactivated successfully.
Sep 9 05:32:44.122158 systemd[1]: session-28.scope: Deactivated successfully.
Sep 9 05:32:44.124422 systemd-logind[1560]: Session 28 logged out. Waiting for processes to exit.
Sep 9 05:32:44.127431 systemd-logind[1560]: Removed session 28.
Sep 9 05:32:47.245470 kubelet[2782]: E0909 05:32:47.245397 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:32:49.132143 systemd[1]: Started sshd@29-10.0.0.34:22-10.0.0.1:60528.service - OpenSSH per-connection server daemon (10.0.0.1:60528).
Sep 9 05:32:49.199826 sshd[6677]: Accepted publickey for core from 10.0.0.1 port 60528 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:32:49.203389 sshd-session[6677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:32:49.209684 systemd-logind[1560]: New session 29 of user core.
Sep 9 05:32:49.217861 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 9 05:32:49.391747 sshd[6683]: Connection closed by 10.0.0.1 port 60528
Sep 9 05:32:49.392380 sshd-session[6677]: pam_unix(sshd:session): session closed for user core
Sep 9 05:32:49.399622 systemd[1]: sshd@29-10.0.0.34:22-10.0.0.1:60528.service: Deactivated successfully.
Sep 9 05:32:49.402528 systemd[1]: session-29.scope: Deactivated successfully.
Sep 9 05:32:49.405089 systemd-logind[1560]: Session 29 logged out. Waiting for processes to exit.
Sep 9 05:32:49.406890 systemd-logind[1560]: Removed session 29.
Sep 9 05:32:49.436280 containerd[1581]: time="2025-09-09T05:32:49.436210659Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50\" id:\"2ef10478e2df957742e157285dfbf3fd3417c632e79039129aed830f59e3b878\" pid:6708 exited_at:{seconds:1757395969 nanos:435798367}"
Sep 9 05:32:50.247572 kubelet[2782]: E0909 05:32:50.247247 2782 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 05:32:50.995039 containerd[1581]: time="2025-09-09T05:32:50.994986547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68e40676192fa04606f5b45e518faf935d40c5255d2079ff75ebb63467725d50\" id:\"0ad299b575b888b97be62d472abe534a4f21aaddbc89b9fc793c9c02796386fa\" pid:6732 exited_at:{seconds:1757395970 nanos:994740411}"